INFO: task kworker/u4:13:3800 blocked for more than 143 seconds.
      Not tainted 6.0.0-rc6-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/u4:13   state:D stack:26752 pid: 3800 ppid:     2 flags:0x00004000
Workqueue: events_unbound fsnotify_connector_destroy_workfn
Call Trace:
 context_switch kernel/sched/core.c:5182 [inline]
 __schedule+0xadf/0x52b0 kernel/sched/core.c:6494
 schedule+0xda/0x1b0 kernel/sched/core.c:6570
 schedule_timeout+0x1db/0x2a0 kernel/time/timer.c:1911
 do_wait_for_common kernel/sched/completion.c:85 [inline]
 __wait_for_common+0x1be/0x530 kernel/sched/completion.c:106
 __synchronize_srcu+0x1f2/0x290 kernel/rcu/srcutree.c:1215
 fsnotify_connector_destroy_workfn+0x49/0xa0 fs/notify/mark.c:208
 process_one_work+0x991/0x1610 kernel/workqueue.c:2289
 worker_thread+0x665/0x1080 kernel/workqueue.c:2436
 kthread+0x2e4/0x3a0 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
INFO: task kworker/u4:14:10542 blocked for more than 143 seconds.
      Not tainted 6.0.0-rc6-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/u4:14   state:D stack:26752 pid:10542 ppid:     2 flags:0x00004000
Workqueue: events_unbound fsnotify_mark_destroy_workfn
Call Trace:
 context_switch kernel/sched/core.c:5182 [inline]
 __schedule+0xadf/0x52b0 kernel/sched/core.c:6494
 schedule+0xda/0x1b0 kernel/sched/core.c:6570
 schedule_timeout+0x1db/0x2a0 kernel/time/timer.c:1911
 do_wait_for_common kernel/sched/completion.c:85 [inline]
 __wait_for_common+0x1be/0x530 kernel/sched/completion.c:106
 __synchronize_srcu+0x1f2/0x290 kernel/rcu/srcutree.c:1215
 fsnotify_mark_destroy_workfn+0xfd/0x3c0 fs/notify/mark.c:898
 process_one_work+0x991/0x1610 kernel/workqueue.c:2289
 worker_thread+0x665/0x1080 kernel/workqueue.c:2436
 kthread+0x2e4/0x3a0 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
INFO: task syz-executor.3:13855 blocked for more than 143 seconds.
      Not tainted 6.0.0-rc6-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.3  state:D stack:27808 pid:13855 ppid:  3705 flags:0x00004004
Call Trace:
 context_switch kernel/sched/core.c:5182 [inline]
 __schedule+0xadf/0x52b0 kernel/sched/core.c:6494
 schedule+0xda/0x1b0 kernel/sched/core.c:6570
 schedule_timeout+0x1db/0x2a0 kernel/time/timer.c:1911
 do_wait_for_common kernel/sched/completion.c:85 [inline]
 __wait_for_common+0x1be/0x530 kernel/sched/completion.c:106
 __flush_work+0x56c/0xb10 kernel/workqueue.c:3075
 p9_mux_poll_stop net/9p/trans_fd.c:175 [inline]
 p9_conn_destroy net/9p/trans_fd.c:884 [inline]
 p9_fd_close+0x290/0x580 net/9p/trans_fd.c:920
 p9_client_create+0x97a/0x1070 net/9p/client.c:1001
 v9fs_session_init+0x1e2/0x1810 fs/9p/v9fs.c:408
 v9fs_mount+0xba/0xc90 fs/9p/vfs_super.c:126
 legacy_get_tree+0x105/0x220 fs/fs_context.c:610
 vfs_get_tree+0x89/0x2f0 fs/super.c:1530
 do_new_mount fs/namespace.c:3040 [inline]
 path_mount+0x1326/0x1e20 fs/namespace.c:3370
 do_mount fs/namespace.c:3383 [inline]
 __do_sys_mount fs/namespace.c:3591 [inline]
 __se_sys_mount fs/namespace.c:3568 [inline]
 __x64_sys_mount+0x27f/0x300 fs/namespace.c:3568
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x35/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7ff9c5489409
RSP: 002b:00007ff9c6695168 EFLAGS: 00000246 ORIG_RAX: 00000000000000a5
RAX: ffffffffffffffda RBX: 00007ff9c559bf80 RCX: 00007ff9c5489409
RDX: 0000000020000040 RSI: 0000000020000080 RDI: 0000000000000000
RBP: 00007ff9c54e4367 R08: 0000000020000280 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007ffc2c1eadbf R14: 00007ff9c6695300 R15: 0000000000022000

Showing all locks held in the system:
1 lock held by rcu_tasks_kthre/12:
 #0: ffffffff8bf888b0 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x26/0xc70 kernel/rcu/tasks.h:507
1 lock held by rcu_tasks_trace/13:
 #0: ffffffff8bf885b0 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x26/0xc70 kernel/rcu/tasks.h:507
1 lock held by khungtaskd/28:
 #0: ffffffff8bf89400 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x53/0x260 kernel/locking/lockdep.c:6492
1 lock held by khugepaged/34:
 #0: ffffffff8c07f848 (lock#4){+.+.}-{3:3}, at: __lru_add_drain_all+0x62/0x7e0 mm/swap.c:830
3 locks held by kworker/1:2/141:
2 locks held by getty/3285:
 #0: ffff88814b47d098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x22/0x80 drivers/tty/tty_ldisc.c:244
 #1: ffffc90002d162f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0xef0/0x13e0 drivers/tty/n_tty.c:2177
3 locks held by kworker/0:7/3706:
 #0: ffff88814a573d38 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff88814a573d38 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff88814a573d38 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1280 [inline]
 #0: ffff88814a573d38 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:636 [inline]
 #0: ffff88814a573d38 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:663 [inline]
 #0: ffff88814a573d38 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work+0x87a/0x1610 kernel/workqueue.c:2260
 #1: ffffc90003cffda8 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_one_work+0x8ae/0x1610 kernel/workqueue.c:2264
 #2: ffffffff8d7b1128 (rtnl_mutex){+.+.}-{3:3}, at: addrconf_verify_work+0xe/0x20 net/ipv6/addrconf.c:4624
2 locks held by kworker/u4:13/3800:
 #0: ffff888011869138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888011869138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff888011869138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1280 [inline]
 #0: ffff888011869138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:636 [inline]
 #0: ffff888011869138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:663 [inline]
 #0: ffff888011869138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x87a/0x1610 kernel/workqueue.c:2260
 #1: ffffc9000430fda8 (connector_reaper_work){+.+.}-{0:0}, at: process_one_work+0x8ae/0x1610 kernel/workqueue.c:2264
2 locks held by kworker/u4:14/10542:
 #0: ffff888011869138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888011869138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff888011869138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1280 [inline]
 #0: ffff888011869138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:636 [inline]
 #0: ffff888011869138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:663 [inline]
 #0: ffff888011869138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x87a/0x1610 kernel/workqueue.c:2260
 #1: ffffc90002e2fda8 ((reaper_work).work){+.+.}-{0:0}, at: process_one_work+0x8ae/0x1610 kernel/workqueue.c:2264
2 locks held by syz-executor.2/13908:
 #0: ffffffff8be5e348 (sched_core_mutex){+.+.}-{3:3}, at: sched_core_get+0x37/0xa0 kernel/sched/core.c:404
 #1: ffffffff8bf940b8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:292 [inline]
 #1: ffffffff8bf940b8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x562/0x670 kernel/rcu/tree_exp.h:940
1 lock held by syz-executor.1/13917:
 #0: ffffffff8be5e348 (sched_core_mutex){+.+.}-{3:3}, at: sched_core_get+0x37/0xa0 kernel/sched/core.c:404
2 locks held by syz-executor.3/13958:
 #0: ffffffff8d7b1128 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:74 [inline]
 #0: ffffffff8d7b1128 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x3e5/0xca0 net/core/rtnetlink.c:6087
 #1: ffffffff8bf940b8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:324 [inline]
 #1: ffffffff8bf940b8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x24a/0x670 kernel/rcu/tree_exp.h:940
1 lock held by dhcpcd/13963:
 #0: ffff888075e3e130 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1712 [inline]
 #0: ffff888075e3e130 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: packet_do_bind+0x2f/0xdc0 net/packet/af_packet.c:3194
1 lock held by dhcpcd/13964:
 #0: ffff888074bc6130 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1712 [inline]
 #0: ffff888074bc6130 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: packet_do_bind+0x2f/0xdc0 net/packet/af_packet.c:3194
1 lock held by dhcpcd/13965:
 #0: ffff88801c094130 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1712 [inline]
 #0: ffff88801c094130 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: packet_do_bind+0x2f/0xdc0 net/packet/af_packet.c:3194
1 lock held by dhcpcd/13966:
 #0: ffff888024dd8130 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1712 [inline]
 #0: ffff888024dd8130 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: packet_do_bind+0x2f/0xdc0 net/packet/af_packet.c:3194
1 lock held by dhcpcd/13967:
 #0: ffff888024acc130 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1712 [inline]
 #0: ffff888024acc130 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: packet_do_bind+0x2f/0xdc0 net/packet/af_packet.c:3194
1 lock held by dhcpcd/13968:
 #0: ffff888047b76130 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1712 [inline]
 #0: ffff888047b76130 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: packet_do_bind+0x2f/0xdc0 net/packet/af_packet.c:3194
1 lock held by syz-executor.3/13970:
 #0: ffffffff8d7b1128 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:74 [inline]
 #0: ffffffff8d7b1128 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x3e5/0xca0 net/core/rtnetlink.c:6087

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 28 Comm: khungtaskd Not tainted 6.0.0-rc6-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/26/2022
Call Trace:
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0xcd/0x134 lib/dump_stack.c:106
 nmi_cpu_backtrace.cold+0x46/0x14f lib/nmi_backtrace.c:111
 nmi_trigger_cpumask_backtrace+0x206/0x250 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:212 [inline]
 watchdog+0xc18/0xf50 kernel/hung_task.c:369
 kthread+0x2e4/0x3a0 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 141 Comm: kworker/1:2 Not tainted 6.0.0-rc6-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/26/2022
Workqueue: events p9_poll_workfn
RIP: 0010:native_save_fl arch/x86/include/asm/irqflags.h:29 [inline]
RIP: 0010:arch_local_save_flags arch/x86/include/asm/irqflags.h:70 [inline]
RIP: 0010:arch_local_irq_save arch/x86/include/asm/irqflags.h:106 [inline]
RIP: 0010:lock_is_held_type+0x51/0x140 kernel/locking/lockdep.c:5705
Code: 82 76 85 c0 0f 85 ca 00 00 00 65 4c 8b 24 25 80 6f 02 00 41 8b 94 24 74 0a 00 00 85 d2 0f 85 b1 00 00 00 48 89 fd 41 89 f6 9c <8f> 04 24 fa 48 c7 c7 c0 ab ec 89 31 db e8 3d 15 00 00 41 8b 84 24
RSP: 0018:ffffc9000294fa98 EFLAGS: 00000046
RAX: 0000000000000000 RBX: 1ffff92000529f60 RCX: 0000000000000001
RDX: 0000000000000000 RSI: 00000000ffffffff RDI: ffffffff8bf89340
RBP: ffffffff8bf89340 R08: 0000000000000000 R09: ffffffff8ddedf17
R10: fffffbfff1bbdbe2 R11: 0000000000000000 R12: ffff88801b330000
R13: 00000000ffffffff R14: 00000000ffffffff R15: 0000000000000000
FS:  0000000000000000(0000) GS:ffff8880b9b00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f942ccac8c0 CR3: 000000000bc8e000 CR4: 0000000000350ee0
Call Trace:
 lock_is_held include/linux/lockdep.h:283 [inline]
 rcu_read_lock_sched_held+0x3a/0x70 kernel/rcu/update.c:125
 trace_lock_acquire include/trace/events/lock.h:24 [inline]
 lock_acquire+0x480/0x570 kernel/locking/lockdep.c:5637
 __raw_spin_lock_irq include/linux/spinlock_api_smp.h:119 [inline]
 _raw_spin_lock_irq+0x32/0x50 kernel/locking/spinlock.c:170
 spin_lock_irq include/linux/spinlock.h:374 [inline]
 dma_buf_poll+0x234/0x700 drivers/dma-buf/dma-buf.c:282
 vfs_poll include/linux/poll.h:88 [inline]
 p9_fd_poll+0x113/0x2c0 net/9p/trans_fd.c:233
 p9_poll_mux net/9p/trans_fd.c:624 [inline]
 p9_poll_workfn+0x22b/0x4e0 net/9p/trans_fd.c:1147
 process_one_work+0x991/0x1610 kernel/workqueue.c:2289
 worker_thread+0x665/0x1080 kernel/workqueue.c:2436
 kthread+0x2e4/0x3a0 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306