INFO: task kworker/0:5:4416 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/0:5     state:D stack:24384 pid: 4416 ppid:     2 flags:0x00004000
Workqueue: events_long flush_mdb
Call Trace:
 context_switch kernel/sched/core.c:5030 [inline]
 __schedule+0x11b8/0x43b0 kernel/sched/core.c:6376
 schedule+0x11b/0x1e0 kernel/sched/core.c:6459
 io_schedule+0x7c/0xd0 kernel/sched/core.c:8484
 bit_wait_io+0xd/0xc0 kernel/sched/wait_bit.c:209
 __wait_on_bit_lock+0xbc/0x1a0 kernel/sched/wait_bit.c:90
 out_of_line_wait_on_bit_lock+0x11f/0x160 kernel/sched/wait_bit.c:117
 lock_buffer include/linux/buffer_head.h:402 [inline]
 hfs_mdb_commit+0x113/0x1110 fs/hfs/mdb.c:271
 process_one_work+0x863/0x1000 kernel/workqueue.c:2310
 worker_thread+0xaa8/0x12a0 kernel/workqueue.c:2457
 kthread+0x436/0x520 kernel/kthread.c:334
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:287
INFO: task syz-executor:5239 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor    state:D stack:21856 pid: 5239 ppid:     1 flags:0x00004002
Call Trace:
 context_switch kernel/sched/core.c:5030 [inline]
 __schedule+0x11b8/0x43b0 kernel/sched/core.c:6376
 schedule+0x11b/0x1e0 kernel/sched/core.c:6459
 io_schedule+0x7c/0xd0 kernel/sched/core.c:8484
 bit_wait_io+0xd/0xc0 kernel/sched/wait_bit.c:209
 __wait_on_bit_lock+0xbc/0x1a0 kernel/sched/wait_bit.c:90
 out_of_line_wait_on_bit_lock+0x11f/0x160 kernel/sched/wait_bit.c:117
 lock_buffer include/linux/buffer_head.h:402 [inline]
 hfs_mdb_commit+0xc57/0x1110 fs/hfs/mdb.c:325
 hfs_sync_fs+0x11/0x20 fs/hfs/super.c:35
 sync_filesystem+0xe6/0x220 fs/sync.c:56
 generic_shutdown_super+0x6b/0x300 fs/super.c:448
 kill_block_super+0x7c/0xe0 fs/super.c:1427
 deactivate_locked_super+0x93/0xf0 fs/super.c:335
 cleanup_mnt+0x418/0x4d0 fs/namespace.c:1139
 task_work_run+0x125/0x1a0 kernel/task_work.c:188
 exit_task_work include/linux/task_work.h:33 [inline]
 do_exit+0x61e/0x20a0 kernel/exit.c:883
 do_group_exit+0x12e/0x300 kernel/exit.c:997
 __do_sys_exit_group kernel/exit.c:1008 [inline]
 __se_sys_exit_group kernel/exit.c:1006 [inline]
 __x64_sys_exit_group+0x3b/0x40 kernel/exit.c:1006
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x66/0xd0
RIP: 0033:0x7f01c7da9719
RSP: 002b:00007ffd5dfa0bd8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e7
RAX: ffffffffffffffda RBX: 00007f01c7e1c1f1 RCX: 00007f01c7da9719
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000001
RBP: 0000000000000002 R08: 00007ffd5df9e977 R09: 00007ffd5dfa1e90
R10: 0000000000000000 R11: 0000000000000246 R12: 00007ffd5dfa1e90
R13: 00007f01c7e1c1cc R14: 0000000000019bad R15: 00007ffd5dfa2f50

Showing all locks held in the system:
1 lock held by khungtaskd/27:
 #0: ffffffff8c11c320 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x0/0x30
2 locks held by kworker/0:2/1108:
 #0: ffff888016871138 ((wq_completion)events_long){+.+.}-{0:0}, at: process_one_work+0x760/0x1000 kernel/workqueue.c:-1
 #1: ffffc90004ef7d00 ((work_completion)(&(&sbi->mdb_work)->work)){+.+.}-{0:0}, at: process_one_work+0x7a3/0x1000 kernel/workqueue.c:2285
3 locks held by kworker/0:3/1324:
 #0: ffff888016870938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x760/0x1000 kernel/workqueue.c:-1
 #1: ffffc900058a7d00 (fqdir_free_work){+.+.}-{0:0}, at: process_one_work+0x7a3/0x1000 kernel/workqueue.c:2285
 #2: ffffffff8c120cb0 (rcu_state.barrier_mutex){+.+.}-{3:3}, at: rcu_barrier+0xa1/0x4b0 kernel/rcu/tree.c:4043
2 locks held by getty/3957:
 #0: ffff88814cdcf098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x21/0x70 drivers/tty/tty_ldisc.c:252
 #1: ffffc90002d032e8 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x5ba/0x1a30 drivers/tty/n_tty.c:2158
3 locks held by kworker/1:7/4264:
 #0: ffff888016870938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x760/0x1000 kernel/workqueue.c:-1
 #1: ffffc9000307fd00 ((work_completion)(&data->fib_event_work)){+.+.}-{0:0}, at: process_one_work+0x7a3/0x1000 kernel/workqueue.c:2285
 #2: ffff88807e479240 (&data->fib_lock){+.+.}-{3:3}, at: nsim_fib_event_work+0x271/0x3240 drivers/net/netdevsim/fib.c:1480
2 locks held by kworker/1:15/4407:
 #0: ffff888016871138 ((wq_completion)events_long){+.+.}-{0:0}, at: process_one_work+0x760/0x1000 kernel/workqueue.c:-1
 #1: ffffc900031afd00 ((work_completion)(&(&sbi->mdb_work)->work)){+.+.}-{0:0}, at: process_one_work+0x7a3/0x1000 kernel/workqueue.c:2285
2 locks held by kworker/1:20/4414:
 #0: ffff888016871138 ((wq_completion)events_long){+.+.}-{0:0}, at: process_one_work+0x760/0x1000 kernel/workqueue.c:-1
 #1: ffffc90002f7fd00 ((work_completion)(&(&sbi->mdb_work)->work)){+.+.}-{0:0}, at: process_one_work+0x7a3/0x1000 kernel/workqueue.c:2285
2 locks held by kworker/0:5/4416:
 #0: ffff888016871138 ((wq_completion)events_long){+.+.}-{0:0}, at: process_one_work+0x760/0x1000 kernel/workqueue.c:-1
 #1: ffffc900031ffd00 ((work_completion)(&(&sbi->mdb_work)->work)){+.+.}-{0:0}, at: process_one_work+0x7a3/0x1000 kernel/workqueue.c:2285
3 locks held by kworker/u4:5/4893:
 #0: ffff8880169cd938 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x760/0x1000 kernel/workqueue.c:-1
 #1: ffffc900030afd00 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x7a3/0x1000 kernel/workqueue.c:2285
 #2: ffffffff8c120cb0 (rcu_state.barrier_mutex){+.+.}-{3:3}, at: rcu_barrier+0xa1/0x4b0 kernel/rcu/tree.c:4043
1 lock held by syz-executor/5239:
 #0: ffff88805d40a0e0 (&type->s_umount_key#52){+.+.}-{3:3}, at: deactivate_super+0xa0/0xd0 fs/super.c:365
2 locks held by kworker/1:21/5296:
 #0: ffff888016871138 ((wq_completion)events_long){+.+.}-{0:0}, at: process_one_work+0x760/0x1000 kernel/workqueue.c:-1
 #1: ffffc900032efd00 ((work_completion)(&(&sbi->mdb_work)->work)){+.+.}-{0:0}, at: process_one_work+0x7a3/0x1000 kernel/workqueue.c:2285
2 locks held by kworker/1:22/5297:
 #0: ffff888016871138 ((wq_completion)events_long){+.+.}-{0:0}, at: process_one_work+0x760/0x1000 kernel/workqueue.c:-1
 #1: ffffc9000336fd00 ((work_completion)(&(&sbi->mdb_work)->work)){+.+.}-{0:0}, at: process_one_work+0x7a3/0x1000 kernel/workqueue.c:2285
1 lock held by syz-executor/5534:
 #0: ffff888079faa0e0 (&type->s_umount_key#52){+.+.}-{3:3}, at: deactivate_super+0xa0/0xd0 fs/super.c:365
1 lock held by syz-executor/5819:
 #0: ffff88805d21c0e0 (&type->s_umount_key#52){+.+.}-{3:3}, at: deactivate_super+0xa0/0xd0 fs/super.c:365
2 locks held by kworker/0:6/6068:
 #0: ffff888016871138 ((wq_completion)events_long){+.+.}-{0:0}, at: process_one_work+0x760/0x1000 kernel/workqueue.c:-1
 #1: ffffc9000360fd00 ((work_completion)(&(&sbi->mdb_work)->work)){+.+.}-{0:0}, at: process_one_work+0x7a3/0x1000 kernel/workqueue.c:2285
1 lock held by syz-executor/6112:
 #0: ffff88802a41c0e0 (&type->s_umount_key#52){+.+.}-{3:3}, at: deactivate_super+0xa0/0xd0 fs/super.c:365
2 locks held by kworker/0:7/6306:
 #0: ffff888016871138 ((wq_completion)events_long){+.+.}-{0:0}, at: process_one_work+0x760/0x1000 kernel/workqueue.c:-1
 #1: ffffc900035afd00 ((work_completion)(&(&sbi->mdb_work)->work)){+.+.}-{0:0}, at: process_one_work+0x7a3/0x1000 kernel/workqueue.c:2285
1 lock held by syz-executor/6403:
 #0: ffff888078b0a0e0 (&type->s_umount_key#52){+.+.}-{3:3}, at: deactivate_super+0xa0/0xd0 fs/super.c:365
1 lock held by syz-executor/6699:
 #0: ffff888022a9e0e0 (&type->s_umount_key#52){+.+.}-{3:3}, at: deactivate_super+0xa0/0xd0 fs/super.c:365
1 lock held by syz-executor/7001:
 #0: ffff88807652e0e0 (&type->s_umount_key#52){+.+.}-{3:3}, at: deactivate_super+0xa0/0xd0 fs/super.c:365
2 locks held by kworker/0:10/7029:
 #0: ffff888016871138 ((wq_completion)events_long){+.+.}-{0:0}, at: process_one_work+0x760/0x1000 kernel/workqueue.c:-1
 #1: ffffc9000338fd00 ((work_completion)(&(&sbi->mdb_work)->work)){+.+.}-{0:0}, at: process_one_work+0x7a3/0x1000 kernel/workqueue.c:2285
1 lock held by syz-executor/7304:
 #0: ffff88807d4760e0 (&type->s_umount_key#52){+.+.}-{3:3}, at: deactivate_super+0xa0/0xd0 fs/super.c:365
1 lock held by syz-executor/7607:
 #0: ffff8880759600e0 (&type->s_umount_key#52){+.+.}-{3:3}, at: deactivate_super+0xa0/0xd0 fs/super.c:365
2 locks held by kworker/0:13/7741:
 #0: ffff888016871138 ((wq_completion)events_long){+.+.}-{0:0}, at: process_one_work+0x760/0x1000 kernel/workqueue.c:-1
 #1: ffffc9000385fd00 ((work_completion)(&(&sbi->mdb_work)->work)){+.+.}-{0:0}, at: process_one_work+0x7a3/0x1000 kernel/workqueue.c:2285
1 lock held by syz-executor/7913:
 #0: ffff888023f0c0e0 (&type->s_umount_key#52){+.+.}-{3:3}, at: deactivate_super+0xa0/0xd0 fs/super.c:365
2 locks held by kworker/0:16/7942:
 #0: ffff888016872138 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work+0x760/0x1000 kernel/workqueue.c:-1
 #1: ffffc9000362fd00 ((work_completion)(&rew.rew_work)){+.+.}-{0:0}, at: process_one_work+0x7a3/0x1000 kernel/workqueue.c:2285
2 locks held by dhcpcd/8197:
 #0: ffff88807a98c120 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1694 [inline]
 #0: ffff88807a98c120 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: packet_do_bind+0x35/0xce0 net/packet/af_packet.c:3213
 #1: ffffffff8c120da8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:290 [inline]
 #1: ffffffff8c120da8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x347/0x6b0 kernel/rcu/tree_exp.h:845
2 locks held by dhcpcd/8198:
 #0: ffff88801edbe120 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: lock_sock include/net/sock.h:1694 [inline]
 #0: ffff88801edbe120 (sk_lock-AF_PACKET){+.+.}-{0:0}, at: packet_do_bind+0x35/0xce0 net/packet/af_packet.c:3213
 #1: ffffffff8c120da8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:322 [inline]
 #1: ffffffff8c120da8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x320/0x6b0 kernel/rcu/tree_exp.h:845

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 27 Comm: khungtaskd Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/12/2025
Call Trace:
 dump_stack_lvl+0x168/0x230 lib/dump_stack.c:106
 nmi_cpu_backtrace+0x397/0x3d0 lib/nmi_backtrace.c:111
 nmi_trigger_cpumask_backtrace+0x163/0x280 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:212 [inline]
 watchdog+0xe0f/0xe50 kernel/hung_task.c:369
 kthread+0x436/0x520 kernel/kthread.c:334
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:287
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0 skipped: idling at native_safe_halt arch/x86/include/asm/irqflags.h:51 [inline]
NMI backtrace for cpu 0 skipped: idling at arch_safe_halt arch/x86/include/asm/irqflags.h:89 [inline]
NMI backtrace for cpu 0 skipped: idling at default_idle+0xb/0x10 arch/x86/kernel/process.c:728