syzbot


INFO: task hung in vfs_link (2)

Status: auto-obsoleted due to no activity on 2025/10/26 05:31
Subsystems: bcachefs
First crash: 265d, last: 170d
Similar bugs (2)
Kernel      Title                                   Rank  Count  Last  Reported  Patched  Status
linux-5.15  INFO: task hung in vfs_link             1     1     267d  267d      0/3      auto-obsoleted due to no activity on 2025/07/31 03:53
upstream    INFO: task hung in vfs_link (bcachefs)  1     5     374d  402d      0/29     auto-obsoleted due to no activity on 2025/04/05 19:52

Sample crash report:
INFO: task syz.2.310:8774 blocked for more than 143 seconds.
      Not tainted 6.16.0-rc7-syzkaller-00142-gb711733e89a3 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.2.310       state:D stack:28152 pid:8774  tgid:8686  ppid:7890   task_flags:0x400040 flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5397 [inline]
 __schedule+0x16aa/0x4c90 kernel/sched/core.c:6786
 __schedule_loop kernel/sched/core.c:6864 [inline]
 schedule+0x165/0x360 kernel/sched/core.c:6879
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6936
 rwsem_down_write_slowpath+0xbec/0x1030 kernel/locking/rwsem.c:1176
 __down_write_common kernel/locking/rwsem.c:1304 [inline]
 __down_write kernel/locking/rwsem.c:1313 [inline]
 down_write+0x1ab/0x1f0 kernel/locking/rwsem.c:1578
 inode_lock include/linux/fs.h:869 [inline]
 vfs_link+0x3b4/0x6e0 fs/namei.c:4854
 do_linkat+0x272/0x560 fs/namei.c:4933
 __do_sys_link fs/namei.c:4967 [inline]
 __se_sys_link fs/namei.c:4965 [inline]
 __x64_sys_link+0x82/0x90 fs/namei.c:4965
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fb86838e9a9
RSP: 002b:00007fb86915f038 EFLAGS: 00000246 ORIG_RAX: 0000000000000056
RAX: ffffffffffffffda RBX: 00007fb8685b6080 RCX: 00007fb86838e9a9
RDX: 0000000000000000 RSI: 0000200000000300 RDI: 0000200000000200
RBP: 00007fb868410d69 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000001 R14: 00007fb8685b6080 R15: 00007ffd2d93e618
 </TASK>
INFO: task syz.2.310:8776 blocked for more than 144 seconds.
      Not tainted 6.16.0-rc7-syzkaller-00142-gb711733e89a3 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.2.310       state:D stack:26984 pid:8776  tgid:8686  ppid:7890   task_flags:0x400040 flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5397 [inline]
 __schedule+0x16aa/0x4c90 kernel/sched/core.c:6786
 __schedule_loop kernel/sched/core.c:6864 [inline]
 schedule+0x165/0x360 kernel/sched/core.c:6879
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6936
 rwsem_down_write_slowpath+0xbec/0x1030 kernel/locking/rwsem.c:1176
 __down_write_common kernel/locking/rwsem.c:1304 [inline]
 __down_write kernel/locking/rwsem.c:1313 [inline]
 down_write+0x1ab/0x1f0 kernel/locking/rwsem.c:1578
 inode_lock include/linux/fs.h:869 [inline]
 process_measurement+0x3d8/0x1a40 security/integrity/ima/ima_main.c:260
 ima_file_check+0xd7/0x120 security/integrity/ima/ima_main.c:613
 security_file_post_open+0xbb/0x290 security/security.c:3130
 do_open fs/namei.c:3898 [inline]
 path_openat+0x2f26/0x3830 fs/namei.c:4055
 do_filp_open+0x1fa/0x410 fs/namei.c:4082
 do_sys_openat2+0x121/0x1c0 fs/open.c:1437
 do_sys_open fs/open.c:1452 [inline]
 __do_sys_openat fs/open.c:1468 [inline]
 __se_sys_openat fs/open.c:1463 [inline]
 __x64_sys_openat+0x138/0x170 fs/open.c:1463
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fb86838e9a9
RSP: 002b:00007fb86913e038 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 00007fb8685b6160 RCX: 00007fb86838e9a9
RDX: 0000000000141042 RSI: 0000200000000080 RDI: ffffffffffffff9c
RBP: 00007fb868410d69 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000001 R14: 00007fb8685b6160 R15: 00007ffd2d93e618
 </TASK>

Showing all locks held in the system:
4 locks held by kworker/u8:0/12:
2 locks held by ksoftirqd/0/15:
 #0: ffff8880b8639e18 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x2a/0x140 kernel/sched/core.c:606
 #1: ffff8880b8623f08 (&per_cpu_ptr(group->pcpu, cpu)->seq){-.-.}-{0:0}, at: psi_task_switch+0x39e/0x6d0 kernel/sched/psi.c:987
1 lock held by khungtaskd/31:
 #0: ffffffff8e13f0e0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8e13f0e0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:841 [inline]
 #0: ffffffff8e13f0e0 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x2e/0x180 kernel/locking/lockdep.c:6770
3 locks held by kworker/0:2/925:
 #0: ffff88801a480d48 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88801a480d48 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x17b0 kernel/workqueue.c:3321
 #1: ffffc900038e7bc0 (deferred_process_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc900038e7bc0 (deferred_process_work){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x17b0 kernel/workqueue.c:3321
 #2: ffffffff8f509f08 (rtnl_mutex){+.+.}-{4:4}, at: switchdev_deferred_process_work+0xe/0x20 net/switchdev/switchdev.c:104
3 locks held by kworker/u8:6/1147:
 #0: ffff88801a489148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88801a489148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x17b0 kernel/workqueue.c:3321
 #1: ffffc90003d5fbc0 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc90003d5fbc0 ((linkwatch_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x17b0 kernel/workqueue.c:3321
 #2: ffffffff8f509f08 (rtnl_mutex){+.+.}-{4:4}, at: linkwatch_event+0xe/0x60 net/core/link_watch.c:303
5 locks held by kworker/u8:7/1150:
1 lock held by dhcpcd/5501:
 #0: ffffffff8f509f08 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff8f509f08 (rtnl_mutex){+.+.}-{4:4}, at: inet6_rtm_deladdr+0x20f/0x330 net/ipv6/addrconf.c:4798
2 locks held by getty/5600:
 #0: ffff88803094d0a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc9000333b2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x43e/0x1400 drivers/tty/n_tty.c:2222
5 locks held by syz.2.310/8687:
 #0: ffff88806498c428 (sb_writers#31){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:557
 #1: ffff88805641b670 (&sb->s_type->i_mutex_key#38){++++}-{4:4}, at: inode_lock_killable include/linux/fs.h:874 [inline]
 #1: ffff88805641b670 (&sb->s_type->i_mutex_key#38){++++}-{4:4}, at: do_truncate+0x171/0x220 fs/open.c:63
 #2: ffff88806de00a50 (&c->snapshot_create_lock){.+.+}-{4:4}, at: bch2_truncate+0xeb/0x200 fs/bcachefs/io_misc.c:322
 #3: ffff88806de04398 (&c->btree_trans_barrier){.+.+}-{0:0}, at: srcu_lock_acquire include/linux/srcu.h:161 [inline]
 #3: ffff88806de04398 (&c->btree_trans_barrier){.+.+}-{0:0}, at: srcu_read_lock include/linux/srcu.h:253 [inline]
 #3: ffff88806de04398 (&c->btree_trans_barrier){.+.+}-{0:0}, at: bch2_trans_srcu_lock+0xaf/0x220 fs/bcachefs/btree_iter.c:3299
 #4: ffff88806de26710 (&c->gc_lock){.+.+}-{4:4}, at: bch2_btree_update_start+0x542/0x1de0 fs/bcachefs/btree_update_interior.c:1211
3 locks held by syz.2.310/8774:
 #0: ffff88806498c428 (sb_writers#31){.+.+}-{0:0}, at: mnt_want_write+0x41/0x90 fs/namespace.c:557
 #1: ffff88805641aed8 (&sb->s_type->i_mutex_key#38/1){+.+.}-{4:4}, at: inode_lock_nested include/linux/fs.h:914 [inline]
 #1: ffff88805641aed8 (&sb->s_type->i_mutex_key#38/1){+.+.}-{4:4}, at: filename_create+0x1f9/0x470 fs/namei.c:4148
 #2: ffff88805641b670 (&sb->s_type->i_mutex_key#38){++++}-{4:4}, at: inode_lock include/linux/fs.h:869 [inline]
 #2: ffff88805641b670 (&sb->s_type->i_mutex_key#38){++++}-{4:4}, at: vfs_link+0x3b4/0x6e0 fs/namei.c:4854
1 lock held by syz.2.310/8776:
 #0: ffff88805641b670 (&sb->s_type->i_mutex_key#38){++++}-{4:4}, at: inode_lock include/linux/fs.h:869 [inline]
 #0: ffff88805641b670 (&sb->s_type->i_mutex_key#38){++++}-{4:4}, at: process_measurement+0x3d8/0x1a40 security/integrity/ima/ima_main.c:260
2 locks held by syz-executor/11556:
 #0: ffffffff8f4fd310 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x304/0x4d0 net/core/net_namespace.c:570
 #1: ffffffff8f509f08 (rtnl_mutex){+.+.}-{4:4}, at: ip_tunnel_init_net+0x2ab/0x800 net/ipv4/ip_tunnel.c:1160
3 locks held by syz.5.588/11577:
 #0: ffff88807bcc8dc0 (&hdev->req_lock){+.+.}-{4:4}, at: hci_dev_do_close net/bluetooth/hci_core.c:499 [inline]
 #0: ffff88807bcc8dc0 (&hdev->req_lock){+.+.}-{4:4}, at: hci_unregister_dev+0x212/0x510 net/bluetooth/hci_core.c:2717
 #1: ffff88807bcc80b8 (&hdev->lock){+.+.}-{4:4}, at: hci_dev_close_sync+0x66a/0x1330 net/bluetooth/hci_sync.c:5282
 #2: ffffffff8f672108 (hci_cb_list_lock){+.+.}-{4:4}, at: hci_disconn_cfm include/net/bluetooth/hci_core.h:2071 [inline]
 #2: ffffffff8f672108 (hci_cb_list_lock){+.+.}-{4:4}, at: hci_conn_hash_flush+0xa1/0x230 net/bluetooth/hci_conn.c:2560
3 locks held by syz.0.586/11589:
 #0: ffff88807db00dc0 (&hdev->req_lock){+.+.}-{4:4}, at: hci_dev_do_close net/bluetooth/hci_core.c:499 [inline]
 #0: ffff88807db00dc0 (&hdev->req_lock){+.+.}-{4:4}, at: hci_unregister_dev+0x212/0x510 net/bluetooth/hci_core.c:2717
 #1: ffff88807db000b8 (&hdev->lock){+.+.}-{4:4}, at: hci_dev_close_sync+0x66a/0x1330 net/bluetooth/hci_sync.c:5282
 #2: ffffffff8f672108 (hci_cb_list_lock){+.+.}-{4:4}, at: hci_disconn_cfm include/net/bluetooth/hci_core.h:2071 [inline]
 #2: ffffffff8f672108 (hci_cb_list_lock){+.+.}-{4:4}, at: hci_conn_hash_flush+0xa1/0x230 net/bluetooth/hci_conn.c:2560
1 lock held by syz.8.590/11581:
 #0: ffffffff8e144bf8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock kernel/rcu/tree_exp.h:336 [inline]
 #0: ffffffff8e144bf8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: synchronize_rcu_expedited+0x3b9/0x730 kernel/rcu/tree_exp.h:998
4 locks held by syz.7.589/11582:
 #0: ffff888023c38dc0 (&hdev->req_lock){+.+.}-{4:4}, at: hci_dev_do_close net/bluetooth/hci_core.c:499 [inline]
 #0: ffff888023c38dc0 (&hdev->req_lock){+.+.}-{4:4}, at: hci_unregister_dev+0x212/0x510 net/bluetooth/hci_core.c:2717
 #1: ffff888023c380b8 (&hdev->lock){+.+.}-{4:4}, at: hci_dev_close_sync+0x66a/0x1330 net/bluetooth/hci_sync.c:5282
 #2: ffffffff8f672108 (hci_cb_list_lock){+.+.}-{4:4}, at: hci_disconn_cfm include/net/bluetooth/hci_core.h:2071 [inline]
 #2: ffffffff8f672108 (hci_cb_list_lock){+.+.}-{4:4}, at: hci_conn_hash_flush+0xa1/0x230 net/bluetooth/hci_conn.c:2560
 #3: ffff888034db7b38 (&conn->lock#2){+.+.}-{4:4}, at: l2cap_conn_del+0x70/0x680 net/bluetooth/l2cap_core.c:1762

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 31 Comm: khungtaskd Not tainted 6.16.0-rc7-syzkaller-00142-gb711733e89a3 #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/12/2025
Call Trace:
 <TASK>
 dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x39e/0x3d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x17a/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:158 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:307 [inline]
 watchdog+0xfee/0x1030 kernel/hung_task.c:470
 kthread+0x70e/0x8a0 kernel/kthread.c:464
 ret_from_fork+0x3fc/0x770 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 3496 Comm: kworker/u8:8 Not tainted 6.16.0-rc7-syzkaller-00142-gb711733e89a3 #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/12/2025
Workqueue: bat_events batadv_nc_worker
RIP: 0010:native_save_fl arch/x86/include/asm/irqflags.h:26 [inline]
RIP: 0010:arch_local_save_flags arch/x86/include/asm/irqflags.h:109 [inline]
RIP: 0010:arch_local_irq_save arch/x86/include/asm/irqflags.h:127 [inline]
RIP: 0010:lock_acquire+0xc9/0x360 kernel/locking/lockdep.c:5867
Code: fe 10 85 c0 0f 85 eb 00 00 00 65 48 8b 04 25 08 90 9c 92 83 b8 ec 0a 00 00 00 0f 85 d5 00 00 00 48 c7 44 24 30 00 00 00 00 9c <8f> 44 24 30 4c 89 74 24 10 4d 89 fe 4c 8b 7c 24 30 fa 48 c7 c7 91
RSP: 0018:ffffc9000bb87978 EFLAGS: 00000246
RAX: ffff888030e65a00 RBX: 0000000000000000 RCX: c01f6b98f412ba00
RDX: 0000000000000000 RSI: ffffffff8b3455af RDI: 1ffffffff1c27e1c
RBP: ffffffff8b345592 R08: 0000000000000000 R09: 0000000000000000
R10: dffffc0000000000 R11: ffffffff8b3454c0 R12: 0000000000000002
R13: ffffffff8e13f0e0 R14: 0000000000000000 R15: 0000000000000000
FS:  0000000000000000(0000) GS:ffff888125c57000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000555593a82720 CR3: 000000000df38000 CR4: 00000000003526f0
Call Trace:
 <TASK>
 rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 rcu_read_lock include/linux/rcupdate.h:841 [inline]
 batadv_nc_purge_orig_hash net/batman-adv/network-coding.c:408 [inline]
 batadv_nc_worker+0xef/0x610 net/batman-adv/network-coding.c:719
 process_one_work kernel/workqueue.c:3238 [inline]
 process_scheduled_works+0xade/0x17b0 kernel/workqueue.c:3321
 worker_thread+0x8a0/0xda0 kernel/workqueue.c:3402
 kthread+0x70e/0x8a0 kernel/kthread.c:464
 ret_from_fork+0x3fc/0x770 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
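
The lock dump above suggests the blocking chain: syz.2.310/8687 holds the inode lock (&sb->s_type->i_mutex_key#38, ffff88805641b670) taken in do_truncate() and is stuck inside bch2_truncate()/bch2_btree_update_start(), while syz.2.310/8774 waits for the same inode lock in vfs_link() and syz.2.310/8776 waits for it in IMA's process_measurement() during open. No reproducer is listed for this bug; the program below is only a hedged sketch of the user-space syscall pattern visible in these traces (concurrent truncate, link and open on one file). The bcachefs mount point and file names are assumptions, and the sketch does not by itself recreate the stalled bcachefs truncate that keeps the lock held in the report.

/*
 * Hedged sketch only: syzbot lists no reproducer for this bug.  This
 * program exercises the user-space syscall pattern visible in the
 * traces above -- concurrent truncate(), link() and open() on a single
 * file -- all of which end up taking the per-inode rwsem (inode_lock).
 * The mount point and file names are assumptions.
 */
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

#define DIR_PATH  "/mnt/bcachefs"            /* assumed bcachefs mount */
#define SRC_PATH  DIR_PATH "/file0"          /* assumed file names */
#define LINK_PATH DIR_PATH "/file0.link"

#define ITERS 100000

static void *truncate_thread(void *arg)
{
	(void)arg;
	for (int i = 0; i < ITERS; i++)
		truncate(SRC_PATH, 0);       /* do_truncate() takes inode_lock */
	return NULL;
}

static void *link_thread(void *arg)
{
	(void)arg;
	for (int i = 0; i < ITERS; i++) {
		link(SRC_PATH, LINK_PATH);   /* vfs_link() takes inode_lock on the source inode */
		unlink(LINK_PATH);
	}
	return NULL;
}

static void *open_thread(void *arg)
{
	(void)arg;
	for (int i = 0; i < ITERS; i++) {
		/* with IMA enabled, ima_file_check() takes inode_lock after open */
		int fd = open(SRC_PATH, O_RDWR);
		if (fd >= 0)
			close(fd);
	}
	return NULL;
}

int main(void)
{
	pthread_t t[3];
	int fd = open(SRC_PATH, O_CREAT | O_RDWR, 0644);

	if (fd < 0) {
		perror("open " SRC_PATH);
		return 1;
	}
	close(fd);

	pthread_create(&t[0], NULL, truncate_thread, NULL);
	pthread_create(&t[1], NULL, link_thread, NULL);
	pthread_create(&t[2], NULL, open_thread, NULL);
	for (int i = 0; i < 3; i++)
		pthread_join(t[i], NULL);
	return 0;
}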

Crashes (8):
Time              Kernel    Commit        Syzkaller  Manager          Title
2025/07/28 05:29  upstream  b711733e89a3  fb8f743d   ci2-upstream-fs  INFO: task hung in vfs_link
2025/07/24 14:30  upstream  25fae0b93d1d  65d60d73   ci2-upstream-fs  INFO: task hung in vfs_link
2025/07/23 20:02  upstream  01a412d06bc5  e1dd4f22   ci2-upstream-fs  INFO: task hung in vfs_link
2025/05/08 05:15  upstream  707df3375124  dbf35fa1   ci2-upstream-fs  INFO: task hung in vfs_link
2025/04/27 20:39  upstream  5bc1018675ec  c6b4fb39   ci2-upstream-fs  INFO: task hung in vfs_link
2025/04/25 01:26  upstream  e72e9e693307  9882047a   ci2-upstream-fs  INFO: task hung in vfs_link
2025/04/24 01:59  upstream  a79be02bba5c  73a168d0   ci2-upstream-fs  INFO: task hung in vfs_link
2025/04/23 21:58  upstream  a79be02bba5c  73a168d0   ci2-upstream-fs  INFO: task hung in vfs_link