syzbot


INFO: task hung in vfs_unlink (3)

Status: auto-obsoleted due to no activity on 2023/04/30 04:19
Subsystems: ext4
First crash: 444d, last: 444d
Similar bugs (13)
Kernel      Title                                     Count  Last   Reported  Patched  Status
linux-6.1   INFO: task hung in vfs_unlink (3)         2      18d    30d       0/3      upstream: reported on 2024/03/19 12:40
linux-6.1   INFO: task hung in vfs_unlink (2)         1      147d   147d      0/3      auto-obsoleted due to no activity on 2024/03/02 11:45
linux-4.14  INFO: task hung in vfs_unlink (2)         1      1095d  1095d     0/1      auto-closed as invalid on 2021/08/17 05:41
linux-4.14  INFO: task hung in vfs_unlink             8      1340d  1582d     0/1      auto-closed as invalid on 2020/12/15 00:37
linux-4.19  INFO: task hung in vfs_unlink (4)         1      526d   526d      0/1      auto-obsoleted due to no activity on 2023/03/09 01:26
linux-4.19  INFO: task hung in vfs_unlink (2)         2      1349d  1411d     0/1      auto-closed as invalid on 2020/12/05 18:54
linux-5.15  INFO: task hung in vfs_unlink             29     31d    388d      0/3      upstream: reported on 2023/03/26 16:46
upstream    INFO: task hung in vfs_unlink [ext4]      32     1362d  1639d     0/26     auto-closed as invalid on 2020/11/23 01:14
linux-6.1   INFO: task hung in vfs_unlink             2      336d   347d      0/3      auto-obsoleted due to no activity on 2023/08/26 02:49
linux-4.19  INFO: task hung in vfs_unlink             6      1540d  1665d     0/1      auto-closed as invalid on 2020/05/28 17:30
linux-4.19  INFO: task hung in vfs_unlink (3)         1      988d   988d      0/1      auto-closed as invalid on 2021/12/01 23:35
upstream    INFO: task hung in vfs_unlink (4) [fs]    6      183d   343d      0/26     auto-obsoleted due to no activity on 2024/01/16 15:08
upstream    INFO: task hung in vfs_unlink (2) [fs]    1      621d   621d      0/26     auto-closed as invalid on 2022/10/05 08:13

Sample crash report:
INFO: task syz-executor.2:14689 blocked for more than 143 seconds.
      Not tainted 6.2.0-rc5-syzkaller-00221-gab072681eabe #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.2  state:D stack:27752 pid:14689 ppid:5087   flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5293 [inline]
 __schedule+0xb8a/0x5450 kernel/sched/core.c:6606
 schedule+0xde/0x1b0 kernel/sched/core.c:6682
 rwsem_down_write_slowpath+0x600/0x12e0 kernel/locking/rwsem.c:1190
 __down_write_common kernel/locking/rwsem.c:1305 [inline]
 __down_write_common kernel/locking/rwsem.c:1302 [inline]
 __down_write kernel/locking/rwsem.c:1314 [inline]
 down_write+0x1e8/0x220 kernel/locking/rwsem.c:1563
 inode_lock include/linux/fs.h:756 [inline]
 vfs_unlink+0xd9/0x930 fs/namei.c:4241
 do_unlinkat+0x3b7/0x640 fs/namei.c:4320
 do_coredump+0x10b4/0x3c50 fs/coredump.c:673
 get_signal+0x1c03/0x2450 kernel/signal.c:2845
 arch_do_signal_or_restart+0x79/0x5c0 arch/x86/kernel/signal.c:306
 exit_to_user_mode_loop kernel/entry/common.c:168 [inline]
 exit_to_user_mode_prepare+0x15f/0x250 kernel/entry/common.c:203
 irqentry_exit_to_user_mode+0x9/0x40 kernel/entry/common.c:309
 asm_exc_general_protection+0x26/0x30 arch/x86/include/asm/idtentry.h:564
RIP: 0033:0x7f6def68c0d1
RSP: 002b:0000000020000430 EFLAGS: 00010217
RAX: 0000000000000000 RBX: 00007f6def7ac050 RCX: 00007f6def68c0c9
RDX: 0000000020000440 RSI: 0000000020000430 RDI: 000000000c902000
RBP: 00007f6def6e7ae9 R08: 00000000200004c0 R09: 00000000200004c0
R10: 0000000020000480 R11: 0000000000000206 R12: 0000000000000000
R13: 00007ffd904db16f R14: 00007f6df0440300 R15: 0000000000022000
 </TASK>

Showing all locks held in the system:
3 locks held by kworker/u4:0/9:
 #0: ffff8880b983b598 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x2f/0x120 kernel/sched/core.c:537
 #1: ffffc900000e7da8 ((work_completion)(&(&bat_priv->nc.work)->work)){+.+.}-{0:0}, at: process_one_work+0x8a1/0x1710 kernel/workqueue.c:2264
 #2: ffff8880b9929618 (&base->lock){-.-.}-{2:2}, at: lock_timer_base+0x5a/0x1f0 kernel/time/timer.c:999
1 lock held by rcu_tasks_kthre/12:
 #0: ffffffff8c790fb0 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x26/0xc70 kernel/rcu/tasks.h:507
1 lock held by rcu_tasks_trace/13:
 #0: ffffffff8c790cb0 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x26/0xc70 kernel/rcu/tasks.h:507
1 lock held by khungtaskd/28:
 #0: ffffffff8c791b00 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x57/0x264 kernel/locking/lockdep.c:6494
1 lock held by klogd/4416:
 #0: ffff8880b993b598 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x2f/0x120 kernel/sched/core.c:537
2 locks held by getty/4742:
 #0: ffff88814b7d8098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x26/0x80 drivers/tty/tty_ldisc.c:244
 #1: ffffc900015a02f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0xef4/0x13e0 drivers/tty/n_tty.c:2177
2 locks held by syz-fuzzer/5185:
 #0: ffff88805dab94e8 (&f->f_pos_lock){+.+.}-{3:3}, at: __fdget_pos+0xe7/0x100 fs/file.c:1046
 #1: ffff888030dac030 (&type->i_mutex_dir_key#3){++++}-{3:3}, at: iterate_dir+0xd1/0x6f0 fs/readdir.c:55
3 locks held by kworker/u4:7/7408:
 #0: ffff888140153938 ((wq_completion)writeback){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888140153938 ((wq_completion)writeback){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff888140153938 ((wq_completion)writeback){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1280 [inline]
 #0: ffff888140153938 ((wq_completion)writeback){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:636 [inline]
 #0: ffff888140153938 ((wq_completion)writeback){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:663 [inline]
 #0: ffff888140153938 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work+0x86d/0x1710 kernel/workqueue.c:2260
 #1: ffffc90002f8fda8 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work+0x8a1/0x1710 kernel/workqueue.c:2264
 #2: ffff88814b862b98 (&sbi->s_writepages_rwsem){.+.+}-{0:0}, at: do_writepages+0x1af/0x690 mm/page-writeback.c:2581
2 locks held by kworker/1:3/12701:
 #0: ffff8880b993b598 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x2f/0x120 kernel/sched/core.c:537
 #1: ffff8880b99287c8 (&per_cpu_ptr(group->pcpu, cpu)->seq){-.-.}-{0:0}, at: psi_task_switch+0x2de/0x930 kernel/sched/psi.c:976
3 locks held by syz-executor.3/14660:
 #0: ffff88814b860460 (sb_writers#4){.+.+}-{0:0}, at: get_signal+0x1c03/0x2450 kernel/signal.c:2845
 #1: ffff888081867258 (&sb->s_type->i_mutex_key#8){++++}-{3:3}, at: inode_lock include/linux/fs.h:756 [inline]
 #1: ffff888081867258 (&sb->s_type->i_mutex_key#8){++++}-{3:3}, at: ext4_buffered_write_iter+0xb0/0x460 fs/ext4/file.c:279
 #2: ffff88814b8600e0 (&type->s_umount_key#32){++++}-{3:3}, at: try_to_writeback_inodes_sb+0x21/0xc0 fs/fs-writeback.c:2684
3 locks held by syz-executor.2/14676:
 #0: ffff88814b860460 (sb_writers#4){.+.+}-{0:0}, at: get_signal+0x1c03/0x2450 kernel/signal.c:2845
 #1: ffff8880816bca38 (&sb->s_type->i_mutex_key#8){++++}-{3:3}, at: inode_lock include/linux/fs.h:756 [inline]
 #1: ffff8880816bca38 (&sb->s_type->i_mutex_key#8){++++}-{3:3}, at: ext4_buffered_write_iter+0xb0/0x460 fs/ext4/file.c:279
 #2: ffff88814b8600e0 (&type->s_umount_key#32){++++}-{3:3}, at: try_to_writeback_inodes_sb+0x21/0xc0 fs/fs-writeback.c:2684
3 locks held by syz-executor.2/14678:
 #0: ffff88814b860460 (sb_writers#4){.+.+}-{0:0}, at: get_signal+0x1c03/0x2450 kernel/signal.c:2845
 #1: ffff8880816bc030 (&sb->s_type->i_mutex_key#8){++++}-{3:3}, at: inode_lock include/linux/fs.h:756 [inline]
 #1: ffff8880816bc030 (&sb->s_type->i_mutex_key#8){++++}-{3:3}, at: ext4_buffered_write_iter+0xb0/0x460 fs/ext4/file.c:279
 #2: ffff88814b8600e0 (&type->s_umount_key#32){++++}-{3:3}, at: try_to_writeback_inodes_sb+0x21/0xc0 fs/fs-writeback.c:2684
3 locks held by syz-executor.2/14688:
 #0: ffff88814b860460 (sb_writers#4){.+.+}-{0:0}, at: get_signal+0x1c03/0x2450 kernel/signal.c:2845
 #1: ffff888030da8400 (&sb->s_type->i_mutex_key#8){++++}-{3:3}, at: inode_lock include/linux/fs.h:756 [inline]
 #1: ffff888030da8400 (&sb->s_type->i_mutex_key#8){++++}-{3:3}, at: ext4_buffered_write_iter+0xb0/0x460 fs/ext4/file.c:279
 #2: ffff88814b8600e0 (&type->s_umount_key#32){++++}-{3:3}, at: try_to_writeback_inodes_sb+0x21/0xc0 fs/fs-writeback.c:2684
3 locks held by syz-executor.2/14689:
 #0: ffff88814b860460 (sb_writers#4){.+.+}-{0:0}, at: do_unlinkat+0x183/0x640 fs/namei.c:4299
 #1: ffff888030dac030 (&type->i_mutex_dir_key#3/1){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:791 [inline]
 #1: ffff888030dac030 (&type->i_mutex_dir_key#3/1){+.+.}-{3:3}, at: do_unlinkat+0x270/0x640 fs/namei.c:4303
 #2: ffff888030da8400 (&sb->s_type->i_mutex_key#8){++++}-{3:3}, at: inode_lock include/linux/fs.h:756 [inline]
 #2: ffff888030da8400 (&sb->s_type->i_mutex_key#8){++++}-{3:3}, at: vfs_unlink+0xd9/0x930 fs/namei.c:4241
1 lock held by syz-executor.1/15835:
 #0: ffff88814b862b98 (&sbi->s_writepages_rwsem){.+.+}-{0:0}, at: do_writepages+0x1af/0x690 mm/page-writeback.c:2581
1 lock held by syz-executor.5/15840:
 #0: ffff88814b862b98 (&sbi->s_writepages_rwsem){.+.+}-{0:0}, at: do_writepages+0x1af/0x690 mm/page-writeback.c:2581
1 lock held by syz-executor.5/15862:
 #0: ffff88814b862b98 (&sbi->s_writepages_rwsem){.+.+}-{0:0}, at: do_writepages+0x1af/0x690 mm/page-writeback.c:2581
1 lock held by syz-executor.1/15865:
 #0: ffff88814b862b98 (&sbi->s_writepages_rwsem){.+.+}-{0:0}, at: do_writepages+0x1af/0x690 mm/page-writeback.c:2581
1 lock held by syz-executor.1/15869:
 #0: ffff88814b862b98 (&sbi->s_writepages_rwsem){.+.+}-{0:0}, at: do_writepages+0x1af/0x690 mm/page-writeback.c:2581
2 locks held by syz-executor.3/15895:
 #0: ffff88814b862b98 (&sbi->s_writepages_rwsem){.+.+}-{0:0}, at: do_writepages+0x1af/0x690 mm/page-writeback.c:2581
 #1: ffff88814b864990 (jbd2_handle){++++}-{0:0}, at: start_this_handle+0xfb4/0x14a0 fs/jbd2/transaction.c:461
2 locks held by syz-executor.3/15896:
 #0: ffff88814b862b98 (&sbi->s_writepages_rwsem){.+.+}-{0:0}, at: do_writepages+0x1af/0x690 mm/page-writeback.c:2581
 #1: ffff88814b864990 (jbd2_handle){++++}-{0:0}, at: start_this_handle+0xfb4/0x14a0 fs/jbd2/transaction.c:461
2 locks held by syz-executor.4/15902:
 #0: ffff88814b862b98 (&sbi->s_writepages_rwsem){.+.+}-{0:0}, at: do_writepages+0x1af/0x690 mm/page-writeback.c:2581
 #1: ffff88814b864990 (jbd2_handle){++++}-{0:0}, at: start_this_handle+0xfb4/0x14a0 fs/jbd2/transaction.c:461
2 locks held by syz-executor.5/15904:
 #0: ffff88814b862b98 (&sbi->s_writepages_rwsem){.+.+}-{0:0}, at: do_writepages+0x1af/0x690 mm/page-writeback.c:2581
 #1: ffff88814b864990 (jbd2_handle){++++}-{0:0}, at: start_this_handle+0xfb4/0x14a0 fs/jbd2/transaction.c:461
1 lock held by syz-executor.5/15907:
 #0: ffff88814b862b98 (&sbi->s_writepages_rwsem){.+.+}-{0:0}, at: do_writepages+0x1af/0x690 mm/page-writeback.c:2581
2 locks held by syz-executor.3/15909:
 #0: ffff88814b862b98 (&sbi->s_writepages_rwsem){.+.+}-{0:0}, at: do_writepages+0x1af/0x690 mm/page-writeback.c:2581
 #1: ffff88814b864990 (jbd2_handle){++++}-{0:0}, at: start_this_handle+0xfb4/0x14a0 fs/jbd2/transaction.c:461
1 lock held by syz-executor.3/15912:
 #0: ffff88814b862b98 (&sbi->s_writepages_rwsem){.+.+}-{0:0}, at: do_writepages+0x1af/0x690 mm/page-writeback.c:2581

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 28 Comm: khungtaskd Not tainted 6.2.0-rc5-syzkaller-00221-gab072681eabe #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/12/2023
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0xd1/0x138 lib/dump_stack.c:106
 nmi_cpu_backtrace.cold+0x24/0x18a lib/nmi_backtrace.c:111
 nmi_trigger_cpumask_backtrace+0x333/0x3c0 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:220 [inline]
 watchdog+0xc75/0xfc0 kernel/hung_task.c:377
 kthread+0x2e8/0x3a0 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308
 </TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 60 Comm: kworker/u4:4 Not tainted 6.2.0-rc5-syzkaller-00221-gab072681eabe #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/12/2023
Workqueue: bat_events batadv_nc_worker
RIP: 0010:__kasan_check_read+0x8/0x20 mm/kasan/shadow.c:31
Code: 0f 0b 48 83 c4 60 5b 5d 41 5c c3 48 05 00 80 00 00 48 89 fb 48 39 c7 0f 82 06 bd f2 07 eb e1 0f 1f 00 f3 0f 1e fa 48 8b 0c 24 <89> f6 31 d2 e9 4f f7 ff ff 66 66 2e 0f 1f 84 00 00 00 00 00 0f 1f
RSP: 0018:ffffc900015b79a0 EFLAGS: 00000013
RAX: 000000000000001a RBX: 00000000000006ba RCX: ffffffff8163b1ee
RDX: 1ffff110030aac4d RSI: 0000000000000008 RDI: ffffffff91339b90
RBP: 0000000000000000 R08: 0000000000000000 R09: ffffffff91339b97
R10: fffffbfff2267372 R11: 0000000000000000 R12: ffff888018556248
R13: ffff8880185557c0 R14: ffff8880185561f8 R15: 0000000000000000
FS:  0000000000000000(0000) GS:ffff8880b9900000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fd481442310 CR3: 00000000281d9000 CR4: 0000000000350ee0
Call Trace:
 <TASK>
 instrument_atomic_read include/linux/instrumented.h:72 [inline]
 _test_bit include/asm-generic/bitops/instrumented-non-atomic.h:141 [inline]
 hlock_class kernel/locking/lockdep.c:227 [inline]
 lookup_chain_cache_add kernel/locking/lockdep.c:3743 [inline]
 validate_chain kernel/locking/lockdep.c:3799 [inline]
 __lock_acquire+0x166e/0x56d0 kernel/locking/lockdep.c:5055
 lock_acquire kernel/locking/lockdep.c:5668 [inline]
 lock_acquire+0x1e3/0x630 kernel/locking/lockdep.c:5633
 __raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
 _raw_spin_lock_bh+0x33/0x40 kernel/locking/spinlock.c:178
 spin_lock_bh include/linux/spinlock.h:355 [inline]
 batadv_nc_purge_paths+0xdf/0x3a0 net/batman-adv/network-coding.c:442
 batadv_nc_worker+0x8fd/0xfa0 net/batman-adv/network-coding.c:720
 process_one_work+0x9bf/0x1710 kernel/workqueue.c:2289
 worker_thread+0x669/0x1090 kernel/workqueue.c:2436
 kthread+0x2e8/0x3a0 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308
 </TASK>

Crashes (1):
Time              Kernel    Commit        Syzkaller  Config   Manager                     Title
2023/01/30 04:13  upstream  ab072681eabe  9dfcf09c   .config  ci-upstream-kasan-gce-root  INFO: task hung in vfs_unlink
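The watchdog lines in the sample report come from the kernel's hung-task detector (khungtaskd), whose thresholds are ordinary sysctls. A short reference (standard Linux paths; the default value is stated in the kernel docs, treat anything else as an example):

```shell
# Warning threshold in seconds (120 by default; the report above fired
# after the task had been blocked for more than that).
cat /proc/sys/kernel/hung_task_timeout_secs

# As the log itself notes, writing 0 disables the message entirely:
#   echo 0 > /proc/sys/kernel/hung_task_timeout_secs

# To turn a detected hang into a panic (useful with kdump for postmortem):
#   sysctl kernel.hung_task_panic=1
```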