syzbot

INFO: task hung in lock_two_directories

Status: auto-obsoleted due to no activity on 2023/09/18 16:57
Subsystems: ext4 overlayfs
First crash: 633d, last: 588d
Similar bugs (1)
Kernel Title Repro Cause bisect Fix bisect Count Last Reported Patched Status
upstream INFO: task hung in lock_two_directories (2) kernfs 10 134d 282d 0/28 auto-obsoleted due to no activity on 2024/12/15 06:45

Sample crash report:
INFO: task syz-executor.3:18957 blocked for more than 143 seconds.
      Not tainted 6.4.0-rc7-syzkaller-00014-g692b7dc87ca6 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.3  state:D stack:26912 pid:18957 ppid:5034   flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5343 [inline]
 __schedule+0x187b/0x4900 kernel/sched/core.c:6669
 schedule+0xc3/0x180 kernel/sched/core.c:6745
 schedule_preempt_disabled+0x13/0x20 kernel/sched/core.c:6804
 rwsem_down_write_slowpath+0xedd/0x13a0 kernel/locking/rwsem.c:1178
 __down_write_common+0x1aa/0x200 kernel/locking/rwsem.c:1306
 inode_lock_nested include/linux/fs.h:810 [inline]
 lock_two_directories+0x116/0x120 fs/namei.c:3032
 ovl_workdir_ok fs/overlayfs/super.c:981 [inline]
 ovl_get_workdir+0x211/0x17c0 fs/overlayfs/super.c:1416
 ovl_fill_super+0x1c64/0x2bd0 fs/overlayfs/super.c:1992
 mount_nodev+0x56/0xe0 fs/super.c:1426
 legacy_get_tree+0xef/0x190 fs/fs_context.c:610
 vfs_get_tree+0x8c/0x270 fs/super.c:1510
 do_new_mount+0x28f/0xae0 fs/namespace.c:3039
 do_mount fs/namespace.c:3382 [inline]
 __do_sys_mount fs/namespace.c:3591 [inline]
 __se_sys_mount+0x2d9/0x3c0 fs/namespace.c:3568
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f929708c389
RSP: 002b:00007f9297ebf168 EFLAGS: 00000246 ORIG_RAX: 00000000000000a5
RAX: ffffffffffffffda RBX: 00007f92971abf80 RCX: 00007f929708c389
RDX: 0000000020000080 RSI: 00000000200000c0 RDI: 0000000000000000
RBP: 00007f92970d7493 R08: 00000000200002c0 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007ffc92277bff R14: 00007f9297ebf300 R15: 0000000000022000
 </TASK>
INFO: task syz-executor.3:18960 blocked for more than 143 seconds.
      Not tainted 6.4.0-rc7-syzkaller-00014-g692b7dc87ca6 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.3  state:D stack:26848 pid:18960 ppid:5034   flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5343 [inline]
 __schedule+0x187b/0x4900 kernel/sched/core.c:6669
 schedule+0xc3/0x180 kernel/sched/core.c:6745
 schedule_preempt_disabled+0x13/0x20 kernel/sched/core.c:6804
 rwsem_down_write_slowpath+0xedd/0x13a0 kernel/locking/rwsem.c:1178
 __down_write_common+0x1aa/0x200 kernel/locking/rwsem.c:1306
 inode_lock_nested include/linux/fs.h:810 [inline]
 ext4_rename fs/ext4/namei.c:3842 [inline]
 ext4_rename2+0x106b/0x4400 fs/ext4/namei.c:4221
 vfs_rename+0xb1b/0xfa0 fs/namei.c:4849
 do_renameat2+0xd78/0x1660 fs/namei.c:5002
 __do_sys_rename fs/namei.c:5048 [inline]
 __se_sys_rename fs/namei.c:5046 [inline]
 __x64_sys_rename+0x86/0x90 fs/namei.c:5046
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f929708c389
RSP: 002b:00007f9297e9e168 EFLAGS: 00000246 ORIG_RAX: 0000000000000052
RAX: ffffffffffffffda RBX: 00007f92971ac050 RCX: 00007f929708c389
RDX: 0000000000000000 RSI: 0000000020000040 RDI: 0000000020000000
RBP: 00007f92970d7493 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007ffc92277bff R14: 00007f9297e9e300 R15: 0000000000022000
 </TASK>
INFO: task syz-executor.3:18961 blocked for more than 144 seconds.
      Not tainted 6.4.0-rc7-syzkaller-00014-g692b7dc87ca6 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.3  state:D stack:26912 pid:18961 ppid:5034   flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5343 [inline]
 __schedule+0x187b/0x4900 kernel/sched/core.c:6669
 schedule+0xc3/0x180 kernel/sched/core.c:6745
 schedule_preempt_disabled+0x13/0x20 kernel/sched/core.c:6804
 rwsem_down_write_slowpath+0xedd/0x13a0 kernel/locking/rwsem.c:1178
 __down_write_common+0x1aa/0x200 kernel/locking/rwsem.c:1306
 inode_lock_nested include/linux/fs.h:810 [inline]
 filename_create+0x260/0x530 fs/namei.c:3884
 do_mkdirat+0xb7/0x520 fs/namei.c:4130
 __do_sys_mkdirat fs/namei.c:4153 [inline]
 __se_sys_mkdirat fs/namei.c:4151 [inline]
 __x64_sys_mkdirat+0x89/0xa0 fs/namei.c:4151
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f929708b297
RSP: 002b:00007f9297e7cf88 EFLAGS: 00000246 ORIG_RAX: 0000000000000102
RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f929708b297
RDX: 00000000000001ff RSI: 00000000200007c0 RDI: 00000000ffffff9c
RBP: 0000000020000000 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00000000200007c0 R14: 00007f9297e7cfe0 R15: 0000000000000000
 </TASK>
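The three stacks above correspond to three syscalls racing on the same ext4 filesystem: an overlayfs mount (pid 18957, blocked in lock_two_directories() from the workdir check in ovl_get_workdir()), a cross-directory rename (pid 18960, blocked in ext4_rename()), and a mkdirat (pid 18961, blocked in filename_create()). syzbot lists no reproducer for this bug, so the sketch below only mirrors that combination of operations; the directory layout under /tmp/ovl, the specific paths and the overlayfs mount options are assumptions for illustration, not the fuzzer's actual arguments.

/*
 * Hypothetical sketch of the three racing operations (build with -pthread,
 * run as root). Paths and mount options are illustrative assumptions only.
 */
#define _GNU_SOURCE
#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <sys/mount.h>
#include <sys/stat.h>

static void *do_ovl_mount(void *arg)   /* mirrors pid 18957: mount(2) of overlayfs */
{
        mount("overlay", "/tmp/ovl/merged", "overlay", 0,
              "lowerdir=/tmp/ovl/lower,upperdir=/tmp/ovl/upper,workdir=/tmp/ovl/work");
        return NULL;
}

static void *do_rename(void *arg)      /* mirrors pid 18960: rename(2) across directories */
{
        rename("/tmp/ovl/upper/a", "/tmp/ovl/work/a");
        return NULL;
}

static void *do_mkdir(void *arg)       /* mirrors pid 18961: mkdirat(2) with mode 0777 */
{
        mkdirat(AT_FDCWD, "/tmp/ovl/upper/d", 0777);
        return NULL;
}

int main(void)
{
        pthread_t t[3];

        pthread_create(&t[0], NULL, do_ovl_mount, NULL);
        pthread_create(&t[1], NULL, do_rename, NULL);
        pthread_create(&t[2], NULL, do_mkdir, NULL);
        for (int i = 0; i < 3; i++)
                pthread_join(t[i], NULL);
        return 0;
}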

Showing all locks held in the system:
1 lock held by rcu_tasks_kthre/13:
 #0: ffffffff8cf276f0 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x29/0xd20 kernel/rcu/tasks.h:518
1 lock held by rcu_tasks_trace/14:
 #0: ffffffff8cf27ab0 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x29/0xd20 kernel/rcu/tasks.h:518
1 lock held by khungtaskd/28:
 #0: ffffffff8cf27520 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x0/0x30
2 locks held by getty/4750:
 #0: ffff88802881b098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc900015a02f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6ab/0x1db0 drivers/tty/n_tty.c:2176
2 locks held by syz-fuzzer/5015:
 #0: ffff88802c8205e8 (&f->f_pos_lock){+.+.}-{3:3}, at: __fdget_pos+0x254/0x2f0 fs/file.c:1047
 #1: ffff88808265de00 (&type->i_mutex_dir_key#3){++++}-{3:3}, at: iterate_dir+0x10e/0x570 fs/readdir.c:55
4 locks held by syz-executor.3/18957:
 #0: ffff8880393ec0e0 (&type->s_umount_key#58/1){+.+.}-{3:3}, at: alloc_super+0x217/0x930 fs/super.c:228
 #1: ffff88814c5a6748 (&type->s_vfs_rename_key#4){+.+.}-{3:3}, at: lock_rename+0x52/0xa0 fs/namei.c:3046
 #2: ffff88803c904000 (&type->i_mutex_dir_key#3/1){+.+.}-{3:3}, at: lock_two_directories+0xc3/0x120
 #3: ffff88808180de00 (&type->i_mutex_dir_key#3/5){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:810 [inline]
 #3: ffff88808180de00 (&type->i_mutex_dir_key#3/5){+.+.}-{3:3}, at: lock_two_directories+0x116/0x120 fs/namei.c:3032
4 locks held by syz-executor.3/18960:
 #0: ffff88814c5a6460 (sb_writers#4){.+.+}-{0:0}, at: mnt_want_write+0x3f/0x90 fs/namespace.c:394
 #1: ffff88808265de00 (&type->i_mutex_dir_key#3/1){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:810 [inline]
 #1: ffff88808265de00 (&type->i_mutex_dir_key#3/1){+.+.}-{3:3}, at: lock_rename fs/namei.c:3042 [inline]
 #1: ffff88808265de00 (&type->i_mutex_dir_key#3/1){+.+.}-{3:3}, at: do_renameat2+0x615/0x1660 fs/namei.c:4941
 #2: ffff88808180de00 (&type->i_mutex_dir_key#3){++++}-{3:3}, at: inode_lock include/linux/fs.h:775 [inline]
 #2: ffff88808180de00 (&type->i_mutex_dir_key#3){++++}-{3:3}, at: vfs_rename+0x617/0xfa0 fs/namei.c:4821
 #3: ffff88803c904000 (&type->i_mutex_dir_key#3/4){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:810 [inline]
 #3: ffff88803c904000 (&type->i_mutex_dir_key#3/4){+.+.}-{3:3}, at: ext4_rename fs/ext4/namei.c:3842 [inline]
 #3: ffff88803c904000 (&type->i_mutex_dir_key#3/4){+.+.}-{3:3}, at: ext4_rename2+0x106b/0x4400 fs/ext4/namei.c:4221
2 locks held by syz-executor.3/18961:
 #0: ffff88814c5a6460 (sb_writers#4){.+.+}-{0:0}, at: mnt_want_write+0x3f/0x90 fs/namespace.c:394
 #1: ffff88808265de00 (&type->i_mutex_dir_key#3/1){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:810 [inline]
 #1: ffff88808265de00 (&type->i_mutex_dir_key#3/1){+.+.}-{3:3}, at: filename_create+0x260/0x530 fs/namei.c:3884

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 28 Comm: khungtaskd Not tainted 6.4.0-rc7-syzkaller-00014-g692b7dc87ca6 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/27/2023
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x1e7/0x2d0 lib/dump_stack.c:106
 nmi_cpu_backtrace+0x498/0x4d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x187/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:222 [inline]
 watchdog+0xec2/0xf00 kernel/hung_task.c:379
 kthread+0x2b8/0x350 kernel/kthread.c:379
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308
 </TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 6493 Comm: kworker/u4:8 Not tainted 6.4.0-rc7-syzkaller-00014-g692b7dc87ca6 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/27/2023
Workqueue: bat_events batadv_nc_worker
RIP: 0010:hlock_class kernel/locking/lockdep.c:228 [inline]
RIP: 0010:check_wait_context kernel/locking/lockdep.c:4751 [inline]
RIP: 0010:__lock_acquire+0x4ba/0x2070 kernel/locking/lockdep.c:5038
Code: 13 00 00 44 89 33 44 89 e3 48 89 d8 48 c1 e8 06 48 8d 3c c5 60 b2 32 90 be 08 00 00 00 e8 be 3a 78 00 48 0f a3 1d 46 93 c7 0e <73> 20 48 8d 04 5b 48 c1 e0 06 48 8d 98 60 11 02 90 48 ba 00 00 00
RSP: 0018:ffffc90016ad78e8 EFLAGS: 00000057
RAX: 0000000000000001 RBX: 00000000000006dc RCX: ffffffff816b1f12
RDX: 0000000000000000 RSI: 0000000000000008 RDI: ffffffff9032b338
RBP: 000000000000000a R08: dffffc0000000000 R09: fffffbfff2065668
R10: 0000000000000000 R11: dffffc0000000001 R12: 00000000000006dc
R13: 1ffff11004009155 R14: 0000000000000000 R15: ffff888020048b38
FS:  0000000000000000(0000) GS:ffff8880b9900000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000561b90f64258 CR3: 000000000cd30000 CR4: 00000000003506e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <NMI>
 </NMI>
 <TASK>
 lock_acquire+0x1e3/0x520 kernel/locking/lockdep.c:5705
 __raw_spin_lock_bh include/linux/spinlock_api_smp.h:126 [inline]
 _raw_spin_lock_bh+0x35/0x50 kernel/locking/spinlock.c:178
 spin_lock_bh include/linux/spinlock.h:355 [inline]
 batadv_nc_purge_paths+0xe8/0x3a0 net/batman-adv/network-coding.c:442
 batadv_nc_worker+0x2d3/0x5c0 net/batman-adv/network-coding.c:720
 process_one_work+0x8a0/0x10e0 kernel/workqueue.c:2405
 worker_thread+0xa63/0x1210 kernel/workqueue.c:2552
 kthread+0x2b8/0x350 kernel/kthread.c:379
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308
 </TASK>
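Reading the held-locks listing together with the stacks: pid 18957 holds the directory inode lock ffff88803c904000 (taken first in lock_two_directories()) and is waiting for ffff88808180de00, while pid 18960 already holds ffff88808180de00 (locked in vfs_rename()) and is waiting for ffff88803c904000 in ext4_rename(); pid 18961 is simply queued behind 18960 on ffff88808265de00. That is the classic AB-BA ordering between two rw-semaphores. The minimal userspace sketch below illustrates the generic pattern with POSIX rwlocks standing in for the kernel's per-inode i_rwsem; it is not kernel code, and it only hangs when the two threads interleave the way the traces above did.

/*
 * Illustrative AB-BA inversion with two rwlocks (build with -pthread).
 * Thread names and lock roles are assumptions mapped onto the traces above.
 */
#include <pthread.h>

static pthread_rwlock_t lock_a = PTHREAD_RWLOCK_INITIALIZER; /* ~ ffff88803c904000 */
static pthread_rwlock_t lock_b = PTHREAD_RWLOCK_INITIALIZER; /* ~ ffff88808180de00 */

static void *mount_like(void *arg)    /* pid 18957: lock_two_directories() */
{
        pthread_rwlock_wrlock(&lock_a);   /* first directory */
        pthread_rwlock_wrlock(&lock_b);   /* second directory: blocks if rename_like() got here first */
        pthread_rwlock_unlock(&lock_b);
        pthread_rwlock_unlock(&lock_a);
        return NULL;
}

static void *rename_like(void *arg)   /* pid 18960: vfs_rename() -> ext4_rename() */
{
        pthread_rwlock_wrlock(&lock_b);   /* target inode */
        pthread_rwlock_wrlock(&lock_a);   /* source directory: blocks if mount_like() got here first */
        pthread_rwlock_unlock(&lock_a);
        pthread_rwlock_unlock(&lock_b);
        return NULL;
}

int main(void)
{
        pthread_t t1, t2;

        pthread_create(&t1, NULL, mount_like, NULL);
        pthread_create(&t2, NULL, rename_like, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
}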

Crashes (7):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2023/06/20 16:52 upstream 692b7dc87ca6 09ffe269 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in lock_two_directories
2023/06/12 15:46 upstream 858fd168a95c aaed0183 .config console log report info ci2-upstream-fs INFO: task hung in lock_two_directories
2023/05/25 04:01 upstream 9d646009f65d 4bce1a3e .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in lock_two_directories
2023/05/16 18:39 upstream f1fcbaa18b28 11c89444 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in lock_two_directories
2023/05/14 05:20 upstream d4d58949a6ea 2b9ba477 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in lock_two_directories
2023/05/07 02:31 upstream fc4354c6e5c2 90c93c40 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in lock_two_directories
2023/05/18 02:15 linux-next 715abedee4cd 3bb7af1d .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in lock_two_directories