syzbot


INFO: task hung in do_rmdir

Status: closed as invalid on 2018/03/27 11:14
Subsystems: fs
First crash: 2212d, last: 2212d
Similar bugs (6)
Kernel | Title | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
upstream | INFO: task hung in do_rmdir (5) ext4 nilfs | - | - | - | 5 | 459d | 626d | 0/26 | auto-obsoleted due to no activity on 2023/04/25 17:21
upstream | INFO: task hung in do_rmdir (3) fs | - | - | - | 10 | 1140d | 1215d | 0/26 | auto-closed as invalid on 2021/06/01 08:24
linux-4.19 | INFO: task hung in do_rmdir | - | - | - | 1 | 643d | 643d | 0/1 | auto-obsoleted due to no activity on 2022/11/10 04:55
android-49 | INFO: task hung in do_rmdir | - | - | - | 1 | 2165d | 2165d | 0/3 | auto-closed as invalid on 2019/02/22 12:59
upstream | INFO: task hung in do_rmdir (4) fs | - | - | - | 1 | 799d | 799d | 0/26 | auto-closed as invalid on 2022/05/08 10:41
upstream | INFO: task hung in do_rmdir (2) exfat | - | - | - | 3 | 1923d | 2058d | 0/26 | closed as dup on 2018/09/11 15:01

Sample crash report:
IPVS: sync thread started: state = BACKUP, mcast_ifn = sit0, syncid = 0, id = 0
IPVS: stopping backup sync thread 14746 ...
INFO: task syz-executor0:4470 blocked for more than 120 seconds.
      Not tainted 4.16.0-rc7+ #3
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
syz-executor0   D18936  4470      1 0x00000004
Call Trace:
 context_switch kernel/sched/core.c:2862 [inline]
 __schedule+0x8fb/0x1ec0 kernel/sched/core.c:3440
 schedule+0xf5/0x430 kernel/sched/core.c:3499
 __rwsem_down_write_failed_common+0x7c0/0x1540 kernel/locking/rwsem-xadd.c:566
 rwsem_down_write_failed+0xe/0x10 kernel/locking/rwsem-xadd.c:595
 call_rwsem_down_write_failed+0x17/0x30 arch/x86/lib/rwsem.S:117
 __down_write arch/x86/include/asm/rwsem.h:142 [inline]
 down_write_nested+0xa6/0x120 kernel/locking/rwsem.c:190
 inode_lock_nested include/linux/fs.h:748 [inline]
 do_rmdir+0x380/0x5f0 fs/namei.c:3907
 SYSC_rmdir fs/namei.c:3937 [inline]
 SyS_rmdir+0x1a/0x20 fs/namei.c:3935
 do_syscall_64+0x281/0x940 arch/x86/entry/common.c:287
 entry_SYSCALL_64_after_hwframe+0x42/0xb7
RIP: 0033:0x454607
RSP: 002b:00007ffffddc8828 EFLAGS: 00000202 ORIG_RAX: 0000000000000054
RAX: ffffffffffffffda RBX: 000000000000014c RCX: 0000000000454607
RDX: 0000000000000000 RSI: 00007ffffddc9500 RDI: 00007ffffddc9520
RBP: 00007ffffddc8ed0 R08: 0000000000000000 R09: 0000000000000001
R10: 000000000000000a R11: 0000000000000202 R12: 0000000000000481
R13: 0000000000000481 R14: 0000000000000006 R15: 000000000003be4e

Showing all locks held in the system:
3 locks held by kworker/1:1/23:
 #0:  ((wq_completion)"cgroup_destroy"){+.+.}, at: [<00000000773f0553>] work_static include/linux/workqueue.h:198 [inline]
 #0:  ((wq_completion)"cgroup_destroy"){+.+.}, at: [<00000000773f0553>] set_work_data kernel/workqueue.c:619 [inline]
 #0:  ((wq_completion)"cgroup_destroy"){+.+.}, at: [<00000000773f0553>] set_work_pool_and_clear_pending kernel/workqueue.c:646 [inline]
 #0:  ((wq_completion)"cgroup_destroy"){+.+.}, at: [<00000000773f0553>] process_one_work+0xb12/0x1bb0 kernel/workqueue.c:2084
 #1:  ((work_completion)(&css->destroy_work)#2){+.+.}, at: [<00000000b831875e>] process_one_work+0xb89/0x1bb0 kernel/workqueue.c:2088
 #2:  (cgroup_mutex){+.+.}, at: [<00000000be516ac5>] css_release_work_fn+0xea/0x940 kernel/cgroup/cgroup.c:4592
2 locks held by khungtaskd/869:
 #0:  (rcu_read_lock){....}, at: [<0000000078a9839c>] check_hung_uninterruptible_tasks kernel/hung_task.c:175 [inline]
 #0:  (rcu_read_lock){....}, at: [<0000000078a9839c>] watchdog+0x1c5/0xd60 kernel/hung_task.c:249
 #1:  (tasklist_lock){.+.+}, at: [<000000005c607ab3>] debug_show_all_locks+0xd3/0x3d0 kernel/locking/lockdep.c:4470
2 locks held by rs:main Q:Reg/4308:
 #0:  (&f->f_pos_lock){+.+.}, at: [<000000004c1cbd9c>] __fdget_pos+0x12b/0x190 fs/file.c:765
 #1:  (sb_writers#4){.+.+}, at: [<00000000cf7d8e1f>] file_start_write include/linux/fs.h:2709 [inline]
 #1:  (sb_writers#4){.+.+}, at: [<00000000cf7d8e1f>] vfs_write+0x407/0x510 fs/read_write.c:543
1 lock held by rsyslogd/4310:
 #0:  (&f->f_pos_lock){+.+.}, at: [<000000004c1cbd9c>] __fdget_pos+0x12b/0x190 fs/file.c:765
2 locks held by getty/4400:
 #0:  (&tty->ldisc_sem){++++}, at: [<00000000391abe70>] ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
 #1:  (&ldata->atomic_read_lock){+.+.}, at: [<000000006f1f521a>] n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by getty/4401:
 #0:  (&tty->ldisc_sem){++++}, at: [<00000000391abe70>] ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
 #1:  (&ldata->atomic_read_lock){+.+.}, at: [<000000006f1f521a>] n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by getty/4402:
 #0:  (&tty->ldisc_sem){++++}, at: [<00000000391abe70>] ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
 #1:  (&ldata->atomic_read_lock){+.+.}, at: [<000000006f1f521a>] n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by getty/4403:
 #0:  (&tty->ldisc_sem){++++}, at: [<00000000391abe70>] ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
 #1:  (&ldata->atomic_read_lock){+.+.}, at: [<000000006f1f521a>] n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by getty/4404:
 #0:  (&tty->ldisc_sem){++++}, at: [<00000000391abe70>] ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
 #1:  (&ldata->atomic_read_lock){+.+.}, at: [<000000006f1f521a>] n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by getty/4405:
 #0:  (&tty->ldisc_sem){++++}, at: [<00000000391abe70>] ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
 #1:  (&ldata->atomic_read_lock){+.+.}, at: [<000000006f1f521a>] n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by getty/4406:
 #0:  (&tty->ldisc_sem){++++}, at: [<00000000391abe70>] ldsem_down_read+0x37/0x40 drivers/tty/tty_ldsem.c:365
 #1:  (&ldata->atomic_read_lock){+.+.}, at: [<000000006f1f521a>] n_tty_read+0x2ef/0x1a40 drivers/tty/n_tty.c:2131
2 locks held by syz-executor0/4470:
 #0:  (sb_writers#10){.+.+}, at: [<00000000c7260d6c>] sb_start_write include/linux/fs.h:1548 [inline]
 #0:  (sb_writers#10){.+.+}, at: [<00000000c7260d6c>] mnt_want_write+0x3f/0xb0 fs/namespace.c:386
 #1:  (&type->i_mutex_dir_key#3/1){+.+.}, at: [<000000000c31ff86>] inode_lock_nested include/linux/fs.h:748 [inline]
 #1:  (&type->i_mutex_dir_key#3/1){+.+.}, at: [<000000000c31ff86>] do_rmdir+0x380/0x5f0 fs/namei.c:3907
2 locks held by syz-executor3/4474:
 #0:  (sb_writers#10){.+.+}, at: [<00000000c7260d6c>] sb_start_write include/linux/fs.h:1548 [inline]
 #0:  (sb_writers#10){.+.+}, at: [<00000000c7260d6c>] mnt_want_write+0x3f/0xb0 fs/namespace.c:386
 #1:  (&type->i_mutex_dir_key#3/1){+.+.}, at: [<000000000c31ff86>] inode_lock_nested include/linux/fs.h:748 [inline]
 #1:  (&type->i_mutex_dir_key#3/1){+.+.}, at: [<000000000c31ff86>] do_rmdir+0x380/0x5f0 fs/namei.c:3907
4 locks held by syz-executor1/4476:
 #0:  (sb_writers#11){.+.+}, at: [<00000000c7260d6c>] sb_start_write include/linux/fs.h:1548 [inline]
 #0:  (sb_writers#11){.+.+}, at: [<00000000c7260d6c>] mnt_want_write+0x3f/0xb0 fs/namespace.c:386
 #1:  (&type->i_mutex_dir_key#4/1){+.+.}, at: [<0000000094e32b09>] inode_lock_nested include/linux/fs.h:748 [inline]
 #1:  (&type->i_mutex_dir_key#4/1){+.+.}, at: [<0000000094e32b09>] filename_create+0x192/0x520 fs/namei.c:3625
 #2:  (cgroup_mutex){+.+.}, at: [<00000000bfcee748>] cgroup_kn_lock_live+0x29e/0x580 kernel/cgroup/cgroup.c:1535
 #3:  (rtnl_mutex){+.+.}, at: [<00000000bcb66ee7>] rtnl_lock+0x17/0x20 net/core/rtnetlink.c:74
2 locks held by syz-executor7/4478:
 #0:  (sb_writers#10){.+.+}, at: [<00000000c7260d6c>] sb_start_write include/linux/fs.h:1548 [inline]
 #0:  (sb_writers#10){.+.+}, at: [<00000000c7260d6c>] mnt_want_write+0x3f/0xb0 fs/namespace.c:386
 #1:  (&type->i_mutex_dir_key#3/1){+.+.}, at: [<000000000c31ff86>] inode_lock_nested include/linux/fs.h:748 [inline]
 #1:  (&type->i_mutex_dir_key#3/1){+.+.}, at: [<000000000c31ff86>] do_rmdir+0x380/0x5f0 fs/namei.c:3907
4 locks held by syz-executor2/4479:
 #0:  (sb_writers#10){.+.+}, at: [<00000000c7260d6c>] sb_start_write include/linux/fs.h:1548 [inline]
 #0:  (sb_writers#10){.+.+}, at: [<00000000c7260d6c>] mnt_want_write+0x3f/0xb0 fs/namespace.c:386
 #1:  (&type->i_mutex_dir_key#3/1){+.+.}, at: [<000000000c31ff86>] inode_lock_nested include/linux/fs.h:748 [inline]
 #1:  (&type->i_mutex_dir_key#3/1){+.+.}, at: [<000000000c31ff86>] do_rmdir+0x380/0x5f0 fs/namei.c:3907
 #2:  (&type->i_mutex_dir_key#3){++++}, at: [<000000005410392e>] inode_lock include/linux/fs.h:713 [inline]
 #2:  (&type->i_mutex_dir_key#3){++++}, at: [<000000005410392e>] vfs_rmdir+0xd6/0x410 fs/namei.c:3848
 #3:  (cgroup_mutex){+.+.}, at: [<00000000bfcee748>] cgroup_kn_lock_live+0x29e/0x580 kernel/cgroup/cgroup.c:1535
3 locks held by kworker/0:7/6425:
 #0:  ((wq_completion)"cgroup_destroy"){+.+.}, at: [<00000000773f0553>] work_static include/linux/workqueue.h:198 [inline]
 #0:  ((wq_completion)"cgroup_destroy"){+.+.}, at: [<00000000773f0553>] set_work_data kernel/workqueue.c:619 [inline]
 #0:  ((wq_completion)"cgroup_destroy"){+.+.}, at: [<00000000773f0553>] set_work_pool_and_clear_pending kernel/workqueue.c:646 [inline]
 #0:  ((wq_completion)"cgroup_destroy"){+.+.}, at: [<00000000773f0553>] process_one_work+0xb12/0x1bb0 kernel/workqueue.c:2084
 #1:  ((work_completion)(&css->destroy_work)#2){+.+.}, at: [<00000000b831875e>] process_one_work+0xb89/0x1bb0 kernel/workqueue.c:2088
 #2:  (cgroup_mutex){+.+.}, at: [<00000000be516ac5>] css_release_work_fn+0xea/0x940 kernel/cgroup/cgroup.c:4592
2 locks held by syz-executor6/11587:
 #0:  (sb_writers#10){.+.+}, at: [<00000000c7260d6c>] sb_start_write include/linux/fs.h:1548 [inline]
 #0:  (sb_writers#10){.+.+}, at: [<00000000c7260d6c>] mnt_want_write+0x3f/0xb0 fs/namespace.c:386
 #1:  (&type->i_mutex_dir_key#3/1){+.+.}, at: [<000000000c31ff86>] inode_lock_nested include/linux/fs.h:748 [inline]
 #1:  (&type->i_mutex_dir_key#3/1){+.+.}, at: [<000000000c31ff86>] do_rmdir+0x380/0x5f0 fs/namei.c:3907
3 locks held by kworker/1:10/12932:
 #0:  ((wq_completion)"%s"("ipv6_addrconf")){+.+.}, at: [<00000000773f0553>] work_static include/linux/workqueue.h:198 [inline]
 #0:  ((wq_completion)"%s"("ipv6_addrconf")){+.+.}, at: [<00000000773f0553>] set_work_data kernel/workqueue.c:619 [inline]
 #0:  ((wq_completion)"%s"("ipv6_addrconf")){+.+.}, at: [<00000000773f0553>] set_work_pool_and_clear_pending kernel/workqueue.c:646 [inline]
 #0:  ((wq_completion)"%s"("ipv6_addrconf")){+.+.}, at: [<00000000773f0553>] process_one_work+0xb12/0x1bb0 kernel/workqueue.c:2084
 #1:  ((addr_chk_work).work){+.+.}, at: [<00000000b831875e>] process_one_work+0xb89/0x1bb0 kernel/workqueue.c:2088
 #2:  (rtnl_mutex){+.+.}, at: [<00000000bcb66ee7>] rtnl_lock+0x17/0x20 net/core/rtnetlink.c:74
1 lock held by syz-executor4/14742:
 #0:  (ipvs->sync_mutex){+.+.}, at: [<0000000064c35885>] do_ip_vs_set_ctl+0x277/0x1cc0 net/netfilter/ipvs/ip_vs_ctl.c:2393
2 locks held by syz-executor4/14749:
 #0:  (rtnl_mutex){+.+.}, at: [<00000000bcb66ee7>] rtnl_lock+0x17/0x20 net/core/rtnetlink.c:74
 #1:  (ipvs->sync_mutex){+.+.}, at: [<000000007431ff40>] do_ip_vs_set_ctl+0x10f8/0x1cc0 net/netfilter/ipvs/ip_vs_ctl.c:2388
1 lock held by syz-executor5/14752:
 #0:  (rtnl_mutex){+.+.}, at: [<0000000073a8432d>] rtnl_lock net/core/rtnetlink.c:74 [inline]
 #0:  (rtnl_mutex){+.+.}, at: [<0000000073a8432d>] rtnetlink_rcv_msg+0x508/0xb10 net/core/rtnetlink.c:4632
1 lock held by ipvs-b:6:0/14746:
 #0:  (rtnl_mutex){+.+.}, at: [<00000000bcb66ee7>] rtnl_lock+0x17/0x20 net/core/rtnetlink.c:74

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 869 Comm: khungtaskd Not tainted 4.16.0-rc7+ #3
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:17 [inline]
 dump_stack+0x194/0x24d lib/dump_stack.c:53
 nmi_cpu_backtrace+0x1d2/0x210 lib/nmi_backtrace.c:103
 nmi_trigger_cpumask_backtrace+0x123/0x180 lib/nmi_backtrace.c:62
 arch_trigger_cpumask_backtrace+0x14/0x20 arch/x86/kernel/apic/hw_nmi.c:38
 trigger_all_cpu_backtrace include/linux/nmi.h:138 [inline]
 check_hung_task kernel/hung_task.c:132 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:190 [inline]
 watchdog+0x90c/0xd60 kernel/hung_task.c:249
 kthread+0x33c/0x400 kernel/kthread.c:238
 ret_from_fork+0x3a/0x50 arch/x86/entry/entry_64.S:406
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0 skipped: idling at native_safe_halt+0x6/0x10 arch/x86/include/asm/irqflags.h:54
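
Note: the blocked task above is stuck inside the rmdir(2) system call (ORIG_RAX 0x54 is syscall 84, rmdir, on x86_64). Its trace shows do_rmdir() waiting in down_write() on the parent directory's inode rwsem taken via inode_lock_nested(), and the lock dump shows it already holds sb_writers via mnt_want_write(). As a purely illustrative sketch, the user-space side of the operation in flight looks like the snippet below; the path is hypothetical and this is not a reproducer (syzbot found none for this report).

/* Illustrative sketch only: the system call the hung syz-executor task
 * was executing.  The directory path is hypothetical; this is NOT the
 * syzkaller reproducer (none exists for this report). */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* rmdir(2) enters do_rmdir(), which takes sb_writers via
	 * mnt_want_write() and the parent directory's inode rwsem via
	 * inode_lock_nested() before calling vfs_rmdir().  If that rwsem
	 * is never released by its current holder, this call sits in D
	 * state until the hung-task watchdog fires (the timeout is
	 * controlled by /proc/sys/kernel/hung_task_timeout_secs). */
	if (rmdir("/tmp/hypothetical-test-dir") != 0)
		perror("rmdir");
	return 0;
}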

Crashes (1):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2018/03/27 09:23 | upstream | 3eb2ce825ea1 | 0ca7878b | .config | console log | report | - | - | - | - | ci-upstream-kasan-gce-root | -
* Struck through repros no longer work on HEAD.