syzbot


INFO: task hung in ceph_monc_stop

Status: auto-obsoleted due to no activity on 2024/09/20 08:04
Subsystems: ceph net
Reported-by: syzbot+388fe6c0b08b54d6d8f9@syzkaller.appspotmail.com
First crash: 328d, last: 325d
Cause bisection: failed (error log, bisect log)
  
Discussions (1)
Title                                                     | Replies (including bot) | Last reply
[syzbot] [ceph?] [net?] INFO: task hung in ceph_monc_stop | 0 (1)                   | 2024/02/07 02:53
Last patch testing requests (4)
Created          | Duration | User         | Patch | Repo       | Result
2024/09/20 05:56 | 2h07m    | retest repro |       | linux-next | OK log
2024/07/12 05:43 | 7m       | retest repro |       | linux-next | error
2024/04/29 13:13 | 37m      | retest repro |       | linux-next | error
2024/02/19 07:09 | 22m      | retest repro |       | linux-next | error

Sample crash report:
INFO: task syz-executor.2:9262 blocked for more than 143 seconds.
      Not tainted 6.8.0-rc2-next-20240202-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.2  state:D stack:27024 pid:9262  tgid:9261  ppid:5098   flags:0x00004006
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5400 [inline]
 __schedule+0x17df/0x4a40 kernel/sched/core.c:6727
 __schedule_loop kernel/sched/core.c:6804 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6819
 schedule_timeout+0xb0/0x310 kernel/time/timer.c:2159
 do_wait_for_common kernel/sched/completion.c:95 [inline]
 __wait_for_common kernel/sched/completion.c:116 [inline]
 wait_for_common kernel/sched/completion.c:127 [inline]
 wait_for_completion+0x355/0x620 kernel/sched/completion.c:148
 __flush_workqueue+0x730/0x1630 kernel/workqueue.c:3617
 ceph_monc_stop+0x7c/0x1e0 net/ceph/mon_client.c:1248
 ceph_destroy_client+0x74/0x130 net/ceph/ceph_common.c:768
 destroy_fs_client+0x192/0x270 fs/ceph/super.c:899
 deactivate_locked_super+0xc4/0x130 fs/super.c:477
 ceph_get_tree+0x9a9/0x17b0 fs/ceph/super.c:1361
 vfs_get_tree+0x90/0x2a0 fs/super.c:1784
 vfs_cmd_create+0xe4/0x230 fs/fsopen.c:230
 __do_sys_fsconfig fs/fsopen.c:476 [inline]
 __se_sys_fsconfig+0x967/0xec0 fs/fsopen.c:349
 do_syscall_64+0xfb/0x240
 entry_SYSCALL_64_after_hwframe+0x6d/0x75
RIP: 0033:0x7fd4f087dda9
RSP: 002b:00007fd4f164c0c8 EFLAGS: 00000246 ORIG_RAX: 00000000000001af
RAX: ffffffffffffffda RBX: 00007fd4f09abf80 RCX: 00007fd4f087dda9
RDX: 0000000000000000 RSI: 0000000000000006 RDI: 0000000000000003
RBP: 00007fd4f08ca47a R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000000b R14: 00007fd4f09abf80 R15: 00007fff1186bbe8
 </TASK>
INFO: task syz-executor.3:9303 blocked for more than 143 seconds.
      Not tainted 6.8.0-rc2-next-20240202-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.3  state:D stack:26096 pid:9303  tgid:9301  ppid:5096   flags:0x00004006
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5400 [inline]
 __schedule+0x17df/0x4a40 kernel/sched/core.c:6727
 __schedule_loop kernel/sched/core.c:6804 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6819
 schedule_timeout+0xb0/0x310 kernel/time/timer.c:2159
 do_wait_for_common kernel/sched/completion.c:95 [inline]
 __wait_for_common kernel/sched/completion.c:116 [inline]
 wait_for_common kernel/sched/completion.c:127 [inline]
 wait_for_completion+0x355/0x620 kernel/sched/completion.c:148
 __flush_workqueue+0x730/0x1630 kernel/workqueue.c:3617
 ceph_monc_stop+0x7c/0x1e0 net/ceph/mon_client.c:1248
 ceph_destroy_client+0x74/0x130 net/ceph/ceph_common.c:768
 destroy_fs_client+0x192/0x270 fs/ceph/super.c:899
 deactivate_locked_super+0xc4/0x130 fs/super.c:477
 ceph_get_tree+0x9a9/0x17b0 fs/ceph/super.c:1361
 vfs_get_tree+0x90/0x2a0 fs/super.c:1784
 vfs_cmd_create+0xe4/0x230 fs/fsopen.c:230
 __do_sys_fsconfig fs/fsopen.c:476 [inline]
 __se_sys_fsconfig+0x967/0xec0 fs/fsopen.c:349
 do_syscall_64+0xfb/0x240
 entry_SYSCALL_64_after_hwframe+0x6d/0x75
RIP: 0033:0x7fd3e5e7dda9
RSP: 002b:00007fd3e6c660c8 EFLAGS: 00000246 ORIG_RAX: 00000000000001af
RAX: ffffffffffffffda RBX: 00007fd3e5fabf80 RCX: 00007fd3e5e7dda9
RDX: 0000000000000000 RSI: 0000000000000006 RDI: 0000000000000003
RBP: 00007fd3e5eca47a R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000000b R14: 00007fd3e5fabf80 R15: 00007fffe47c4b48
 </TASK>

Showing all locks held in the system:
1 lock held by khungtaskd/29:
 #0: ffffffff8e130d60 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:298 [inline]
 #0: ffffffff8e130d60 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:750 [inline]
 #0: ffffffff8e130d60 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x55/0x2a0 kernel/locking/lockdep.c:6614
2 locks held by kworker/u4:2/34:
1 lock held by klogd/4510:
2 locks held by getty/4817:
 #0: ffff88802a6b60a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc900031432f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6b5/0x1e10 drivers/tty/n_tty.c:2201
1 lock held by syz-executor.2/9262:
 #0: ffff8880238e9070 (&fc->uapi_mutex){+.+.}-{3:3}, at: __do_sys_fsconfig fs/fsopen.c:474 [inline]
 #0: ffff8880238e9070 (&fc->uapi_mutex){+.+.}-{3:3}, at: __se_sys_fsconfig+0x8e6/0xec0 fs/fsopen.c:349
1 lock held by syz-executor.3/9303:
 #0: ffff88807db20870 (&fc->uapi_mutex){+.+.}-{3:3}, at: __do_sys_fsconfig fs/fsopen.c:474 [inline]
 #0: ffff88807db20870 (&fc->uapi_mutex){+.+.}-{3:3}, at: __se_sys_fsconfig+0x8e6/0xec0 fs/fsopen.c:349
2 locks held by syz-executor.2/9961:
 #0: ffff88802dbc7870 (&fc->uapi_mutex){+.+.}-{3:3}, at: __do_sys_fsconfig fs/fsopen.c:474 [inline]
 #0: ffff88802dbc7870 (&fc->uapi_mutex){+.+.}-{3:3}, at: __se_sys_fsconfig+0x8e6/0xec0 fs/fsopen.c:349
 #1: ffff888086ed20e0 (&type->s_umount_key#76/1){+.+.}-{3:3}, at: alloc_super+0x20e/0x8f0 fs/super.c:345
2 locks held by syz-executor.3/10025:
 #0: ffff88807f54bc70 (&fc->uapi_mutex){+.+.}-{3:3}, at: __do_sys_fsconfig fs/fsopen.c:474 [inline]
 #0: ffff88807f54bc70 (&fc->uapi_mutex){+.+.}-{3:3}, at: __se_sys_fsconfig+0x8e6/0xec0 fs/fsopen.c:349
 #1: ffff8880909d40e0 (&type->s_umount_key#76/1){+.+.}-{3:3}, at: alloc_super+0x20e/0x8f0 fs/super.c:345
2 locks held by syz-executor.2/10618:
 #0: ffff8880821d7870 (&fc->uapi_mutex){+.+.}-{3:3}, at: __do_sys_fsconfig fs/fsopen.c:474 [inline]
 #0: ffff8880821d7870 (&fc->uapi_mutex){+.+.}-{3:3}, at: __se_sys_fsconfig+0x8e6/0xec0 fs/fsopen.c:349
 #1: ffff88802e9ea0e0 (&type->s_umount_key#76/1){+.+.}-{3:3}, at: alloc_super+0x20e/0x8f0 fs/super.c:345
2 locks held by syz-executor.3/10697:
 #0: ffff88802949b870 (&fc->uapi_mutex){+.+.}-{3:3}, at: __do_sys_fsconfig fs/fsopen.c:474 [inline]
 #0: ffff88802949b870 (&fc->uapi_mutex){+.+.}-{3:3}, at: __se_sys_fsconfig+0x8e6/0xec0 fs/fsopen.c:349
 #1: ffff88808ff400e0 (&type->s_umount_key#76/1){+.+.}-{3:3}, at: alloc_super+0x20e/0x8f0 fs/super.c:345
2 locks held by syz-executor.0/11014:
 #0: ffff888029e10c70 (&fc->uapi_mutex){+.+.}-{3:3}, at: __do_sys_fsconfig fs/fsopen.c:474 [inline]
 #0: ffff888029e10c70 (&fc->uapi_mutex){+.+.}-{3:3}, at: __se_sys_fsconfig+0x8e6/0xec0 fs/fsopen.c:349
 #1: ffff888087a800e0 (&type->s_umount_key#76/1){+.+.}-{3:3}, at: alloc_super+0x20e/0x8f0 fs/super.c:345

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 29 Comm: khungtaskd Not tainted 6.8.0-rc2-next-20240202-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/25/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x1e7/0x2e0 lib/dump_stack.c:106
 nmi_cpu_backtrace+0x49c/0x4d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x198/0x320 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:222 [inline]
 watchdog+0xfb0/0xff0 kernel/hung_task.c:379
 kthread+0x2f0/0x390 kernel/kthread.c:388
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:242
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 34 Comm: kworker/u4:2 Not tainted 6.8.0-rc2-next-20240202-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/25/2024
Workqueue: bat_events batadv_mcast_mla_update
RIP: 0010:rcu_is_watching+0x1c/0xb0 kernel/rcu/tree.c:700
Code: 90 90 90 90 90 90 90 90 90 90 90 90 90 90 f3 0f 1e fa 41 57 41 56 53 65 ff 05 38 92 88 7e e8 4b bc f1 09 89 c3 83 f8 08 73 7a <49> bf 00 00 00 00 00 fc ff df 4c 8d 34 dd 40 99 ae 8d 4c 89 f0 48
RSP: 0018:ffffc90000aa7880 EFLAGS: 00000293
RAX: 0000000000000000 RBX: 0000000000000000 RCX: ffffffff81719740
RDX: 0000000000000000 RSI: ffffffff8bfe7a40 RDI: ffffffff8bfe7a00
RBP: ffffc90000aa79e0 R08: ffffffff8f85b72f R09: 1ffffffff1f0b6e5
R10: dffffc0000000000 R11: fffffbfff1f0b6e6 R12: 1ffff92000154f20
R13: ffffffff8b389108 R14: ffff888080191630 R15: dffffc0000000000
FS:  0000000000000000(0000) GS:ffff8880b9400000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f1f39b7b208 CR3: 000000000df32000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <NMI>
 </NMI>
 <TASK>
 trace_lock_release include/trace/events/lock.h:69 [inline]
 lock_release+0xbf/0x9d0 kernel/locking/lockdep.c:5765
 __raw_spin_unlock include/linux/spinlock_api_smp.h:141 [inline]
 _raw_spin_unlock+0x16/0x50 kernel/locking/spinlock.c:186
 spin_unlock include/linux/spinlock.h:391 [inline]
 __batadv_mcast_mla_update net/batman-adv/multicast.c:924 [inline]
 batadv_mcast_mla_update+0x3a18/0x4030 net/batman-adv/multicast.c:949
 process_one_work kernel/workqueue.c:3049 [inline]
 process_scheduled_works+0x913/0x14f0 kernel/workqueue.c:3125
 worker_thread+0xa60/0x1000 kernel/workqueue.c:3206
 kthread+0x2f0/0x390 kernel/kthread.c:388
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:242
 </TASK>
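
Both hung tasks entered the kernel through fsconfig(2): ORIG_RAX in the register dumps is 0x1af (syscall 431, fsconfig) and RSI is 6 (FSCONFIG_CMD_CREATE), and the lock listing shows each task holding its filesystem context's fc->uapi_mutex from __do_sys_fsconfig. The sketch below is a minimal illustration of that userspace path, assuming libc and kernel headers new enough to define SYS_fsopen, SYS_fsconfig, and the FSCONFIG_* constants (Linux 5.2+); it is not the syzkaller reproducer, and the ceph mount parameters are illustrative placeholders.

/* Hypothetical sketch of the syscall path seen in the traces above. */
#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>
#include <linux/mount.h>   /* FSCONFIG_SET_STRING, FSCONFIG_CMD_CREATE */

int main(void)
{
	/* fsopen(2) returns a filesystem context fd for the "ceph" fs type. */
	int fsfd = syscall(SYS_fsopen, "ceph", 0);
	if (fsfd < 0) {
		perror("fsopen");
		return 1;
	}

	/* Placeholder mount parameter; the real reproducer's values are not
	 * shown in this report. */
	syscall(SYS_fsconfig, fsfd, FSCONFIG_SET_STRING, "source",
		"192.168.0.1:6789:/", 0);

	/* FSCONFIG_CMD_CREATE (== 6, matching RSI above) runs with
	 * fc->uapi_mutex held and ends up in vfs_get_tree(); per the traces,
	 * a ceph_get_tree() failure then tears the client down via
	 * deactivate_locked_super() -> ceph_destroy_client()
	 * -> ceph_monc_stop(), where the workqueue flush never completes. */
	if (syscall(SYS_fsconfig, fsfd, FSCONFIG_CMD_CREATE, NULL, NULL, 0) < 0)
		perror("fsconfig(FSCONFIG_CMD_CREATE)");

	close(fsfd);
	return 0;
}

The five additional syz-executor tasks in the lock listing (9961, 10025, 10618, 10697, 11014) are later fsconfig(FSCONFIG_CMD_CREATE) callers of the same shape, each holding its own fc->uapi_mutex plus a freshly allocated superblock's s_umount.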

Crashes (32):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2024/02/05 02:03 linux-next 076d56d74f17 a67b2c42 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in ceph_monc_stop
2024/02/05 01:44 linux-next 076d56d74f17 a67b2c42 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in ceph_monc_stop
2024/02/05 01:34 linux-next 076d56d74f17 a67b2c42 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in ceph_monc_stop
2024/02/05 01:22 linux-next 076d56d74f17 a67b2c42 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in ceph_monc_stop
2024/02/04 20:54 linux-next 076d56d74f17 a67b2c42 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in ceph_monc_stop
2024/02/04 20:31 linux-next 076d56d74f17 a67b2c42 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in ceph_monc_stop
2024/02/04 13:13 linux-next 076d56d74f17 a67b2c42 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in ceph_monc_stop
2024/02/03 19:40 linux-next 076d56d74f17 a67b2c42 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in ceph_monc_stop
2024/02/03 19:06 linux-next 076d56d74f17 a67b2c42 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in ceph_monc_stop
2024/02/03 18:47 linux-next 076d56d74f17 a67b2c42 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in ceph_monc_stop
2024/02/03 18:25 linux-next 076d56d74f17 a67b2c42 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in ceph_monc_stop
2024/02/03 16:11 linux-next 076d56d74f17 a67b2c42 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in ceph_monc_stop
2024/02/03 14:54 linux-next 076d56d74f17 60bf9982 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in ceph_monc_stop
2024/02/03 08:47 linux-next 076d56d74f17 60bf9982 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in ceph_monc_stop
2024/02/03 02:43 linux-next 076d56d74f17 60bf9982 .config console log report syz C [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in ceph_monc_stop
2024/02/02 17:19 linux-next 076d56d74f17 60bf9982 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in ceph_monc_stop
2024/02/02 15:14 linux-next 076d56d74f17 60bf9982 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in ceph_monc_stop
2024/02/02 14:16 linux-next 076d56d74f17 60bf9982 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in ceph_monc_stop
2024/02/02 14:16 linux-next 076d56d74f17 60bf9982 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in ceph_monc_stop
2024/02/02 13:49 linux-next 076d56d74f17 60bf9982 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in ceph_monc_stop
2024/02/02 13:11 linux-next 076d56d74f17 60bf9982 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in ceph_monc_stop
2024/02/02 12:27 linux-next 076d56d74f17 60bf9982 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in ceph_monc_stop
2024/02/02 12:01 linux-next 076d56d74f17 60bf9982 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in ceph_monc_stop
2024/02/02 11:32 linux-next 076d56d74f17 60bf9982 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in ceph_monc_stop
2024/02/02 10:49 linux-next 076d56d74f17 60bf9982 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in ceph_monc_stop
2024/02/02 10:46 linux-next 076d56d74f17 60bf9982 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in ceph_monc_stop
2024/02/02 08:51 linux-next 076d56d74f17 d61103fc .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in ceph_monc_stop
2024/02/02 08:40 linux-next 076d56d74f17 d61103fc .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in ceph_monc_stop
2024/02/02 08:25 linux-next 076d56d74f17 d61103fc .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in ceph_monc_stop
2024/02/02 07:54 linux-next 076d56d74f17 d61103fc .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in ceph_monc_stop
2024/02/02 07:41 linux-next 076d56d74f17 d61103fc .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in ceph_monc_stop
2024/02/02 07:02 linux-next 076d56d74f17 d61103fc .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in ceph_monc_stop
* Struck through repros no longer work on HEAD.