INFO: task hung in ceph_mdsc_pre_umount

Status: auto-obsoleted due to no activity on 2024/04/28 22:11
Subsystems: ceph fs
Reported-by: syzbot+4bbc13a207327f82b3b0@syzkaller.appspotmail.com
First crash: 324d, last: 323d
Cause bisection: introduced by (bisect log) [merge commit]:
commit 61ff834658e52fe4d994fe018eb79d57efb140b1
Author: Stephen Rothwell <sfr@canb.auug.org.au>
Date: Fri Feb 2 01:43:13 2024 +0000

  Merge branch 'for-next' of git://git.kernel.org/pub/scm/linux/kernel/git/pdx86/platform-drivers-x86.git

Crash: INFO: task hung in bond_destructor (log)
Repro: C syz .config
  
Discussions (1)
Title | Replies (including bot) | Last reply
[syzbot] [ceph?] [fs?] INFO: task hung in ceph_mdsc_pre_umount | 2 (5) | 2024/02/07 11:55
Last patch testing requests (8)
Created | Duration | User | Patch | Repo | Result
2024/04/28 20:50 | 24m | retest repro | | linux-next | OK log
2024/04/28 20:50 | 32m | retest repro | | linux-next | OK log
2024/04/28 20:50 | 24m | retest repro | | linux-next | OK log
2024/02/18 05:02 | 25m | retest repro | | linux-next | error
2024/02/18 05:02 | 34m | retest repro | | linux-next | error
2024/02/18 05:02 | 22m | retest repro | | linux-next | error
2024/02/07 10:58 | 0m | hdanton@sina.com | patch | https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git 076d56d74f17 | error
2024/02/06 04:53 | 2h03m | hdanton@sina.com | patch | linux-next | error

Sample crash report:
INFO: task syz-executor268:5081 blocked for more than 143 seconds.
      Not tainted 6.8.0-rc2-next-20240202-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor268 state:D stack:26296 pid:5081  tgid:5081  ppid:5070   flags:0x00004006
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5400 [inline]
 __schedule+0x17df/0x4a40 kernel/sched/core.c:6727
 __schedule_loop kernel/sched/core.c:6804 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6819
 schedule_timeout+0xb0/0x310 kernel/time/timer.c:2159
 do_wait_for_common kernel/sched/completion.c:95 [inline]
 __wait_for_common kernel/sched/completion.c:116 [inline]
 wait_for_common kernel/sched/completion.c:127 [inline]
 wait_for_completion+0x355/0x620 kernel/sched/completion.c:148
 __flush_workqueue+0x730/0x1630 kernel/workqueue.c:3617
 ceph_mdsc_pre_umount+0x5b5/0x8b0 fs/ceph/mds_client.c:5475
 ceph_kill_sb+0x9f/0x4b0 fs/ceph/super.c:1535
 deactivate_locked_super+0xc4/0x130 fs/super.c:477
 ceph_get_tree+0x9a9/0x17b0 fs/ceph/super.c:1361
 vfs_get_tree+0x90/0x2a0 fs/super.c:1784
 vfs_cmd_create+0xe4/0x230 fs/fsopen.c:230
 __do_sys_fsconfig fs/fsopen.c:476 [inline]
 __se_sys_fsconfig+0x967/0xec0 fs/fsopen.c:349
 do_syscall_64+0xfb/0x240
 entry_SYSCALL_64_after_hwframe+0x6d/0x75
RIP: 0033:0x7f1723f3ba39
RSP: 002b:00007ffde9dc2ba8 EFLAGS: 00000246 ORIG_RAX: 00000000000001af
RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007f1723f3ba39
RDX: 0000000000000000 RSI: 0000000000000006 RDI: 0000000000000003
RBP: 00000000000143e0 R08: 0000000000000000 R09: 0000000000000006
R10: 0000000000000000 R11: 0000000000000246 R12: 00007ffde9dc2bbc
R13: 431bde82d7b634db R14: 0000000000000001 R15: 0000000000000001
 </TASK>
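
The trace above shows the reproducer driving the new mount API: ORIG_RAX 0x1af is fsconfig (syscall 431) and RSI=6 is FSCONFIG_CMD_CREATE, and the stack runs __do_sys_fsconfig -> vfs_get_tree -> ceph_get_tree, whose cleanup path (deactivate_locked_super -> ceph_kill_sb -> ceph_mdsc_pre_umount) then blocks in __flush_workqueue() waiting for a completion that never arrives. Below is a minimal userspace sketch of that fsopen()/fsconfig() sequence; the ceph source address and option values are placeholders, not the parameters from the syzkaller reproducer.

/* Minimal sketch of the mount path seen in the trace: fsopen("ceph")
 * followed by fsconfig(FSCONFIG_CMD_CREATE), which reaches
 * vfs_get_tree() -> ceph_get_tree(). The server address below is a
 * placeholder, not the value used by the syzkaller reproducer. */
#define _GNU_SOURCE
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

#ifndef __NR_fsopen
#define __NR_fsopen   430
#endif
#ifndef __NR_fsconfig
#define __NR_fsconfig 431   /* ORIG_RAX 0x1af in the register dump above */
#endif

/* fsconfig() command numbers from <linux/mount.h> */
#define FSCONFIG_SET_STRING 1
#define FSCONFIG_CMD_CREATE 6

int main(void)
{
	/* Get a filesystem context for ceph. */
	int fsfd = syscall(__NR_fsopen, "ceph", 0);
	if (fsfd < 0) {
		perror("fsopen");
		return 1;
	}

	/* Placeholder parameter; a real mount needs a reachable monitor/MDS. */
	syscall(__NR_fsconfig, fsfd, FSCONFIG_SET_STRING, "source",
		"192.0.2.1:6789:/", 0);

	/* This is the call that hangs in the report: FSCONFIG_CMD_CREATE ->
	 * vfs_cmd_create() -> vfs_get_tree() -> ceph_get_tree(); per the
	 * stack, its cleanup then runs deactivate_locked_super() ->
	 * ceph_kill_sb() -> ceph_mdsc_pre_umount(). */
	if (syscall(__NR_fsconfig, fsfd, FSCONFIG_CMD_CREATE, NULL, NULL, 0))
		perror("fsconfig(FSCONFIG_CMD_CREATE)");

	close(fsfd);
	return 0;
}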

Showing all locks held in the system:
1 lock held by khungtaskd/29:
 #0: ffffffff8e130d60 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:298 [inline]
 #0: ffffffff8e130d60 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:750 [inline]
 #0: ffffffff8e130d60 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x55/0x2a0 kernel/locking/lockdep.c:6614
2 locks held by getty/4823:
 #0: ffff88802b08d0a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc900031432f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6b5/0x1e10 drivers/tty/n_tty.c:2201
2 locks held by syz-executor268/5081:
 #0: ffff888022c74c70 (&fc->uapi_mutex){+.+.}-{3:3}, at: __do_sys_fsconfig fs/fsopen.c:474 [inline]
 #0: ffff888022c74c70 (&fc->uapi_mutex){+.+.}-{3:3}, at: __se_sys_fsconfig+0x8e6/0xec0 fs/fsopen.c:349
 #1: ffff888022e380e0 (&type->s_umount_key#41/1){+.+.}-{3:3}, at: alloc_super+0x20e/0x8f0 fs/super.c:345

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 29 Comm: khungtaskd Not tainted 6.8.0-rc2-next-20240202-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/25/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x1e7/0x2e0 lib/dump_stack.c:106
 nmi_cpu_backtrace+0x49c/0x4d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x198/0x320 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:222 [inline]
 watchdog+0xfb0/0xff0 kernel/hung_task.c:379
 kthread+0x2f0/0x390 kernel/kthread.c:388
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:242
 </TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 11 Comm: kworker/u4:1 Not tainted 6.8.0-rc2-next-20240202-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/25/2024
Workqueue: events_unbound toggle_allocation_gate
RIP: 0010:__text_poke+0x34b/0xd30
Code: e8 3a 08 c2 00 4c 8b b4 24 40 01 00 00 fa bb 00 02 00 00 be 00 02 00 00 4c 21 f6 31 ff e8 6d 18 5f 00 4c 21 f3 48 89 5c 24 50 <4c> 89 7c 24 38 75 07 e8 79 13 5f 00 eb 0a e8 72 13 5f 00 e8 fd 42
RSP: 0018:ffffc90000107780 EFLAGS: 00000006
RAX: 0000000000000000 RBX: 0000000000000200 RCX: ffff888016eabc00
RDX: 0000000000000000 RSI: 0000000000000200 RDI: 0000000000000000
RBP: ffffc90000107950 R08: ffffffff8134bcf3 R09: fffff52000020ec0
R10: dffffc0000000000 R11: fffff52000020ec0 R12: ffffea000007ad00
R13: fffffffffffffeff R14: 0000000000000246 R15: ffffffff81eb4896
FS:  0000000000000000(0000) GS:ffff8880b9500000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000055a9d88ed600 CR3: 000000000df32000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <NMI>
 </NMI>
 <TASK>
 text_poke arch/x86/kernel/alternative.c:1985 [inline]
 text_poke_bp_batch+0x59c/0xb30 arch/x86/kernel/alternative.c:2318
 text_poke_flush arch/x86/kernel/alternative.c:2487 [inline]
 text_poke_finish+0x30/0x50 arch/x86/kernel/alternative.c:2494
 arch_jump_label_transform_apply+0x1c/0x30 arch/x86/kernel/jump_label.c:146
 static_key_disable_cpuslocked+0xce/0x1c0 kernel/jump_label.c:235
 static_key_disable+0x1a/0x20 kernel/jump_label.c:243
 toggle_allocation_gate+0x1b8/0x250 mm/kfence/core.c:831
 process_one_work kernel/workqueue.c:3049 [inline]
 process_scheduled_works+0x913/0x14f0 kernel/workqueue.c:3125
 worker_thread+0xa60/0x1000 kernel/workqueue.c:3206
 kthread+0x2f0/0x390 kernel/kthread.c:388
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:242
 </TASK>
INFO: NMI handler (nmi_cpu_backtrace_handler) took too long to run: 1.413 msecs
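
The hang itself has the usual flush-vs-stuck-work shape: wait_for_completion() inside __flush_workqueue() returns only after every work item queued on the workqueue has finished, so a single item that never completes keeps ceph_kill_sb() blocked until the hung-task watchdog fires (here after 143 seconds). The report does not identify which work item is stuck; the toy pthread sketch below uses assumed names and is not kernel code, it only illustrates why such a flush can block indefinitely, with a bounded wait so the demo terminates.

/* Toy illustration (not kernel code): a "flush" that waits for all queued
 * work mirrors wait_for_completion() in __flush_workqueue(). If one work
 * item never finishes, the flusher waits forever; here the wait is bounded
 * so the demo ends with a hung-task-style message instead of hanging. */
#include <pthread.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t done_cv = PTHREAD_COND_INITIALIZER;
static int pending_work = 1;            /* one queued work item */

static void *worker(void *arg)
{
	(void)arg;
	/* Stand-in for a work item blocked on I/O that never completes,
	 * e.g. waiting for a reply that will never come. */
	pause();                        /* never returns */
	pthread_mutex_lock(&lock);
	pending_work = 0;
	pthread_cond_signal(&done_cv);
	pthread_mutex_unlock(&lock);
	return NULL;
}

int main(void)
{
	pthread_t t;
	pthread_create(&t, NULL, worker, NULL);

	/* "Flush": wait until all queued work has run, with a timeout so the
	 * demo terminates; the kernel's wait_for_completion() has no bound. */
	struct timespec deadline;
	clock_gettime(CLOCK_REALTIME, &deadline);
	deadline.tv_sec += 5;

	pthread_mutex_lock(&lock);
	while (pending_work) {
		if (pthread_cond_timedwait(&done_cv, &lock, &deadline)) {
			printf("flush still blocked after 5s: this is the shape of the hang\n");
			break;
		}
	}
	pthread_mutex_unlock(&lock);
	return 0;
}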

Crashes (5):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2024/02/03 22:18 | linux-next | 076d56d74f17 | a67b2c42 | .config | strace log | report | syz | C | | [disk image] [vmlinux] [kernel image] | ci-upstream-linux-next-kasan-gce-root | INFO: task hung in ceph_mdsc_pre_umount
2024/02/03 08:03 | linux-next | 076d56d74f17 | 60bf9982 | .config | strace log | report | syz | C | | [disk image] [vmlinux] [kernel image] | ci-upstream-linux-next-kasan-gce-root | INFO: task hung in ceph_mdsc_pre_umount
2024/02/02 23:40 | linux-next | 076d56d74f17 | 60bf9982 | .config | strace log | report | syz | C | | [disk image] [vmlinux] [kernel image] | ci-upstream-linux-next-kasan-gce-root | INFO: task hung in ceph_mdsc_pre_umount
2024/02/02 09:16 | linux-next | 076d56d74f17 | d61103fc | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-linux-next-kasan-gce-root | INFO: task hung in ceph_mdsc_pre_umount
2024/02/02 07:27 | linux-next | 076d56d74f17 | d61103fc | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-upstream-linux-next-kasan-gce-root | INFO: task hung in ceph_mdsc_pre_umount
* Struck through repros no longer work on HEAD.