ceph: No mds server is up or the cluster is laggy
INFO: task syz-executor.1:11679 blocked for more than 140 seconds.
      Not tainted 4.19.211-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
syz-executor.1  D28416 11679   8139 0x00000004
Call Trace:
 context_switch kernel/sched/core.c:2828 [inline]
 __schedule+0x887/0x2040 kernel/sched/core.c:3517
 schedule+0x8d/0x1b0 kernel/sched/core.c:3561
 schedule_timeout+0x92d/0xfe0 kernel/time/timer.c:1794
 do_wait_for_common kernel/sched/completion.c:83 [inline]
 __wait_for_common kernel/sched/completion.c:104 [inline]
 wait_for_common+0x29c/0x470 kernel/sched/completion.c:115
 __flush_work+0x4bb/0x8b0 kernel/workqueue.c:2926
 __cancel_work_timer+0x412/0x590 kernel/workqueue.c:3013
 p9_conn_destroy net/9p/trans_fd.c:904 [inline]
 p9_fd_close+0x305/0x520 net/9p/trans_fd.c:934
 p9_client_create+0x901/0x12e0 net/9p/client.c:1084
 v9fs_session_init+0x1dd/0x1770 fs/9p/v9fs.c:421
 v9fs_mount+0x73/0x910 fs/9p/vfs_super.c:135
 mount_fs+0xa3/0x310 fs/super.c:1261
ceph: No mds server is up or the cluster is laggy
 vfs_kern_mount.part.0+0x68/0x470 fs/namespace.c:961
 vfs_kern_mount fs/namespace.c:951 [inline]
 do_new_mount fs/namespace.c:2492 [inline]
 do_mount+0x115c/0x2f50 fs/namespace.c:2822
 ksys_mount+0xcf/0x130 fs/namespace.c:3038
 __do_sys_mount fs/namespace.c:3052 [inline]
 __se_sys_mount fs/namespace.c:3049 [inline]
 __x64_sys_mount+0xba/0x150 fs/namespace.c:3049
 do_syscall_64+0xf9/0x620 arch/x86/entry/common.c:293
 entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x7efd547090d9
Code: Bad RIP value.
RSP: 002b:00007efd52c18168 EFLAGS: 00000246 ORIG_RAX: 00000000000000a5
RAX: ffffffffffffffda RBX: 00007efd548291f0 RCX: 00007efd547090d9
RDX: 0000000020000100 RSI: 0000000020000080 RDI: 0000000000000000
RBP: 00007efd54764ae9 R08: 0000000020000180 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007ffc4419c0af R14: 00007efd52c18300 R15: 0000000000022000

Showing all locks held in the system:
3 locks held by kworker/u4:1/23:
 #0: 000000005f63cd2e ((wq_completion)"%s""netns"){+.+.}, at: process_one_work+0x767/0x1570 kernel/workqueue.c:2124
 #1: 00000000ed28d77a (net_cleanup_work){+.+.}, at: process_one_work+0x79c/0x1570 kernel/workqueue.c:2128
 #2: 0000000034d101e0 (pernet_ops_rwsem){++++}, at: cleanup_net+0xa8/0x8b0 net/core/net_namespace.c:521
2 locks held by kworker/1:1/33:
 #0: 000000003041a18c ((wq_completion)"events"){+.+.}, at: process_one_work+0x767/0x1570 kernel/workqueue.c:2124
 #1: 00000000571196c5 ((work_completion)(&m->wq)){+.+.}, at: process_one_work+0x79c/0x1570 kernel/workqueue.c:2128
1 lock held by khungtaskd/1570:
 #0: 000000004944ccad (rcu_read_lock){....}, at: debug_show_all_locks+0x53/0x265 kernel/locking/lockdep.c:4441
3 locks held by kworker/u4:4/3568:
 #0: 00000000eedefa3f (&rq->lock){-.-.}, at: idle_balance kernel/sched/fair.c:9702 [inline]
 #0: 00000000eedefa3f (&rq->lock){-.-.}, at: pick_next_task_fair+0x556/0x1570 kernel/sched/fair.c:6841
 #1: 000000004944ccad (rcu_read_lock){....}, at: cpu_of kernel/sched/sched.h:923 [inline]
 #1: 000000004944ccad (rcu_read_lock){....}, at: __update_idle_core+0x39/0x3e0 kernel/sched/fair.c:6057
 #2: 000000005b9f6118 (&base->lock){-.-.}, at: lock_timer_base+0x55/0x1b0 kernel/time/timer.c:950
1 lock held by in:imklog/7811:
 #0: 000000002ba29f3b (&f->f_pos_lock){+.+.}, at: __fdget_pos+0x26f/0x310 fs/file.c:767
2 locks held by kworker/1:4/9332:
 #0: 0000000077f8a78e ((wq_completion)"rcu_gp"){+.+.}, at: process_one_work+0x767/0x1570 kernel/workqueue.c:2124
 #1: 000000001a7bd093 ((work_completion)(&rew.rew_work)){+.+.}, at: process_one_work+0x79c/0x1570 kernel/workqueue.c:2128
1 lock held by syz-executor.1/17564:
 #0: 00000000197e61f3 (rcu_preempt_state.exp_mutex){+.+.}, at: exp_funnel_lock kernel/rcu/tree_exp.h:297 [inline]
 #0: 00000000197e61f3 (rcu_preempt_state.exp_mutex){+.+.}, at: _synchronize_rcu_expedited+0x4dc/0x6f0 kernel/rcu/tree_exp.h:667

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 1570 Comm: khungtaskd Not tainted 4.19.211-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/26/2022
Call Trace:
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0x1fc/0x2ef lib/dump_stack.c:118
 nmi_cpu_backtrace.cold+0x63/0xa2 lib/nmi_backtrace.c:101
 nmi_trigger_cpumask_backtrace+0x1a6/0x1f0 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:146 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:203 [inline]
 watchdog+0x991/0xe60 kernel/hung_task.c:287
 kthread+0x33f/0x460 kernel/kthread.c:259
 ret_from_fork+0x24/0x30 arch/x86/entry/entry_64.S:415
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 3568 Comm: kworker/u4:4 Not tainted 4.19.211-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/26/2022
Workqueue: bat_events batadv_nc_worker
RIP: 0010:__read_once_size include/linux/compiler.h:263 [inline]
RIP: 0010:batadv_nc_purge_orig_hash net/batman-adv/network-coding.c:420 [inline]
RIP: 0010:batadv_nc_worker+0x160/0xd50 net/batman-adv/network-coding.c:730
Code: 89 c6 e8 73 9b 89 f9 85 ed 58 74 1e e8 f9 99 89 f9 0f b6 2d a9 be 35 03 31 ff 89 ee e8 19 9b 89 f9 40 84 ed 0f 84 85 07 00 00 db 99 89 f9 48 89 d8 48 c1 e8 03 42 80 3c 28 00 0f 85 9e 0b 00
RSP: 0018:ffff8880a8f37cb0 EFLAGS: 00000202
RAX: 0000000000000000 RBX: ffff8880a8e6e2a8 RCX: ffffffff87d8f32a
RDX: 0000000000000001 RSI: ffff8880a881e140 RDI: 0000000000000001
RBP: 0000000000000001 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000001 R11: 0000000000000000 R12: 0000000000000000
R13: dffffc0000000000 R14: 0000000000000175 R15: ffff8880a4655400
FS:  0000000000000000(0000) GS:ffff8880ba100000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000555556f7e708 CR3: 000000009c33a000 CR4: 00000000003406e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 process_one_work+0x864/0x1570 kernel/workqueue.c:2153
 worker_thread+0x64c/0x1130 kernel/workqueue.c:2296
 kthread+0x33f/0x460 kernel/kthread.c:259
 ret_from_fork+0x24/0x30 arch/x86/entry/entry_64.S:415
ceph: No mds server is up or the cluster is laggy