INFO: task syz.2.1016:7048 blocked for more than 146 seconds.
      Not tainted 6.1.112-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.2.1016      state:D stack:26144 pid:7048  ppid:5313   flags:0x00004004
Call Trace:
 context_switch kernel/sched/core.c:5241 [inline]
 __schedule+0x143f/0x4570 kernel/sched/core.c:6558
 schedule+0xbf/0x180 kernel/sched/core.c:6634
 schedule_preempt_disabled+0xf/0x20 kernel/sched/core.c:6693
 rwsem_down_read_slowpath kernel/locking/rwsem.c:1094 [inline]
 __down_read_common kernel/locking/rwsem.c:1261 [inline]
 __down_read kernel/locking/rwsem.c:1274 [inline]
 down_read+0x6ff/0xa30 kernel/locking/rwsem.c:1522
 iterate_supers+0xac/0x1e0 fs/super.c:755
 quota_sync_all fs/quota/quota.c:69 [inline]
 __do_sys_quotactl fs/quota/quota.c:937 [inline]
 __se_sys_quotactl+0x347/0x770 fs/quota/quota.c:916
 do_syscall_x64 arch/x86/entry/common.c:51 [inline]
 do_syscall_64+0x3b/0xb0 arch/x86/entry/common.c:81
 entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f1084b7dff9
RSP: 002b:00007f1085917038 EFLAGS: 00000246 ORIG_RAX: 00000000000000b3
RAX: ffffffffffffffda RBX: 00007f1084d35f80 RCX: 00007f1084b7dff9
RDX: 0000000000000000 RSI: 0000000000000000 RDI: ffffffff80000102
RBP: 00007f1084bf0296 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007f1084d35f80 R15: 00007ffeeba9bb38

Showing all locks held in the system:
1 lock held by rcu_tasks_kthre/12:
 #0: ffffffff8d32b1d0 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x29/0xe30 kernel/rcu/tasks.h:517
1 lock held by rcu_tasks_trace/13:
 #0: ffffffff8d32b9d0 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x29/0xe30 kernel/rcu/tasks.h:517
1 lock held by khungtaskd/28:
 #0: ffffffff8d32b000 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:350 [inline]
 #0: ffffffff8d32b000 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:791 [inline]
 #0: ffffffff8d32b000 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x51/0x290 kernel/locking/lockdep.c:6494
2 locks held by getty/3396:
 #0: ffff88814ae2e098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x21/0x70 drivers/tty/tty_ldisc.c:244
 #1: ffffc900031262f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6a7/0x1db0 drivers/tty/n_tty.c:2198
2 locks held by kworker/0:3/3629:
 #0: ffff888017c72138 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work+0x7a9/0x11d0 kernel/workqueue.c:2267
 #1: ffffc90003a8fd20 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_one_work+0x7a9/0x11d0 kernel/workqueue.c:2267
5 locks held by kworker/u4:7/3725:
 #0: ffff888017e1e938 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x7a9/0x11d0 kernel/workqueue.c:2267
 #1: ffffc900044afd20 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x7a9/0x11d0 kernel/workqueue.c:2267
 #2: ffffffff8e4ee490 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0xf1/0xb60 net/core/net_namespace.c:566
 #3: ffffffff8e4fa7e8 (rtnl_mutex){+.+.}-{3:3}, at: ip_tunnel_delete_nets+0xc9/0x330 net/ipv4/ip_tunnel.c:1148
 #4: ffffffff8d3305f8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:291 [inline]
 #4: ffffffff8d3305f8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x4f0/0x930 kernel/rcu/tree_exp.h:962
2 locks held by kworker/u4:9/3837:
1 lock held by syz.3.500/5396:
 #0: ffff88805a84c0e0 (&type->s_umount_key#51/1){+.+.}-{3:3}, at: alloc_super+0x217/0x930 fs/super.c:228
1 lock held by syz.2.1016/7048:
 #0: ffff88805a84c0e0 (&type->s_umount_key#59){++++}-{3:3}, at: iterate_supers+0xac/0x1e0 fs/super.c:755
1 lock held by syz.1.1927/9731:
 #0: ffffffff8d3304c0 (rcu_state.barrier_mutex){+.+.}-{3:3}, at: rcu_barrier+0x48/0x5f0 kernel/rcu/tree.c:4019
2 locks held by syz.0.1926/9735:
 #0: ffffffff8e4e0cc8 (br_ioctl_mutex){+.+.}-{3:3}, at: br_ioctl_call net/socket.c:1176 [inline]
 #0: ffffffff8e4e0cc8 (br_ioctl_mutex){+.+.}-{3:3}, at: sock_ioctl+0x26f/0x770 net/socket.c:1275
 #1: ffffffff8e4fa7e8 (rtnl_mutex){+.+.}-{3:3}, at: br_ioctl_stub+0x9f/0xaa0 net/bridge/br_ioctl.c:402
1 lock held by syz.0.1926/9739:
 #0: ffffffff8e4fa7e8 (rtnl_mutex){+.+.}-{3:3}, at: packet_mc_add+0x28/0x930 net/packet/af_packet.c:3744
1 lock held by syz.0.1926/9740:
 #0: ffffffff8e4e0cc8 (br_ioctl_mutex){+.+.}-{3:3}, at: br_ioctl_call net/socket.c:1176 [inline]
 #0: ffffffff8e4e0cc8 (br_ioctl_mutex){+.+.}-{3:3}, at: sock_ioctl+0x26f/0x770 net/socket.c:1275

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 28 Comm: khungtaskd Not tainted 6.1.112-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Call Trace:
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
 nmi_cpu_backtrace+0x4e1/0x560 lib/nmi_backtrace.c:111
 nmi_trigger_cpumask_backtrace+0x1ae/0x3f0 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:220 [inline]
 watchdog+0xf88/0xfd0 kernel/hung_task.c:377
 kthread+0x28d/0x320 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1 skipped: idling at native_safe_halt arch/x86/include/asm/irqflags.h:51 [inline]
NMI backtrace for cpu 1 skipped: idling at arch_safe_halt arch/x86/include/asm/irqflags.h:89 [inline]
NMI backtrace for cpu 1 skipped: idling at acpi_safe_halt drivers/acpi/processor_idle.c:111 [inline]
NMI backtrace for cpu 1 skipped: idling at acpi_idle_do_entry+0x10f/0x340 drivers/acpi/processor_idle.c:567
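
For reference, the blocked task's register dump decodes to a quotactl(2) sync call: ORIG_RAX 0xb3 is the quotactl syscall, RDI 0x80000102 is QCMD(Q_SYNC, type 2), and RSI 0 means a NULL device, which sends the kernel through quota_sync_all() and iterate_supers(), taking each superblock's s_umount for read. The sketch below is a minimal illustration of a call with that shape, not the syzkaller program that produced this report; the PRJQUOTA fallback define is an assumption for headers that lack it.

/*
 * Minimal sketch (not the actual reproducer): issue a quotactl(Q_SYNC)
 * call of the same shape as the one blocked in the trace above.
 */
#include <stdio.h>
#include <sys/quota.h>

#ifndef PRJQUOTA
#define PRJQUOTA 2	/* quota type 2, as decoded from RDI; may be absent from older headers */
#endif

int main(void)
{
	/* NULL device + Q_SYNC: the kernel syncs quotas on every mounted
	 * superblock via quota_sync_all() -> iterate_supers(). */
	if (quotactl(QCMD(Q_SYNC, PRJQUOTA), NULL, 0, NULL) != 0)
		perror("quotactl(Q_SYNC)");
	return 0;
}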