INFO: task syz.0.297:5358 blocked for more than 143 seconds.
      Not tainted 5.15.162-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.0.297 state:D stack: 0 pid: 5358 ppid: 3969 flags:0x00000001
Call trace:
 __switch_to+0x308/0x5e8 arch/arm64/kernel/process.c:518
 context_switch kernel/sched/core.c:5030 [inline]
 __schedule+0xf10/0x1e48 kernel/sched/core.c:6376
 schedule+0x11c/0x1c8 kernel/sched/core.c:6459
 schedule_preempt_disabled+0x18/0x2c kernel/sched/core.c:6518
 rwsem_down_read_slowpath+0x5b0/0x988 kernel/locking/rwsem.c:1055
 __down_read_common kernel/locking/rwsem.c:1239 [inline]
 __down_read kernel/locking/rwsem.c:1252 [inline]
 down_read+0x10c/0x398 kernel/locking/rwsem.c:1500
 inode_lock_shared include/linux/fs.h:799 [inline]
 lookup_slow+0x50/0x84 fs/namei.c:1679
 walk_component+0x394/0x4cc fs/namei.c:1976
 lookup_last fs/namei.c:2431 [inline]
 path_lookupat+0x13c/0x3d0 fs/namei.c:2455
 filename_lookup+0x1c4/0x4c8 fs/namei.c:2484
 user_path_at_empty+0x5c/0x1a4 fs/namei.c:2883
 user_path_at include/linux/namei.h:57 [inline]
 do_mount fs/namespace.c:3345 [inline]
 __do_sys_mount fs/namespace.c:3556 [inline]
 __se_sys_mount fs/namespace.c:3533 [inline]
 __arm64_sys_mount+0x4dc/0x5e0 fs/namespace.c:3533
 __invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
 invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:52
 el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
 do_el0_svc+0x58/0x14c arch/arm64/kernel/syscall.c:181
 el0_svc+0x7c/0x1f0 arch/arm64/kernel/entry-common.c:608
 el0t_64_sync_handler+0x84/0xe4 arch/arm64/kernel/entry-common.c:626
 el0t_64_sync+0x1a0/0x1a4 arch/arm64/kernel/entry.S:584
INFO: task syz.0.297:5362 blocked for more than 144 seconds.
      Not tainted 5.15.162-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.0.297 state:D stack: 0 pid: 5362 ppid: 3969 flags:0x00000001
Call trace:
 __switch_to+0x308/0x5e8 arch/arm64/kernel/process.c:518
 context_switch kernel/sched/core.c:5030 [inline]
 __schedule+0xf10/0x1e48 kernel/sched/core.c:6376
 schedule+0x11c/0x1c8 kernel/sched/core.c:6459
 schedule_preempt_disabled+0x18/0x2c kernel/sched/core.c:6518
 rwsem_down_read_slowpath+0x5b0/0x988 kernel/locking/rwsem.c:1055
 __down_read_common kernel/locking/rwsem.c:1239 [inline]
 __down_read kernel/locking/rwsem.c:1252 [inline]
 down_read+0x10c/0x398 kernel/locking/rwsem.c:1500
 inode_lock_shared include/linux/fs.h:799 [inline]
 lookup_slow+0x50/0x84 fs/namei.c:1679
 walk_component+0x394/0x4cc fs/namei.c:1976
 lookup_last fs/namei.c:2431 [inline]
 path_lookupat+0x13c/0x3d0 fs/namei.c:2455
 filename_lookup+0x1c4/0x4c8 fs/namei.c:2484
 user_path_at_empty+0x5c/0x1a4 fs/namei.c:2883
 user_path_at include/linux/namei.h:57 [inline]
 do_mount fs/namespace.c:3345 [inline]
 __do_sys_mount fs/namespace.c:3556 [inline]
 __se_sys_mount fs/namespace.c:3533 [inline]
 __arm64_sys_mount+0x4dc/0x5e0 fs/namespace.c:3533
 __invoke_syscall arch/arm64/kernel/syscall.c:38 [inline]
 invoke_syscall+0x98/0x2b8 arch/arm64/kernel/syscall.c:52
 el0_svc_common+0x138/0x258 arch/arm64/kernel/syscall.c:142
 do_el0_svc+0x58/0x14c arch/arm64/kernel/syscall.c:181
 el0_svc+0x7c/0x1f0 arch/arm64/kernel/entry-common.c:608
 el0t_64_sync_handler+0x84/0xe4 arch/arm64/kernel/entry-common.c:626
 el0t_64_sync+0x1a0/0x1a4 arch/arm64/kernel/entry.S:584

Showing all locks held in the system:
2 locks held by kworker/1:0/21:
 #0: ffff0001b481d958 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested kernel/sched/core.c:475 [inline]
 #0: ffff0001b481d958 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock kernel/sched/sched.h:1326 [inline]
 #0: ffff0001b481d958 (&rq->__lock){-.-.}-{2:2}, at: rq_lock kernel/sched/sched.h:1621 [inline]
 #0: ffff0001b481d958 (&rq->__lock){-.-.}-{2:2}, at: __schedule+0x328/0x1e48 kernel/sched/core.c:6290
 #1: ffff0001b480ac48 (&per_cpu_ptr(group->pcpu, cpu)->seq){-.-.}-{0:0}, at: psi_task_switch+0x438/0x66c kernel/sched/psi.c:891
1 lock held by khungtaskd/27:
 #0: ffff800014b214e0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0xc/0x44 include/linux/rcupdate.h:311
4 locks held by kworker/u4:4/301:
 #0: ffff0000c038c138 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x66c/0x11b8 kernel/workqueue.c:2283
 #1: ffff80001c937c00 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x6ac/0x11b8 kernel/workqueue.c:2285
 #2: ffff800016a3ba10 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0xf4/0x9bc net/core/net_namespace.c:561
 #3: ffff800016a471e8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock+0x20/0x2c net/core/rtnetlink.c:72
2 locks held by getty/3730:
 #0: ffff0000d3171098 (&tty->ldisc_sem){++++}-{0:0}, at: ldsem_down_read+0x40/0x50 drivers/tty/tty_ldsem.c:340
 #1: ffff80001a32e2e8 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x414/0x1204 drivers/tty/n_tty.c:2158
3 locks held by kworker/1:8/4054:
 #0: ffff0000c0020938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x66c/0x11b8 kernel/workqueue.c:2283
 #1: ffff80001d8a7c00 (deferred_process_work){+.+.}-{0:0}, at: process_one_work+0x6ac/0x11b8 kernel/workqueue.c:2285
 #2: ffff800016a471e8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock+0x20/0x2c net/core/rtnetlink.c:72
3 locks held by kworker/1:14/4337:
 #0: ffff0000c0020938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x66c/0x11b8 kernel/workqueue.c:2283
 #1: ffff80001d1b7c00 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work+0x6ac/0x11b8 kernel/workqueue.c:2285
 #2: ffff800016a471e8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock+0x20/0x2c net/core/rtnetlink.c:72
3 locks held by syz.0.297/5353:
1 lock held by syz.0.297/5358:
 #0: ffff0000e0d2c188 (&type->i_mutex_dir_key#19){++++}-{3:3}, at: inode_lock_shared include/linux/fs.h:799 [inline]
 #0: ffff0000e0d2c188 (&type->i_mutex_dir_key#19){++++}-{3:3}, at: lookup_slow+0x50/0x84 fs/namei.c:1679
1 lock held by syz.0.297/5362:
 #0: ffff0000e0d2c188 (&type->i_mutex_dir_key#19){++++}-{3:3}, at: inode_lock_shared include/linux/fs.h:799 [inline]
 #0: ffff0000e0d2c188 (&type->i_mutex_dir_key#19){++++}-{3:3}, at: lookup_slow+0x50/0x84 fs/namei.c:1679
2 locks held by syz.4.825/7842:
1 lock held by syz-executor/7957:
 #0: ffff800016a471e8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:72 [inline]
 #0: ffff800016a471e8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0xa2c/0xdac net/core/rtnetlink.c:5626
1 lock held by syz.0.858/8003:
 #0: ffff800016a471e8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:72 [inline]
 #0: ffff800016a471e8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0xa2c/0xdac net/core/rtnetlink.c:5626
1 lock held by syz.0.858/8055:
 #0: ffff800016a471e8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:72 [inline]
 #0: ffff800016a471e8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0xa2c/0xdac net/core/rtnetlink.c:5626
2 locks held by syz.1.861/8014:
1 lock held by syz.3.868/8035:
 #0: ffff800016a471e8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:72 [inline]
 #0: ffff800016a471e8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0xa2c/0xdac net/core/rtnetlink.c:5626
3 locks held by syz.3.868/8036:
 #0: ffff800016aa32b0 (cb_lock){++++}-{3:3}, at: genl_rcv+0x28/0x50 net/netlink/genetlink.c:802
 #1: ffff800016aa3168 (genl_mutex){+.+.}-{3:3}, at: genl_lock net/netlink/genetlink.c:33 [inline]
 #1: ffff800016aa3168 (genl_mutex){+.+.}-{3:3}, at: genl_rcv_msg+0x114/0x1018 net/netlink/genetlink.c:790
 #2: ffff800016a471e8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock+0x20/0x2c net/core/rtnetlink.c:72
1 lock held by syz.3.868/8037:
 #0: ffff800016a471e8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:72 [inline]
 #0: ffff800016a471e8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0xa2c/0xdac net/core/rtnetlink.c:5626
1 lock held by syz.2.874/8054:
 #0: ffff800016a471e8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock+0x20/0x2c net/core/rtnetlink.c:72
=============================================
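Both hung tasks are sleeping in down_read() on the same directory inode rwsem (&type->i_mutex_dir_key#19, ffff0000e0d2c188), taken via inode_lock_shared() in lookup_slow() during the mount(2) path lookup; they stay in state D until whichever task currently owns that i_rwsem releases it. The lock dump above does not name a write-side owner, so the following is only a rough userspace analogy with POSIX rwlocks, not the kernel code: the thread names, the 1-second head start, and the 5-second hold are made up for the demo, but the blocking pattern (readers queued behind a writer that is not letting go) is the same one the hung-task detector is flagging.

/*
 * Hypothetical userspace demo (analogy only, not kernel code): two "reader"
 * threads block in pthread_rwlock_rdlock() while a "writer" thread holds the
 * lock, mirroring how the two syz.0.297 tasks sleep in down_read() on the
 * directory i_rwsem taken by inode_lock_shared() in lookup_slow().
 * Build with: cc -pthread demo.c
 */
#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <unistd.h>

/* stand-in for the contended &type->i_mutex_dir_key#19 rwsem */
static pthread_rwlock_t dir_lock = PTHREAD_RWLOCK_INITIALIZER;

static void *writer(void *arg)
{
    (void)arg;
    pthread_rwlock_wrlock(&dir_lock);   /* analogous to taking the inode lock for write */
    fprintf(stderr, "writer: holding dir_lock for 5s\n");
    sleep(5);                           /* a stuck writer would simply never unlock */
    pthread_rwlock_unlock(&dir_lock);
    return NULL;
}

static void *reader(void *arg)
{
    long id = (long)(intptr_t)arg;
    fprintf(stderr, "reader %ld: blocking in rdlock (like down_read in lookup_slow)\n", id);
    pthread_rwlock_rdlock(&dir_lock);   /* analogous to inode_lock_shared() */
    fprintf(stderr, "reader %ld: acquired and released\n", id);
    pthread_rwlock_unlock(&dir_lock);
    return NULL;
}

int main(void)
{
    pthread_t w, r1, r2;

    pthread_create(&w, NULL, writer, NULL);
    sleep(1);                           /* let the writer win the lock first */
    pthread_create(&r1, NULL, reader, (void *)(intptr_t)1);
    pthread_create(&r2, NULL, reader, (void *)(intptr_t)2);

    pthread_join(w, NULL);
    pthread_join(r1, NULL);
    pthread_join(r2, NULL);
    return 0;
}

With the sleep(5) replaced by an unbounded wait, the two readers block forever, which is the shape of the hang reported above: readers of the directory lock cannot make progress while its writer never returns.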