INFO: task syz-executor.1:15948 blocked for more than 143 seconds.
      Not tainted 6.9.0-rc5-syzkaller-00159-gc942a0cd3603 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.1 state:D stack:24664 pid:15948 tgid:15948 ppid:14994 flags:0x00000006
Call Trace:
 context_switch kernel/sched/core.c:5409 [inline]
 __schedule+0x1796/0x4a00 kernel/sched/core.c:6746
 __schedule_loop kernel/sched/core.c:6823 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6838
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6895
 rwsem_down_read_slowpath kernel/locking/rwsem.c:1086 [inline]
 __down_read_common kernel/locking/rwsem.c:1250 [inline]
 __down_read kernel/locking/rwsem.c:1263 [inline]
 down_read+0x705/0xa40 kernel/locking/rwsem.c:1528
 filemap_invalidate_lock_shared include/linux/fs.h:850 [inline]
 page_cache_ra_unbounded+0xfb/0x7a0 mm/readahead.c:225
 do_sync_mmap_readahead+0x444/0x850
 filemap_fault+0x7e5/0x16a0 mm/filemap.c:3289
 __do_fault+0x135/0x460 mm/memory.c:4531
 do_shared_fault mm/memory.c:4954 [inline]
 do_fault mm/memory.c:5028 [inline]
 do_pte_missing mm/memory.c:3880 [inline]
 handle_pte_fault mm/memory.c:5300 [inline]
 __handle_mm_fault+0x2361/0x7240 mm/memory.c:5441
 handle_mm_fault+0x27f/0x770 mm/memory.c:5606
 do_user_addr_fault arch/x86/mm/fault.c:1362 [inline]
 handle_page_fault arch/x86/mm/fault.c:1505 [inline]
 exc_page_fault+0x446/0x8e0 arch/x86/mm/fault.c:1563
 asm_exc_page_fault+0x26/0x30 arch/x86/include/asm/idtentry.h:623
RIP: 0033:0x7f0e7f45ef2b
RSP: 002b:00007ffe433d37c8 EFLAGS: 00010202
RAX: 0000000020000040 RBX: 0000000000000004 RCX: 0000000000737562
RDX: 0000000000000006 RSI: 0000000075622f2e RDI: 0000000020000040
RBP: 00007f0e7f5ad980 R08: 00007f0e7f400000 R09: 0000000000000001
R10: 0000000000000001 R11: 0000000000000009 R12: 00000000000bbb8d
R13: 00000000000bbb5b R14: 00007ffe433d3970 R15: 00007f0e7f434cb0
INFO: task syz-executor.1:15949 blocked for more than 144 seconds.
      Not tainted 6.9.0-rc5-syzkaller-00159-gc942a0cd3603 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.1 state:D stack:24568 pid:15949 tgid:15948 ppid:14994 flags:0x00004006
Call Trace:
 context_switch kernel/sched/core.c:5409 [inline]
 __schedule+0x1796/0x4a00 kernel/sched/core.c:6746
 __schedule_loop kernel/sched/core.c:6823 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6838
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6895
 rwsem_down_read_slowpath kernel/locking/rwsem.c:1086 [inline]
 __down_read_common kernel/locking/rwsem.c:1250 [inline]
 __down_read kernel/locking/rwsem.c:1263 [inline]
 down_read+0x705/0xa40 kernel/locking/rwsem.c:1528
 filemap_invalidate_lock_shared include/linux/fs.h:850 [inline]
 page_cache_ra_unbounded+0xfb/0x7a0 mm/readahead.c:225
 do_sync_mmap_readahead+0x444/0x850
 filemap_fault+0x7e5/0x16a0 mm/filemap.c:3289
 __do_fault+0x135/0x460 mm/memory.c:4531
 do_read_fault mm/memory.c:4894 [inline]
 do_fault mm/memory.c:5024 [inline]
 do_pte_missing mm/memory.c:3880 [inline]
 handle_pte_fault mm/memory.c:5300 [inline]
 __handle_mm_fault+0x45f7/0x7240 mm/memory.c:5441
 handle_mm_fault+0x27f/0x770 mm/memory.c:5606
 faultin_page mm/gup.c:958 [inline]
 __get_user_pages+0x727/0x1630 mm/gup.c:1257
 populate_vma_page_range+0x2ae/0x390 mm/gup.c:1697
 __mm_populate+0x27a/0x460 mm/gup.c:1800
 mm_populate include/linux/mm.h:3411 [inline]
 vm_mmap_pgoff+0x305/0x420 mm/util.c:578
 ksys_mmap_pgoff+0x504/0x6e0 mm/mmap.c:1431
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf5/0x240 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f0e7f47dea9
RSP: 002b:00007f0e7efff0c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000009
RAX: ffffffffffffffda RBX: 00007f0e7f5abf80 RCX: 00007f0e7f47dea9
RDX: 00000000027fffff RSI: 0000000000600000 RDI: 0000000020000000
RBP: 00007f0e7f4ca4a4 R08: 0000000000000006 R09: 0000000000000000
R10: 0000000004002011 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000000b R14: 00007f0e7f5abf80 R15: 00007ffe433d36e8

Showing all locks held in the system:
3 locks held by kworker/u8:0/10:
5 locks held by kworker/u8:1/11:
3 locks held by kworker/1:0/24:
 #0: ffff888015080948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3229 [inline]
 #0: ffff888015080948 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x8e0/0x17c0 kernel/workqueue.c:3335
 #1: ffffc900001e7d00 ((work_completion)(&data->fib_event_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3230 [inline]
 #1: ffffc900001e7d00 ((work_completion)(&data->fib_event_work)){+.+.}-{0:0}, at: process_scheduled_works+0x91b/0x17c0 kernel/workqueue.c:3335
 #2: ffff8880204b9240 (&data->fib_lock){+.+.}-{3:3}, at: nsim_fib_event_work+0x2d1/0x4130 drivers/net/netdevsim/fib.c:1489
1 lock held by khungtaskd/29:
 #0: ffffffff8e334d20 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:329 [inline]
 #0: ffffffff8e334d20 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:781 [inline]
 #0: ffffffff8e334d20 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x55/0x2a0 kernel/locking/lockdep.c:6614
3 locks held by kworker/1:1/44:
 #0: ffff888015080948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3229 [inline]
 #0: ffff888015080948 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x8e0/0x17c0 kernel/workqueue.c:3335
 #1: ffffc90000b47d00 ((work_completion)(&data->fib_event_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3230 [inline]
 #1: ffffc90000b47d00 ((work_completion)(&data->fib_event_work)){+.+.}-{0:0}, at: process_scheduled_works+0x91b/0x17c0 kernel/workqueue.c:3335
 #2: ffff88807cf36240 (&data->fib_lock){+.+.}-{3:3}, at: nsim_fib_event_work+0x2d1/0x4130 drivers/net/netdevsim/fib.c:1489
2 locks held by dhcpcd/4738:
 #0: ffff888022dba678 (nlk_cb_mutex-ROUTE){+.+.}-{3:3}, at: netlink_dump+0xcb/0xe50 net/netlink/af_netlink.c:2209
 #1: ffffffff8f594a48 (rtnl_mutex){+.+.}-{3:3}, at: netlink_dump+0x5d3/0xe50 net/netlink/af_netlink.c:2268
2 locks held by getty/4827:
 #0: ffff88802f1010a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc900031332f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6b5/0x1e10 drivers/tty/n_tty.c:2201
3 locks held by kworker/u8:9/10888:
3 locks held by kworker/1:2/11028:
 #0: ffff888015080948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3229 [inline]
 #0: ffff888015080948 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x8e0/0x17c0 kernel/workqueue.c:3335
 #1: ffffc900041afd00 (fqdir_free_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3230 [inline]
 #1: ffffc900041afd00 (fqdir_free_work){+.+.}-{0:0}, at: process_scheduled_works+0x91b/0x17c0 kernel/workqueue.c:3335
 #2: ffffffff8e339f80 (rcu_state.barrier_mutex){+.+.}-{3:3}, at: rcu_barrier+0x4c/0x550 kernel/rcu/tree.c:4073
1 lock held by syz-executor.1/13537:
1 lock held by syz-executor.1/15948:
 #0: ffff88801d4a14c8 (mapping.invalidate_lock#2){++++}-{3:3}, at: filemap_invalidate_lock_shared include/linux/fs.h:850 [inline]
 #0: ffff88801d4a14c8 (mapping.invalidate_lock#2){++++}-{3:3}, at: page_cache_ra_unbounded+0xfb/0x7a0 mm/readahead.c:225
1 lock held by syz-executor.1/15949:
 #0: ffff88801d4a14c8 (mapping.invalidate_lock#2){++++}-{3:3}, at: filemap_invalidate_lock_shared include/linux/fs.h:850 [inline]
 #0: ffff88801d4a14c8 (mapping.invalidate_lock#2){++++}-{3:3}, at: page_cache_ra_unbounded+0xfb/0x7a0 mm/readahead.c:225
1 lock held by syz-executor.0/16332:
 #0: ffff88801d4a14c8 (mapping.invalidate_lock#2){++++}-{3:3}, at: filemap_invalidate_lock include/linux/fs.h:840 [inline]
 #0: ffff88801d4a14c8 (mapping.invalidate_lock#2){++++}-{3:3}, at: blkdev_fallocate+0x233/0x550 block/fops.c:797
1 lock held by syz-executor.3/16696:
 #0: ffff88801d4a14c8 (mapping.invalidate_lock#2){++++}-{3:3}, at: filemap_invalidate_lock include/linux/fs.h:840 [inline]
 #0: ffff88801d4a14c8 (mapping.invalidate_lock#2){++++}-{3:3}, at: blkdev_fallocate+0x233/0x550 block/fops.c:797
1 lock held by syz-executor.2/17028:
3 locks held by syz-executor.3/17464:
 #0: ffffffff8f594a48 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:79 [inline]
 #0: ffffffff8f594a48 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x842/0x10d0 net/core/rtnetlink.c:6592
 #1: ffff888052e91408 (&wg->device_update_lock){+.+.}-{3:3}, at: wg_open+0x22d/0x420 drivers/net/wireguard/device.c:50
 #2: ffffffff8e33a0b8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:291 [inline]
 #2: ffffffff8e33a0b8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x39a/0x820 kernel/rcu/tree_exp.h:939
1 lock held by syz-executor.4/17604:
 #0: ffffffff8f594a48 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:79 [inline]
 #0: ffffffff8f594a48 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x842/0x10d0 net/core/rtnetlink.c:6592

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 29 Comm: khungtaskd Not tainted 6.9.0-rc5-syzkaller-00159-gc942a0cd3603 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/27/2024
Call Trace:
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:114
 nmi_cpu_backtrace+0x49c/0x4d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x198/0x320 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:223 [inline]
 watchdog+0xfde/0x1020 kernel/hung_task.c:380
 kthread+0x2f0/0x390 kernel/kthread.c:388
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 10888 Comm: kworker/u8:9 Not tainted 6.9.0-rc5-syzkaller-00159-gc942a0cd3603 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/27/2024
Workqueue: netns cleanup_net
RIP: 0010:match_held_lock+0x77/0xb0 kernel/locking/lockdep.c:5231
Code: c7 c2 00 64 be 92 48 29 d0 48 c1 f8 03 48 ba 29 5c 8f c2 f5 28 5c 8f 48 0f af d0 bd 01 00 00 00 48 39 ca 74 02 31 ed 89 e8 5b <5d> c3 cc cc cc cc 90 e8 5d 97 2b f9 85 c0 74 22 83 3d 82 b1 30 04
RSP: 0018:ffffc900032ff8d0 EFLAGS: 00000046
RAX: 0000000000000001 RBX: 0000000000000003 RCX: ffffc900032ff903
RDX: 1ffff9200065ff2c RSI: ffffffff8e334d20 RDI: ffff88801fae6550
RBP: 0000000000000001 R08: ffffffff8fa7d2af R09: 1ffffffff1f4fa55
R10: dffffc0000000000 R11: fffffbfff1f4fa56 R12: 0000000000000003
R13: 000000000000000f R14: ffff88801fae64d8 R15: ffff88801fae6550
FS:  0000000000000000(0000) GS:ffff8880b9500000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000055e8a1ae0000 CR3: 000000006d9ce000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 find_held_lock kernel/locking/lockdep.c:5244 [inline]
 __lock_release kernel/locking/lockdep.c:5429 [inline]
 lock_release+0x255/0x9f0 kernel/locking/lockdep.c:5774
 rcu_lock_release include/linux/rcupdate.h:339 [inline]
 rcu_read_unlock include/linux/rcupdate.h:814 [inline]
 cond_resched_rcu+0x9b/0x170 include/linux/rcupdate_wait.h:62
 ip_vs_conn_flush net/netfilter/ipvs/ip_vs_conn.c:1393 [inline]
 ip_vs_conn_net_cleanup+0x3a3/0x560 net/netfilter/ipvs/ip_vs_conn.c:1475
 __ip_vs_cleanup_batch+0x74/0x100 net/netfilter/ipvs/ip_vs_core.c:2347
 ops_exit_list net/core/net_namespace.c:175 [inline]
 cleanup_net+0x89d/0xcc0 net/core/net_namespace.c:637
 process_one_work kernel/workqueue.c:3254 [inline]
 process_scheduled_works+0xa10/0x17c0 kernel/workqueue.c:3335
 worker_thread+0x86d/0xd70 kernel/workqueue.c:3416
 kthread+0x2f0/0x390 kernel/kthread.c:388
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244