INFO: task udevd:7842 blocked for more than 145 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:udevd           state:D stack:25712 pid:7842  ppid:5161   flags:0x00004006
Call Trace:
 context_switch kernel/sched/core.c:5380 [inline]
 __schedule+0x14d2/0x44d0 kernel/sched/core.c:6699
 schedule+0xbd/0x170 kernel/sched/core.c:6773
 io_schedule+0x80/0xd0 kernel/sched/core.c:9022
 folio_wait_bit_common+0x6eb/0xf70 mm/filemap.c:1329
 folio_put_wait_locked mm/filemap.c:1493 [inline]
 do_read_cache_folio+0x1c0/0x7e0 mm/filemap.c:3771
 read_mapping_folio include/linux/pagemap.h:898 [inline]
 read_part_sector+0xd2/0x350 block/partitions/core.c:718
 adfspart_check_POWERTEC+0x8d/0xf00 block/partitions/acorn.c:454
 check_partition block/partitions/core.c:138 [inline]
 blk_add_partitions block/partitions/core.c:600 [inline]
 bdev_disk_changed+0x73a/0x1410 block/partitions/core.c:689
 blkdev_get_whole+0x30d/0x390 block/bdev.c:672
 blkdev_get_by_dev+0x279/0x600 block/bdev.c:814
 blkdev_open+0x152/0x360 block/fops.c:589
 do_dentry_open+0x8c6/0x1500 fs/open.c:929
 do_open fs/namei.c:3640 [inline]
 path_openat+0x274b/0x3190 fs/namei.c:3797
 do_filp_open+0x1c5/0x3d0 fs/namei.c:3824
 do_sys_openat2+0x12c/0x1c0 fs/open.c:1419
 do_sys_open fs/open.c:1434 [inline]
 __do_sys_openat fs/open.c:1450 [inline]
 __se_sys_openat fs/open.c:1445 [inline]
 __x64_sys_openat+0x139/0x160 fs/open.c:1445
 do_syscall_x64 arch/x86/entry/common.c:51 [inline]
 do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
 entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f937cfdc407
RSP: 002b:00007ffdd00c2970 EFLAGS: 00000202 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 00007f937cf50880 RCX: 00007f937cfdc407
RDX: 00000000000a0800 RSI: 000055604f28c2d0 RDI: ffffffffffffff9c
RBP: 000055604f28b910 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000202 R12: 000055604f2bc750
R13: 000055604f2a3410 R14: 0000000000000000 R15: 000055604f2bc750

Showing all locks held in the system:
2 locks held by kworker/1:1/27:
1 lock held by khungtaskd/29:
 #0: ffffffff8cd2fee0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:334 [inline]
 #0: ffffffff8cd2fee0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:786 [inline]
 #0: ffffffff8cd2fee0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x55/0x290 kernel/locking/lockdep.c:6633
3 locks held by kworker/1:3/1195:
2 locks held by getty/5554:
 #0: ffff8880315b20a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc9000327b2f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x425/0x1380 drivers/tty/n_tty.c:2217
3 locks held by kworker/1:7/7028:
 #0: ffff8880b8e3c218 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0xad/0x140 kernel/sched/core.c:566
 #1: ffff8880b8f289c0 (psi_seq){-.-.}-{0:0}, at: psi_sched_switch kernel/sched/stats.h:189 [inline]
 #1: ffff8880b8f289c0 (psi_seq){-.-.}-{0:0}, at: __schedule+0x20ee/0x44d0 kernel/sched/core.c:6694
 #2: ffffffff8cd2fee0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:334 [inline]
 #2: ffffffff8cd2fee0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:786 [inline]
 #2: ffffffff8cd2fee0 (rcu_read_lock){....}-{1:2}, at: gc_worker+0x269/0x14b0 net/netfilter/nf_conntrack_core.c:1502
1 lock held by udevd/7842:
 #0: ffff8880219ca4c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x121/0x600 block/bdev.c:805
1 lock held by syz.4.859/10375:
 #0: ffffffff8cd358b8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:324 [inline]
 #0: ffffffff8cd358b8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x360/0x830 kernel/rcu/tree_exp.h:1004
3 locks held by kvm-nx-lpage-re/10386:
 #0: ffffffff8cd59848 (cgroup_mutex){+.+.}-{3:3}, at: cgroup_lock include/linux/cgroup.h:369 [inline]
 #0: ffffffff8cd59848 (cgroup_mutex){+.+.}-{3:3}, at: cgroup_attach_task_all+0x26/0xe0 kernel/cgroup/cgroup-v1.c:61
 #1: ffffffff8cbcb1d0 (cpu_hotplug_lock){++++}-{0:0}, at: cgroup_attach_lock+0x11/0x30 kernel/cgroup/cgroup.c:2414
 #2: ffffffff8cd59a30 (cgroup_threadgroup_rwsem){++++}-{0:0}, at: cgroup_attach_task_all+0x30/0xe0 kernel/cgroup/cgroup-v1.c:62

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 29 Comm: khungtaskd Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
Call Trace:
 dump_stack_lvl+0x16c/0x230 lib/dump_stack.c:106
 nmi_cpu_backtrace+0x39b/0x3d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x17a/0x2f0 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:222 [inline]
 watchdog+0xf41/0xf80 kernel/hung_task.c:379
 kthread+0x2fa/0x390 kernel/kthread.c:388
 ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
 ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 5150 Comm: klogd Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
RIP: 0010:get_current arch/x86/include/asm/current.h:41 [inline]
RIP: 0010:check_stack_object+0x10/0xa0 mm/usercopy.c:39
Code: ff e9 e5 fc ff ff e8 7f a9 9b ff e9 c1 fd ff ff 66 2e 0f 1f 84 00 00 00 00 00 41 57 41 56 53 49 89 f6 48 89 fb e8 60 a9 9b ff <65> 4c 8b 3d 88 d1 19 7e 49 83 c7 20 4c 89 f8 48 c1 e8 03 48 b9 00
RSP: 0018:ffffc900032c7a18 EFLAGS: 00000293
RAX: ffffffff81e9e630 RBX: ffff888030973400 RCX: ffff88807d455a00
RDX: 0000000000000000 RSI: 0000000000000039 RDI: ffff888030973400
RBP: 0000000000000000 R08: ffff88807da10a77 R09: 1ffff1100fb4214e
R10: dffffc0000000000 R11: ffffed100fb4214f R12: 0000000000000039
R13: ffff888030973400 R14: 0000000000000039 R15: ffffffffffffffc7
FS:  00007f986fee8c80(0000) GS:ffff8880b8e00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007ffeea36da6c CR3: 000000007df61000 CR4: 00000000003526f0
Call Trace:
 __check_object_size+0x62/0xa30 mm/usercopy.c:226
 check_object_size include/linux/thread_info.h:215 [inline]
 check_copy_size include/linux/thread_info.h:251 [inline]
 copy_from_iter include/linux/uio.h:208 [inline]
 skb_copy_datagram_from_iter+0xe4/0x6e0 net/core/datagram.c:561
 unix_dgram_sendmsg+0x652/0x1720 net/unix/af_unix.c:2014
 sock_sendmsg_nosec net/socket.c:730 [inline]
 __sock_sendmsg net/socket.c:745 [inline]
 __sys_sendto+0x46a/0x620 net/socket.c:2201
 __do_sys_sendto net/socket.c:2213 [inline]
 __se_sys_sendto net/socket.c:2209 [inline]
 __x64_sys_sendto+0xde/0xf0 net/socket.c:2209
 do_syscall_x64 arch/x86/entry/common.c:51 [inline]
 do_syscall_64+0x55/0xb0 arch/x86/entry/common.c:81
 entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f9870038407
Code: 48 89 fa 4c 89 df e8 38 aa 00 00 8b 93 08 03 00 00 59 5e 48 83 f8 fc 74 1a 5b c3 0f 1f 84 00 00 00 00 00 48 8b 44 24 10 0f 05 <5b> c3 0f 1f 80 00 00 00 00 83 e2 39 83 fa 08 75 de e8 23 ff ff ff
RSP: 002b:00007ffde340aa90 EFLAGS: 00000202 ORIG_RAX: 000000000000002c
RAX: ffffffffffffffda RBX: 00007f986fee8c80 RCX: 00007f9870038407
RDX: 0000000000000039 RSI: 00007ffde340abd0 RDI: 0000000000000003
RBP: 00007ffde340b000 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000004000 R11: 0000000000000202 R12: 00007ffde340b018
R13: 00007ffde340abd0 R14: 000000000000001e R15: 00007ffde340abd0