INFO: task udevd:5831 blocked for more than 143 seconds.
Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:udevd state:D stack:22760 pid:5831 tgid:5831 ppid:5200 task_flags:0x400140 flags:0x00080001
Call Trace:
context_switch kernel/sched/core.c:5325 [inline]
__schedule+0x1798/0x4cc0 kernel/sched/core.c:6929
__schedule_loop kernel/sched/core.c:7011 [inline]
schedule+0x165/0x360 kernel/sched/core.c:7026
schedule_timeout+0x12b/0x270 kernel/time/sleep_timeout.c:99
wait_for_reconnect drivers/block/nbd.c:1104 [inline]
nbd_handle_cmd drivers/block/nbd.c:1146 [inline]
nbd_queue_rq+0x662/0xf10 drivers/block/nbd.c:1204
blk_mq_dispatch_rq_list+0x4c0/0x1900 block/blk-mq.c:2129
__blk_mq_do_dispatch_sched block/blk-mq-sched.c:168 [inline]
blk_mq_do_dispatch_sched block/blk-mq-sched.c:182 [inline]
__blk_mq_sched_dispatch_requests+0xda4/0x1570 block/blk-mq-sched.c:307
blk_mq_sched_dispatch_requests+0xd7/0x190 block/blk-mq-sched.c:329
blk_mq_run_hw_queue+0x348/0x4f0 block/blk-mq.c:2367
blk_mq_dispatch_list+0xd0c/0xe00 include/linux/spinlock.h:-1
blk_mq_flush_plug_list+0x469/0x550 block/blk-mq.c:2976
__blk_flush_plug+0x3d3/0x4b0 block/blk-core.c:1225
blk_finish_plug block/blk-core.c:1252 [inline]
__submit_bio+0x2d3/0x5a0 block/blk-core.c:651
__submit_bio_noacct_mq block/blk-core.c:724 [inline]
submit_bio_noacct_nocheck+0x2fb/0xa50 block/blk-core.c:755
submit_bh fs/buffer.c:2829 [inline]
block_read_full_folio+0x599/0x830 fs/buffer.c:2447
filemap_read_folio+0x117/0x380 mm/filemap.c:2444
do_read_cache_folio+0x350/0x590 mm/filemap.c:4024
read_mapping_folio include/linux/pagemap.h:999 [inline]
read_part_sector+0xb6/0x2b0 block/partitions/core.c:722
adfspart_check_ICS+0xa4/0xa50 block/partitions/acorn.c:360
check_partition block/partitions/core.c:141 [inline]
blk_add_partitions block/partitions/core.c:589 [inline]
bdev_disk_changed+0x75f/0x14b0 block/partitions/core.c:693
blkdev_get_whole+0x380/0x510 block/bdev.c:748
bdev_open+0x31e/0xd30 block/bdev.c:957
blkdev_open+0x457/0x600 block/fops.c:701
do_dentry_open+0x953/0x13f0 fs/open.c:965
vfs_open+0x3b/0x340 fs/open.c:1097
do_open fs/namei.c:3975 [inline]
path_openat+0x2ee5/0x3830 fs/namei.c:4134
do_filp_open+0x1fa/0x410 fs/namei.c:4161
do_sys_openat2+0x121/0x1c0 fs/open.c:1437
do_sys_open fs/open.c:1452 [inline]
__do_sys_openat fs/open.c:1468 [inline]
__se_sys_openat fs/open.c:1463 [inline]
__x64_sys_openat+0x138/0x170 fs/open.c:1463
do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
do_syscall_64+0xfa/0xfa0 arch/x86/entry/syscall_64.c:94
entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fb2aa8a7407
RSP: 002b:00007ffdcc4bde40 EFLAGS: 00000202 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 00007fb2aaf54880 RCX: 00007fb2aa8a7407
RDX: 00000000000a0800 RSI: 000055bc03eca3f0 RDI: ffffffffffffff9c
RBP: 000055bc03ec0910 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000202 R12: 000055bc03ed5830
R13: 000055bc03ece190 R14: 0000000000000000 R15: 000055bc03ed5830
Showing all locks held in the system:
3 locks held by kworker/0:1/10:
#0: ffff88801a055948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3238 [inline]
#0: ffff88801a055948 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x17b0 kernel/workqueue.c:3346
#1: ffffc900000f7ba0 (drain_vmap_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3239 [inline]
#1: ffffc900000f7ba0 (drain_vmap_work){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x17b0 kernel/workqueue.c:3346
#2: ffffffff8e0429c8 (vmap_purge_lock){+.+.}-{4:4}, at: drain_vmap_area_work+0x17/0x40 mm/vmalloc.c:2395
1 lock held by khungtaskd/31:
#0: ffffffff8df3d2e0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
#0: ffffffff8df3d2e0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:867 [inline]
#0: ffffffff8df3d2e0 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x2e/0x180 kernel/locking/lockdep.c:6775
2 locks held by getty/5592:
#0: ffff88803375d0a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
#1: ffffc9000332b2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x43e/0x1400 drivers/tty/n_tty.c:2222
3 locks held by udevd/5831:
#0: ffff888024f53358 (&disk->open_mutex){+.+.}-{4:4}, at: bdev_open+0xe0/0xd30 block/bdev.c:945
#1: ffff88802425f790 (set->srcu){.+.+}-{0:0}, at: srcu_lock_acquire include/linux/srcu.h:161 [inline]
#1: ffff88802425f790 (set->srcu){.+.+}-{0:0}, at: srcu_read_lock include/linux/srcu.h:253 [inline]
#1: ffff88802425f790 (set->srcu){.+.+}-{0:0}, at: blk_mq_run_hw_queue+0x31f/0x4f0 block/blk-mq.c:2367
#2: ffff88802503e178 (&cmd->lock){+.+.}-{4:4}, at: nbd_queue_rq+0xc8/0xf10 drivers/block/nbd.c:1196
3 locks held by kworker/0:9/12515:
1 lock held by syz.2.3437/18437:
#0: ffffffff8df42c40 (rcu_state.barrier_mutex){+.+.}-{4:4}, at: rcu_barrier+0x4c/0x570 kernel/rcu/tree.c:3820
=============================================
NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 31 Comm: khungtaskd Not tainted syzkaller #0 PREEMPT(full)
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
Call Trace:
dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
nmi_cpu_backtrace+0x39e/0x3d0 lib/nmi_backtrace.c:113
nmi_trigger_cpumask_backtrace+0x17a/0x300 lib/nmi_backtrace.c:62
trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
check_hung_uninterruptible_tasks kernel/hung_task.c:332 [inline]
watchdog+0xf60/0xfa0 kernel/hung_task.c:495
kthread+0x711/0x8a0 kernel/kthread.c:463
ret_from_fork+0x4bc/0x870 arch/x86/kernel/process.c:158
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 12515 Comm: kworker/0:9 Not tainted syzkaller #0 PREEMPT(full)
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
Workqueue: events purge_vmap_node
RIP: 0010:arch_atomic_read arch/x86/include/asm/atomic.h:23 [inline]
RIP: 0010:raw_atomic_read include/linux/atomic/atomic-arch-fallback.h:457 [inline]
RIP: 0010:rcu_is_watching_curr_cpu include/linux/context_tracking.h:128 [inline]
RIP: 0010:rcu_is_watching+0x5a/0xb0 kernel/rcu/tree.c:751
Code: f0 48 c1 e8 03 42 80 3c 38 00 74 08 4c 89 f7 e8 1c 6b 7f 00 48 c7 c3 d8 4f 6f 92 49 03 1e 48 89 d8 48 c1 e8 03 42 0f b6 04 38 <84> c0 75 34 8b 03 65 ff 0d 19 d2 c7 10 74 11 83 e0 04 c1 e8 02 5b
RSP: 0018:ffffc90011887338 EFLAGS: 00000a02
RAX: 0000000000000000 RBX: ffff8880b8832fd8 RCX: 258d2f870df4d200
RDX: 0000000000000000 RSI: ffffffff8bbf0440 RDI: ffffffff8bbf0400
RBP: dffffc0000000000 R08: 0000000000000000 R09: ffffffff81738d25
R10: ffffc90011887478 R11: ffffffff81ac2cb0 R12: 1ffff92002310e85
R13: ffffc90011887460 R14: ffffffff8d92cdd0 R15: dffffc0000000000
FS: 0000000000000000(0000) GS:ffff88812613e000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fb9439b4198 CR3: 0000000078650000 CR4: 00000000003526f0
Call Trace:
rcu_read_lock include/linux/rcupdate.h:868 [inline]
class_rcu_constructor include/linux/rcupdate.h:1195 [inline]
unwind_next_frame+0xd4/0x2390 arch/x86/kernel/unwind_orc.c:479
arch_stack_walk+0x11c/0x150 arch/x86/kernel/stacktrace.c:25
stack_trace_save+0x9c/0xe0 kernel/stacktrace.c:122
save_stack+0xf5/0x1f0 mm/page_owner.c:156
__reset_page_owner+0x71/0x1f0 mm/page_owner.c:311
reset_page_owner include/linux/page_owner.h:25 [inline]
free_pages_prepare mm/page_alloc.c:1394 [inline]
__free_frozen_pages+0xbc4/0xd30 mm/page_alloc.c:2906
kasan_depopulate_vmalloc_pte+0x6d/0x90 mm/kasan/shadow.c:495
apply_to_pte_range mm/memory.c:3142 [inline]
apply_to_pmd_range mm/memory.c:3186 [inline]
apply_to_pud_range mm/memory.c:3222 [inline]
apply_to_p4d_range mm/memory.c:3258 [inline]
__apply_to_page_range+0xb66/0x13d0 mm/memory.c:3294
kasan_release_vmalloc+0xa2/0xd0 mm/kasan/shadow.c:616
kasan_release_vmalloc_node mm/vmalloc.c:2255 [inline]
purge_vmap_node+0x214/0x8f0 mm/vmalloc.c:2272
process_one_work kernel/workqueue.c:3263 [inline]
process_scheduled_works+0xae1/0x17b0 kernel/workqueue.c:3346
worker_thread+0x8a0/0xda0 kernel/workqueue.c:3427
kthread+0x711/0x8a0 kernel/kthread.c:463
ret_from_fork+0x4bc/0x870 arch/x86/kernel/process.c:158
ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
GRED: Unable to relocate VQ 0x0 after dequeue, screwing up backlog