syzbot

INFO: task hung in flush_rcu_work

Status: premoderation: reported on 2025/07/05 04:50
Reported-by: syzbot+4ab0b7fd0138da13bd14@syzkaller.appspotmail.com
First crash: 55d, last: 16d

Sample crash report:
INFO: task syz.0.422:1642 blocked for more than 123 seconds.
      Not tainted 6.12.38-syzkaller-gaffdb774d7ec #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.0.422       state:D stack:0     pid:1642  tgid:1641  ppid:288    flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5945 [inline]
 __schedule+0x1322/0x1df0 kernel/sched/core.c:7791
 __schedule_loop kernel/sched/core.c:7872 [inline]
 schedule+0xc6/0x240 kernel/sched/core.c:7887
 schedule_timeout+0xb2/0x3a0 kernel/time/timer.c:2595
 do_wait_for_common kernel/sched/completion.c:95 [inline]
 __wait_for_common kernel/sched/completion.c:116 [inline]
 wait_for_common+0x359/0x630 kernel/sched/completion.c:127
 wait_for_completion+0x1c/0x40 kernel/sched/completion.c:148
 rcu_barrier+0x415/0x530 kernel/rcu/tree.c:4657
 flush_rcu_work+0x71/0x90 kernel/workqueue.c:4292
 kvfree_rcu_barrier+0x18c/0x2f0 kernel/rcu/tree.c:3936
 kmem_cache_destroy+0x32/0x170 mm/slab_common.c:490
 p9_client_destroy+0x42b/0x480 net/9p/client.c:1088
 v9fs_session_close+0x52/0x1d0 fs/9p/v9fs.c:506
 v9fs_kill_super+0x60/0x90 fs/9p/vfs_super.c:196
 deactivate_locked_super+0xd5/0x2a0 fs/super.c:476
 deactivate_super+0xb8/0xe0 fs/super.c:509
 cleanup_mnt+0x3f1/0x480 fs/namespace.c:1370
 __cleanup_mnt+0x1d/0x40 fs/namespace.c:1377
 task_work_run+0x1e0/0x250 kernel/task_work.c:240
 resume_user_mode_work+0x36/0x50 include/linux/resume_user_mode.h:50
 exit_to_user_mode_loop kernel/entry/common.c:114 [inline]
 exit_to_user_mode_prepare include/linux/entry-common.h:328 [inline]
 __syscall_exit_to_user_mode_work kernel/entry/common.c:207 [inline]
 syscall_exit_to_user_mode+0x64/0xb0 kernel/entry/common.c:218
 do_syscall_64+0x64/0xf0 arch/x86/entry/common.c:89
 entry_SYSCALL_64_after_hwframe+0x76/0x7e
RIP: 0033:0x7fdf4618ebe9
RSP: 002b:00007fdf470a6038 EFLAGS: 00000246 ORIG_RAX: 00000000000000a5
RAX: fffffffffffffffe RBX: 00007fdf463b5fa0 RCX: 00007fdf4618ebe9
RDX: 0000200000000040 RSI: 0000200000000000 RDI: 0000000000000000
RBP: 00007fdf46211e19 R08: 0000200000000280 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fdf463b6038 R14: 00007fdf463b5fa0 R15: 00007fff591d0d08
 </TASK>
NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 37 Comm: khungtaskd Not tainted 6.12.38-syzkaller-gaffdb774d7ec #0 e0a1c643210c02f57d3116610c17ac272852aef3
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/12/2025
Call Trace:
 <TASK>
 __dump_stack+0x21/0x30 lib/dump_stack.c:94
 dump_stack_lvl+0x10c/0x190 lib/dump_stack.c:120
 dump_stack+0x19/0x20 lib/dump_stack.c:129
 nmi_cpu_backtrace+0x2bf/0x2d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x142/0x2c0 lib/nmi_backtrace.c:62
 arch_trigger_cpumask_backtrace+0x14/0x20 arch/x86/kernel/apic/hw_nmi.c:41
 trigger_all_cpu_backtrace include/linux/nmi.h:158 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:267 [inline]
 watchdog+0xd8f/0xed0 kernel/hung_task.c:423
 kthread+0x2ca/0x370 kernel/kthread.c:389
 ret_from_fork+0x67/0xa0 arch/x86/kernel/process.c:153
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 20 Comm: rcuop/0 Not tainted 6.12.38-syzkaller-gaffdb774d7ec #0 e0a1c643210c02f57d3116610c17ac272852aef3
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/12/2025
RIP: 0010:unwind_get_return_address_ptr arch/x86/kernel/unwind_frame.c:28 [inline]
RIP: 0010:update_stack_state+0x388/0x4b0 arch/x86/kernel/unwind_frame.c:251
Code: 00 48 8b 45 d0 49 89 06 48 8b 45 98 42 80 3c 20 00 4c 8b 75 c8 4c 8b 6d c0 74 08 4c 89 f7 e8 9f de 97 00 49 c7 06 00 00 00 00 <48> 83 45 d0 08 eb 0c 48 83 e8 80 48 89 45 d0 4c 8b 6d c0 48 8b 45
RSP: 0018:ffffc90000007018 EFLAGS: 00000246
RAX: 1ffff92000000e38 RBX: ffffc90000007168 RCX: ffffc90000007501
RDX: ffffc900000075d0 RSI: 1ffff92000000e2e RDI: ffffc900000071c0
RBP: ffffc900000070d8 R08: ffffc90000007230 R09: ffffc90000007228
R10: 0000000000000004 R11: ffffffff81743d30 R12: dffffc0000000000
R13: ffffc90000007190 R14: ffffc900000071c0 R15: 1ffff92000000e35
FS:  0000000000000000(0000) GS:ffff8881f6e00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007ffd81f68f9c CR3: 000000012bddc000 CR4: 00000000003526b0
Call Trace:
 <IRQ>
 unwind_next_frame+0x3c2/0x750 arch/x86/kernel/unwind_frame.c:315
 arch_stack_walk+0x139/0x170 arch/x86/kernel/stacktrace.c:25
 stack_trace_save+0x9d/0xe0 kernel/stacktrace.c:122
 kasan_save_stack mm/kasan/common.c:47 [inline]
 kasan_save_track+0x3e/0x80 mm/kasan/common.c:68
 kasan_save_alloc_info+0x40/0x50 mm/kasan/generic.c:565
 unpoison_slab_object mm/kasan/common.c:319 [inline]
 __kasan_slab_alloc+0x73/0x90 mm/kasan/common.c:345
 kasan_slab_alloc include/linux/kasan.h:250 [inline]
 slab_post_alloc_hook mm/slub.c:4169 [inline]
 slab_alloc_node mm/slub.c:4218 [inline]
 kmem_cache_alloc_node_noprof+0x139/0x3b0 mm/slub.c:4272
 kmalloc_reserve+0xcf/0x500 net/core/skbuff.c:596
 __alloc_skb+0x144/0x370 net/core/skbuff.c:687
 alloc_skb include/linux/skbuff.h:1331 [inline]
 nlmsg_new include/net/netlink.h:1015 [inline]
 fdb_notify+0x78/0x150 net/bridge/br_fdb.c:195
 br_fdb_update+0x4b7/0x680 net/bridge/br_fdb.c:941
 br_handle_frame_finish+0x39c/0x1720 net/bridge/br_input.c:141
 nf_hook_bridge_pre net/bridge/br_input.c:301 [inline]
 br_handle_frame+0x5a6/0xba0 net/bridge/br_input.c:424
 __netif_receive_skb_core+0xf48/0x3940 net/core/dev.c:5651
 __netif_receive_skb_one_core net/core/dev.c:5755 [inline]
 __netif_receive_skb net/core/dev.c:5870 [inline]
 process_backlog+0x3e5/0xae0 net/core/dev.c:6206
 __napi_poll+0xd3/0x610 net/core/dev.c:6857
 napi_poll net/core/dev.c:6926 [inline]
 net_rx_action+0x584/0xce0 net/core/dev.c:7048
 handle_softirqs+0x1ab/0x630 kernel/softirq.c:621
 __do_softirq+0xf/0x16 kernel/softirq.c:659
 do_softirq+0xa6/0x100 kernel/softirq.c:503
 </IRQ>
 <TASK>
 __local_bh_enable_ip+0x74/0x80 kernel/softirq.c:430
 local_bh_enable include/linux/bottom_half.h:33 [inline]
 rcu_do_batch+0x5c6/0xd20 kernel/rcu/tree.c:2594
 nocb_cb_wait kernel/rcu/tree_nocb.h:923 [inline]
 rcu_nocb_cb_kthread+0x4dc/0xac0 kernel/rcu/tree_nocb.h:957
 kthread+0x2ca/0x370 kernel/kthread.c:389
 ret_from_fork+0x67/0xa0 arch/x86/kernel/process.c:153
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
net_ratelimit: 137534 callbacks suppressed
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:ce:8a:42:57:9a:b8, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:ce:8a:42:57:9a:b8, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:ce:8a:42:57:9a:b8, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
net_ratelimit: 157559 callbacks suppressed
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:ce:8a:42:57:9a:b8, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:ce:8a:42:57:9a:b8, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)

Crashes (6):
Time              Kernel          Commit        Syzkaller  Manager                Title
2025/08/14 01:11  android16-6.12  affdb774d7ec  22ec1469   ci2-android-6-12-rust  INFO: task hung in flush_rcu_work
2025/08/08 03:49  android16-6.12  209015b548fb  6a893178   ci2-android-6-12-rust  INFO: task hung in flush_rcu_work
2025/07/31 09:00  android16-6.12  cab1c944469e  f8f2b4da   ci2-android-6-12-rust  INFO: task hung in flush_rcu_work
2025/07/28 08:46  android16-6.12  e9bbc29c066a  fb8f743d   ci2-android-6-12-rust  INFO: task hung in flush_rcu_work
2025/07/20 19:08  android16-6.12  73009db42b37  7117feec   ci2-android-6-12-rust  INFO: task hung in flush_rcu_work
2025/07/05 04:49  android16-6.12  e2bf362ee23b  d869b261   ci2-android-6-12-rust  INFO: task hung in flush_rcu_work