syzbot

INFO: task hung in blk_freeze_queue

Status: premoderation: reported on 2025/07/19 10:05
Reported-by: syzbot+72c30d74dc04ac3a1ef2@syzkaller.appspotmail.com
First crash: 40d, last: 40d
Similar bugs (6)
Kernel | Title | Rank | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
upstream | INFO: task hung in blk_freeze_queue (4) [block] | 1 | | | | 2 | 852d | 854d | 0/29 | auto-obsoleted due to no activity on 2023/07/28 15:48
upstream | INFO: task hung in blk_freeze_queue [block] | 1 | C | | | 188 | 2608d | 2759d | 8/29 | fixed on 2018/07/09 18:05
upstream | INFO: task hung in blk_freeze_queue (3) [arm] | 1 | C | | | 8 | 1013d | 1093d | 22/29 | fixed on 2023/02/24 13:50
upstream | INFO: task hung in blk_freeze_queue (5) [block] | 1 | | | | 1 | 517d | 517d | 0/29 | auto-obsoleted due to no activity on 2024/06/27 12:04
upstream | INFO: task hung in blk_freeze_queue (2) [block] | 1 | | | | 1 | 2510d | 2510d | 0/29 | auto-closed as invalid on 2019/04/12 00:55
linux-6.1 | INFO: task hung in blk_freeze_queue [origin:upstream] | 1 | C | error | | 1 | 352d | 812d | 0/3 | upstream: reported C repro on 2023/06/08 14:42

Sample crash report:
INFO: task syz.3.2759:7863 blocked for more than 122 seconds.
      Not tainted 6.12.30-syzkaller-g73009db42b37 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.3.2759      state:D stack:0     pid:7863  tgid:7861  ppid:6035   flags:0x00000004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5945 [inline]
 __schedule+0x132b/0x1e00 kernel/sched/core.c:7789
 __schedule_loop kernel/sched/core.c:7870 [inline]
 schedule+0xc6/0x240 kernel/sched/core.c:7885
 blk_mq_freeze_queue_wait+0xe6/0x170 block/blk-mq.c:204
 blk_freeze_queue+0xc7/0x100 block/blk-mq.c:231
 blk_mq_freeze_queue+0x19/0x30 block/blk-mq.c:240
 loop_set_block_size drivers/block/loop.c:1434 [inline]
 lo_simple_ioctl drivers/block/loop.c:1458 [inline]
 lo_ioctl+0x1087/0x20a0 drivers/block/loop.c:1521
 blkdev_ioctl+0x546/0x680 block/ioctl.c:693
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:907 [inline]
 __se_sys_ioctl+0x132/0x1b0 fs/ioctl.c:893
 __x64_sys_ioctl+0x7f/0xa0 fs/ioctl.c:893
 x64_sys_call+0x1878/0x2ee0 arch/x86/include/generated/asm/syscalls_64.h:17
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0x58/0xf0 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x76/0x7e
RIP: 0033:0x7fe45398e9a9
RSP: 002b:00007fe454835038 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007fe453bb5fa0 RCX: 00007fe45398e9a9
RDX: 0000000080000001 RSI: 0000000000004c09 RDI: 0000000000000003
RBP: 00007fe453a10d69 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007fe453bb5fa0 R15: 00007ffee96242c8
 </TASK>
NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 37 Comm: khungtaskd Not tainted 6.12.30-syzkaller-g73009db42b37 #0 e78244f41b0e2dd383d6ada64249d7f830c3c2f3
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
Call Trace:
 <TASK>
 __dump_stack+0x21/0x30 lib/dump_stack.c:94
 dump_stack_lvl+0x10c/0x190 lib/dump_stack.c:120
 dump_stack+0x19/0x20 lib/dump_stack.c:129
 nmi_cpu_backtrace+0x2bf/0x2d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x142/0x2c0 lib/nmi_backtrace.c:62
 arch_trigger_cpumask_backtrace+0x14/0x20 arch/x86/kernel/apic/hw_nmi.c:41
 trigger_all_cpu_backtrace include/linux/nmi.h:158 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:267 [inline]
 watchdog+0xd8f/0xed0 kernel/hung_task.c:423
 kthread+0x2c7/0x370 kernel/kthread.c:389
 ret_from_fork+0x64/0xa0 arch/x86/kernel/process.c:153
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 33 Comm: rcuop/1 Not tainted 6.12.30-syzkaller-g73009db42b37 #0 e78244f41b0e2dd383d6ada64249d7f830c3c2f3
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
RIP: 0010:br_flood+0x59a/0x730 net/bridge/br_forward.c:209
Code: 00 00 00 e8 58 12 04 00 48 81 fb 00 f0 ff ff 0f 87 21 01 00 00 e8 b6 50 8c fc 48 89 5d c0 4c 89 f8 48 c1 e8 03 42 80 3c 28 00 <74> 08 4c 89 ff e8 7c 4b e2 fc 4d 8b 3f 4d 39 e7 74 7c e8 8f 50 8c
RSP: 0018:ffffc900000076a0 EFLAGS: 00000246
RAX: 1ffff11023372403 RBX: ffff888119b92000 RCX: ffff888103679300
RDX: 0000000000000100 RSI: 0000000000000000 RDI: 0000000000000000
RBP: ffffc90000007750 R08: 0000000000000001 R09: 0000000000000003
R10: 0000000000000002 R11: 0000000000000100 R12: ffff88810a3a0b80
R13: dffffc0000000000 R14: 0000000000000001 R15: ffff888119b92018
FS:  0000000000000000(0000) GS:ffff8881f6e00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f0675b5d060 CR3: 000000010d302000 CR4: 00000000003526b0
Call Trace:
 <IRQ>
 br_handle_frame_finish+0x12bb/0x1720 net/bridge/br_input.c:215
 nf_hook_bridge_pre net/bridge/br_input.c:301 [inline]
 br_handle_frame+0x5a6/0xba0 net/bridge/br_input.c:424
 __netif_receive_skb_core+0xf4b/0x3940 net/core/dev.c:5651
 __netif_receive_skb_one_core net/core/dev.c:5755 [inline]
 __netif_receive_skb net/core/dev.c:5870 [inline]
 process_backlog+0x3e5/0xae0 net/core/dev.c:6202
 __napi_poll+0xd0/0x610 net/core/dev.c:6853
 napi_poll net/core/dev.c:6922 [inline]
 net_rx_action+0x584/0xce0 net/core/dev.c:7044
 handle_softirqs+0x1ae/0x630 kernel/softirq.c:603
 __do_softirq+0xf/0x16 kernel/softirq.c:641
 do_softirq+0xa6/0x100 kernel/softirq.c:485
 </IRQ>
 <TASK>
 __local_bh_enable_ip+0x74/0x80 kernel/softirq.c:412
 local_bh_enable include/linux/bottom_half.h:33 [inline]
 rcu_do_batch+0x5c6/0xd20 kernel/rcu/tree.c:2586
 nocb_cb_wait kernel/rcu/tree_nocb.h:923 [inline]
 rcu_nocb_cb_kthread+0x4dc/0xac0 kernel/rcu/tree_nocb.h:957
 kthread+0x2c7/0x370 kernel/kthread.c:389
 ret_from_fork+0x64/0xa0 arch/x86/kernel/process.c:153
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
net_ratelimit: 91149 callbacks suppressed
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:46:4b:da:c1:ab:f2, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:46:4b:da:c1:ab:f2, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
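
The blocked task's registers above decode the exact operation: RSI=0x4c09 is the LOOP_SET_BLOCK_SIZE ioctl command and RDX=0x80000001 is the (invalid) block size the fuzzer requested, matching the loop_set_block_size frame in the trace. A minimal sketch of the same call pattern, for orientation only (this is not the syzkaller reproducer, and it assumes /dev/loop0 is already bound to a backing file):

#include <fcntl.h>
#include <linux/loop.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

int main(void)
{
	/* Assumption: /dev/loop0 exists and has a backing file attached. */
	int fd = open("/dev/loop0", O_RDWR);
	if (fd < 0) {
		perror("open");
		return 1;
	}
	/* LOOP_SET_BLOCK_SIZE (0x4c09) makes the loop driver freeze the
	 * request queue (blk_mq_freeze_queue) before applying the new size;
	 * 512 is used here in place of the fuzzer's bogus 0x80000001. */
	if (ioctl(fd, LOOP_SET_BLOCK_SIZE, 512UL) != 0)
		perror("ioctl(LOOP_SET_BLOCK_SIZE)");
	close(fd);
	return 0;
}

blk_mq_freeze_queue() returns only after blk_mq_freeze_queue_wait() sees the queue's usage counter drain to zero; the 122-second hang above means some reference to the queue was never released, so the ioctl never returns.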

Crashes (1):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2025/07/19 10:04 | android16-6.12 | 73009db42b37 | 7117feec | .config | console log | report | | | info | disk image, vmlinux, kernel image | ci2-android-6-12-rust | INFO: task hung in blk_freeze_queue