syzbot
INFO: task hung in read_part_sector (3)

Status: upstream: reported on 2025/07/28 11:29
Reported-by: syzbot+910d3e8c08500bfcbc85@syzkaller.appspotmail.com
First crash: 147d, last: 13d
Similar bugs (6)
Kernel | Title | Rank | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
linux-5.15 | INFO: task hung in read_part_sector origin:upstream | 1 | C | error | | 302 | 2h17m | 877d | 0/3 | upstream: reported C repro on 2023/07/29 13:11
linux-6.1 | INFO: task hung in read_part_sector | 1 | | | | 1 | 684d | 684d | 0/3 | auto-obsoleted due to no activity on 2024/05/17 07:34
upstream | INFO: task hung in read_part_sector block | 1 | | | | 2 | 700d | 766d | 0/29 | auto-obsoleted due to no activity on 2024/04/21 10:47
linux-6.6 | INFO: task hung in read_part_sector | 1 | | | | 215 | 15h40m | 185d | 0/2 | upstream: reported on 2025/06/19 22:23
linux-6.1 | INFO: task hung in read_part_sector (2) | 1 | | | | 1 | 562d | 562d | 0/3 | auto-obsoleted due to no activity on 2024/09/16 08:59
upstream | INFO: task hung in read_part_sector (2) block | 1 | syz | error | | 8875 | 17m | 504d | 0/29 | upstream: reported syz repro on 2024/08/04 19:22

Sample crash report:
INFO: task udevd:4751 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:udevd           state:D stack:23040 pid:4751  ppid:3636   flags:0x00004006
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5244 [inline]
 __schedule+0x10ec/0x40b0 kernel/sched/core.c:6561
 schedule+0xb9/0x180 kernel/sched/core.c:6637
 io_schedule+0x7c/0xd0 kernel/sched/core.c:8797
 folio_wait_bit_common+0x6e1/0xf60 mm/filemap.c:1324
 folio_put_wait_locked mm/filemap.c:1493 [inline]
 do_read_cache_folio+0x1a9/0x760 mm/filemap.c:3641
 read_mapping_folio include/linux/pagemap.h:799 [inline]
 read_part_sector+0xce/0x350 block/partitions/core.c:724
 adfspart_check_POWERTEC+0xb4/0x870 block/partitions/acorn.c:454
 check_partition block/partitions/core.c:146 [inline]
 blk_add_partitions block/partitions/core.c:609 [inline]
 bdev_disk_changed+0x7bd/0x1480 block/partitions/core.c:695
 blkdev_get_whole+0x2e8/0x370 block/bdev.c:704
 blkdev_get_by_dev+0x32e/0xa60 block/bdev.c:841
 blkdev_open+0x11e/0x2e0 block/fops.c:500
 do_dentry_open+0x7e9/0x10d0 fs/open.c:882
 do_open fs/namei.c:3634 [inline]
 path_openat+0x25c6/0x2e70 fs/namei.c:3791
 do_filp_open+0x1c1/0x3c0 fs/namei.c:3818
 do_sys_openat2+0x142/0x490 fs/open.c:1320
 do_sys_open fs/open.c:1336 [inline]
 __do_sys_openat fs/open.c:1352 [inline]
 __se_sys_openat fs/open.c:1347 [inline]
 __x64_sys_openat+0x135/0x160 fs/open.c:1347
 do_syscall_x64 arch/x86/entry/common.c:51 [inline]
 do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:81
 entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f3468ca7407
RSP: 002b:00007ffef772dd70 EFLAGS: 00000202 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 00007f3469418880 RCX: 00007f3468ca7407
RDX: 00000000000a0800 RSI: 000055973f7ba430 RDI: ffffffffffffff9c
RBP: 000055973f7b9910 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000202 R12: 000055973f7cdba0
R13: 000055973f7d1410 R14: 0000000000000000 R15: 000055973f7cdba0
 </TASK>

Showing all locks held in the system:
1 lock held by rcu_tasks_kthre/12:
 #0: ffffffff8c92bab0 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x33/0xf00 kernel/rcu/tasks.h:517
2 locks held by rcu_tasks_trace/13:
 #0: ffffffff8c92c2d0 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x33/0xf00 kernel/rcu/tasks.h:517
 #1: ffffffff8c930df8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:323 [inline]
 #1: ffffffff8c930df8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x346/0x830 kernel/rcu/tree_exp.h:962
1 lock held by khungtaskd/28:
 #0: ffffffff8c92b120 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:350 [inline]
 #0: ffffffff8c92b120 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:791 [inline]
 #0: ffffffff8c92b120 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x51/0x290 kernel/locking/lockdep.c:6513
1 lock held by udevd/3636:
2 locks held by getty/4026:
 #0: ffff88814ccd5098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x21/0x70 drivers/tty/tty_ldisc.c:244
 #1: ffffc9000327b2f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x41b/0x1380 drivers/tty/n_tty.c:2198
3 locks held by kworker/1:3/4307:
 #0: ffff88814c555138 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
 #1: ffffc900043d7d00 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
 #2: ffffffff8db3b3a8 (rtnl_mutex){+.+.}-{3:3}, at: addrconf_dad_work+0xc4/0x14d0 net/ipv6/addrconf.c:4131
2 locks held by kworker/0:7/4373:
 #0: ffff888017472138 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
 #1: ffffc900049b7d00 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
1 lock held by udevd/4751:
 #0: ffff8880250fc4c8 (&disk->open_mutex){+.+.}-{3:3}, at: blkdev_get_by_dev+0x13d/0xa60 block/bdev.c:832
3 locks held by kworker/0:13/6915:
 #0: ffff88814c555138 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
 #1: ffffc90005157d00 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
 #2: ffffffff8db3b3a8 (rtnl_mutex){+.+.}-{3:3}, at: addrconf_dad_work+0xc4/0x14d0 net/ipv6/addrconf.c:4131
3 locks held by kworker/0:14/6916:
 #0: ffff888017470938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
 #1: ffffc90005177d00 (fqdir_free_work){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
 #2: ffffffff8c930cc0 (rcu_state.barrier_mutex){+.+.}-{3:3}, at: rcu_barrier+0x48/0x600 kernel/rcu/tree.c:4023
6 locks held by kworker/u4:22/7608:
 #0: ffff888017616938 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
 #1: ffffc90005057d00 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x7a1/0x1160 kernel/workqueue.c:2267
 #2: ffffffff8db2e6d0 (pernet_ops_rwsem){++++}-{3:3}, at: cleanup_net+0x132/0xb80 net/core/net_namespace.c:594
 #3: ffffffff8db3b3a8 (rtnl_mutex){+.+.}-{3:3}, at: caif_exit_net+0x66/0x490 net/caif/caif_dev.c:527
 #4: ffff888063c82480 (&caifn->caifdevs.lock){+.+.}-{3:3}, at: caif_exit_net+0x79/0x490 net/caif/caif_dev.c:528
 #5: ffffffff8c930df8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:323 [inline]
 #5: ffffffff8c930df8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x346/0x830 kernel/rcu/tree_exp.h:962
1 lock held by syz.5.935/9486:
 #0: ffff8880250fc4c8 (&disk->open_mutex){+.+.}-{3:3}, at: sync_bdevs+0x1ab/0x320 block/bdev.c:1071
1 lock held by syz-executor/9734:
 #0: ffffffff8db3b3a8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:74 [inline]
 #0: ffffffff8db3b3a8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x740/0xed0 net/core/rtnetlink.c:6147
1 lock held by syz-executor/9832:
 #0: ffffffff8db3b3a8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:74 [inline]
 #0: ffffffff8db3b3a8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x740/0xed0 net/core/rtnetlink.c:6147
1 lock held by syz-executor/9961:
 #0: ffffffff8db3b3a8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:74 [inline]
 #0: ffffffff8db3b3a8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x740/0xed0 net/core/rtnetlink.c:6147
1 lock held by cmp/10063:

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 28 Comm: khungtaskd Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
Call Trace:
 <TASK>
 dump_stack_lvl+0x168/0x22e lib/dump_stack.c:106
 nmi_cpu_backtrace+0x3f4/0x470 lib/nmi_backtrace.c:111
 nmi_trigger_cpumask_backtrace+0x1d4/0x450 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:220 [inline]
 watchdog+0xeee/0xf30 kernel/hung_task.c:377
 kthread+0x29d/0x330 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
 </TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 4416 Comm: kworker/u4:13 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
Workqueue: phy27 ieee80211_iface_work
RIP: 0010:__should_failslab+0x0/0xf0 mm/failslab.c:18
Code: 89 f9 80 e1 07 80 c1 03 38 c1 0f 8c 64 fd ff ff 4c 89 ff e8 22 88 ff ff e9 57 fd ff ff 00 00 cc cc 00 00 cc cc 00 00 cc cc 00 <41> 57 41 56 41 54 53 89 f3 49 89 fe 49 bc 00 00 00 00 00 fc ff df
RSP: 0018:ffffc90004d6f5e8 EFLAGS: 00000246
RAX: 0000000004208060 RBX: ffffc90004d6f650 RCX: dffffc0000000000
RDX: ffffc90004d6f650 RSI: 0000000000000b20 RDI: ffff888017441dc0
RBP: 0000000000000b20 R08: 0000000000000b20 R09: ffffc90004d6fb10
R10: fffff520009adf66 R11: 1ffff920009adf62 R12: 0000000000000000
R13: 00000000ffffffff R14: ffff888017441dc0 R15: 0000000000000001
FS:  0000000000000000(0000) GS:ffff8880b8f00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007ff78551ae9c CR3: 000000000c68e000 CR4: 00000000003506e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <TASK>
 should_failslab+0x5/0x20 mm/slab_common.c:1440
 slab_pre_alloc_hook+0x59/0x310 mm/slab.h:712
 slab_alloc_node mm/slub.c:3279 [inline]
 __kmem_cache_alloc_node+0x4f/0x260 mm/slub.c:3398
 __do_kmalloc_node mm/slab_common.c:935 [inline]
 __kmalloc+0xa0/0x240 mm/slab_common.c:949
 kmalloc include/linux/slab.h:568 [inline]
 kzalloc include/linux/slab.h:699 [inline]
 ieee802_11_parse_elems_full+0xb2/0x1230 net/mac80211/util.c:1503
 ieee802_11_parse_elems_crc net/mac80211/ieee80211_i.h:2246 [inline]
 ieee802_11_parse_elems net/mac80211/ieee80211_i.h:2253 [inline]
 ieee80211_rx_mgmt_probe_beacon net/mac80211/ibss.c:1605 [inline]
 ieee80211_ibss_rx_queued_mgmt+0x3c4/0x2b10 net/mac80211/ibss.c:1638
 ieee80211_iface_process_skb net/mac80211/iface.c:1679 [inline]
 ieee80211_iface_work+0x726/0xc80 net/mac80211/iface.c:1733
 process_one_work+0x898/0x1160 kernel/workqueue.c:2292
 worker_thread+0xaa2/0x1250 kernel/workqueue.c:2439
 kthread+0x29d/0x330 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
 </TASK>

Crashes (10):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2025/12/09 01:33 | linux-6.1.y | 50cbba13faa2 | d6526ea3 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan | INFO: task hung in read_part_sector
2025/12/08 07:56 | linux-6.1.y | 50cbba13faa2 | d6526ea3 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan | INFO: task hung in read_part_sector
2025/11/30 08:37 | linux-6.1.y | f6e38ae624cf | d6526ea3 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan | INFO: task hung in read_part_sector
2025/11/30 08:35 | linux-6.1.y | f6e38ae624cf | d6526ea3 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan | INFO: task hung in read_part_sector
2025/11/30 08:34 | linux-6.1.y | f6e38ae624cf | d6526ea3 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan | INFO: task hung in read_part_sector
2025/11/05 02:25 | linux-6.1.y | f6e38ae624cf | 686bf657 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan | INFO: task hung in read_part_sector
2025/11/04 09:11 | linux-6.1.y | f6e38ae624cf | 686bf657 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan | INFO: task hung in read_part_sector
2025/10/25 08:51 | linux-6.1.y | 8e6e2188d949 | c0460fcd | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan | INFO: task hung in read_part_sector
2025/10/17 03:12 | linux-6.1.y | c2fda4b3f577 | 19568248 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan | INFO: task hung in read_part_sector
2025/07/28 11:29 | linux-6.1.y | 3594f306da12 | fb8f743d | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-1-kasan | INFO: task hung in read_part_sector
* Struck through repros no longer work on HEAD.