syzbot


INFO: task hung in __lru_add_drain_all (2)

Status: upstream: reported syz repro on 2024/05/17 22:28
Subsystems: mm
Reported-by: syzbot+5294aa7d73bb0fa85bd0@syzkaller.appspotmail.com
First crash: 306d, last: 30d
Discussions (3)
Title | Replies (including bot) | Last reply
[syzbot] [mm?] INFO: task hung in __lru_add_drain_all (2) | 0 (2) | 2024/12/18 06:31
[syzbot] Monthly mm report (Sep 2024) | 0 (1) | 2024/09/02 08:17
[syzbot] Monthly mm report (May 2024) | 0 (1) | 2024/05/31 06:48
Similar bugs (4)
Kernel | Title | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
linux-5.15 | INFO: task hung in __lru_add_drain_all | | | | 1 | 664d | 664d | 0/3 | auto-obsoleted due to no activity on 2023/08/21 02:18
linux-6.1 | INFO: task hung in __lru_add_drain_all | | | | 1 | 273d | 273d | 0/3 | auto-obsoleted due to no activity on 2024/08/25 20:43
upstream | INFO: task hung in __lru_add_drain_all net | C | done | error | 71 | 460d | 1260d | 0/28 | auto-obsoleted due to no activity on 2024/02/20 10:46
linux-5.15 | INFO: task hung in __lru_add_drain_all (2) | | | | 1 | 204d | 204d | 0/3 | auto-obsoleted due to no activity on 2024/11/03 03:56
Last patch testing requests (1)
Created | Duration | User | Patch | Repo | Result
2025/01/02 00:18 | 26m | | retest repro | upstream | OK (log)

Sample crash report:
INFO: task syz.1.11:5944 blocked for more than 143 seconds.
      Not tainted 6.13.0-rc7-syzkaller-00043-g619f0b6fad52 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.1.11        state:D stack:23024 pid:5944  tgid:5943  ppid:5826   flags:0x00004006
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5369 [inline]
 __schedule+0x1850/0x4c30 kernel/sched/core.c:6756
 __schedule_loop kernel/sched/core.c:6833 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6848
 schedule_timeout+0xb0/0x290 kernel/time/sleep_timeout.c:75
 do_wait_for_common kernel/sched/completion.c:95 [inline]
 __wait_for_common kernel/sched/completion.c:116 [inline]
 wait_for_common kernel/sched/completion.c:127 [inline]
 wait_for_completion+0x355/0x620 kernel/sched/completion.c:148
 __flush_work+0xa47/0xc60 kernel/workqueue.c:4242
 __lru_add_drain_all+0x4f6/0x560 mm/swap.c:843
 invalidate_bdev+0x76/0xa0 block/bdev.c:101
 xfs_fs_fill_super+0x5cb/0x1590 fs/xfs/xfs_super.c:1825
 get_tree_bdev_flags+0x48e/0x5c0 fs/super.c:1636
 vfs_get_tree+0x92/0x2b0 fs/super.c:1814
 do_new_mount+0x2be/0xb40 fs/namespace.c:3511
 do_mount fs/namespace.c:3851 [inline]
 __do_sys_mount fs/namespace.c:4061 [inline]
 __se_sys_mount+0x2d6/0x3c0 fs/namespace.c:4038
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f3e76b874ca
RSP: 002b:00007f3e7792de68 EFLAGS: 00000246 ORIG_RAX: 00000000000000a5
RAX: ffffffffffffffda RBX: 00007f3e7792def0 RCX: 00007f3e76b874ca
RDX: 00000000200000c0 RSI: 0000000020000100 RDI: 00007f3e7792deb0
RBP: 00000000200000c0 R08: 00007f3e7792def0 R09: 0000000004800802
R10: 0000000004800802 R11: 0000000000000246 R12: 0000000020000100
R13: 00007f3e7792deb0 R14: 000000000000982a R15: 0000000020000000
 </TASK>

Showing all locks held in the system:
2 locks held by kworker/0:0/8:
2 locks held by kworker/u8:0/11:
 #0: ffff88801ac81148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3211 [inline]
 #0: ffff88801ac81148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1840 kernel/workqueue.c:3317
 #1: ffffc90000107d00 (connector_reaper_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3212 [inline]
 #1: ffffc90000107d00 (connector_reaper_work){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1840 kernel/workqueue.c:3317
1 lock held by khungtaskd/30:
 #0: ffffffff8e937ae0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:337 [inline]
 #0: ffffffff8e937ae0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:849 [inline]
 #0: ffffffff8e937ae0 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x55/0x2a0 kernel/locking/lockdep.c:6744
3 locks held by kworker/u8:2/35:
 #0: ffff88814d743948 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3211 [inline]
 #0: ffff88814d743948 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1840 kernel/workqueue.c:3317
 #1: ffffc90000ab7d00 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3212 [inline]
 #1: ffffc90000ab7d00 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1840 kernel/workqueue.c:3317
 #2: ffffffff8fcb2f48 (rtnl_mutex){+.+.}-{4:4}, at: addrconf_dad_work+0xd0/0x16f0 net/ipv6/addrconf.c:4215
4 locks held by kworker/u8:3/36:
 #0: ffff88801baed948 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3211 [inline]
 #0: ffff88801baed948 ((wq_completion)netns){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1840 kernel/workqueue.c:3317
 #1: ffffc90000ac7d00 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3212 [inline]
 #1: ffffc90000ac7d00 (net_cleanup_work){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1840 kernel/workqueue.c:3317
 #2: ffffffff8fca6a90 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0x16a/0xd50 net/core/net_namespace.c:602
 #3: ffff88805611d4e8 (&wg->device_update_lock){+.+.}-{4:4}, at: wg_destruct+0x110/0x2e0 drivers/net/wireguard/device.c:249
3 locks held by kworker/0:2/57:
2 locks held by kworker/u8:4/61:
 #0: ffff88801ac81148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3211 [inline]
 #0: ffff88801ac81148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1840 kernel/workqueue.c:3317
 #1: ffffc9000212fd00 ((reaper_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3212 [inline]
 #1: ffffc9000212fd00 ((reaper_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1840 kernel/workqueue.c:3317
3 locks held by kworker/1:2/93:
 #0: ffff88801ac78948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3211 [inline]
 #0: ffff88801ac78948 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1840 kernel/workqueue.c:3317
 #1: ffffc9000213fd00 (deferred_process_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3212 [inline]
 #1: ffffc9000213fd00 (deferred_process_work){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1840 kernel/workqueue.c:3317
 #2: ffffffff8fcb2f48 (rtnl_mutex){+.+.}-{4:4}, at: switchdev_deferred_process_work+0xe/0x20 net/switchdev/switchdev.c:104
2 locks held by kworker/u8:6/1134:
 #0: ffff88801ac81148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3211 [inline]
 #0: ffff88801ac81148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1840 kernel/workqueue.c:3317
 #1: ffffc90003f9fd00 ((quota_release_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3212 [inline]
 #1: ffffc90003f9fd00 ((quota_release_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1840 kernel/workqueue.c:3317
3 locks held by kworker/u8:8/3463:
 #0: ffff88801ac81148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3211 [inline]
 #0: ffff88801ac81148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1840 kernel/workqueue.c:3317
 #1: ffffc9000ca37d00 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3212 [inline]
 #1: ffffc9000ca37d00 ((linkwatch_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1840 kernel/workqueue.c:3317
 #2: ffffffff8fcb2f48 (rtnl_mutex){+.+.}-{4:4}, at: linkwatch_event+0xe/0x60 net/core/link_watch.c:285
2 locks held by dhcpcd/5493:
 #0: ffff888056ad86c8 (nlk_cb_mutex-ROUTE){+.+.}-{4:4}, at: netlink_dump+0xcb/0xe10 net/netlink/af_netlink.c:2263
 #1: ffffffff8fcb2f48 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:79 [inline]
 #1: ffffffff8fcb2f48 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_dumpit+0x99/0x200 net/core/rtnetlink.c:6790
2 locks held by getty/5581:
 #0: ffff888034c900a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc9000332b2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x6a6/0x1e00 drivers/tty/n_tty.c:2211
2 locks held by syz-executor/5823:
 #0: ffff8880248ea0e0 (&type->s_umount_key#32){++++}-{4:4}, at: __super_lock fs/super.c:56 [inline]
 #0: ffff8880248ea0e0 (&type->s_umount_key#32){++++}-{4:4}, at: __super_lock_excl fs/super.c:71 [inline]
 #0: ffff8880248ea0e0 (&type->s_umount_key#32){++++}-{4:4}, at: deactivate_super+0xb5/0xf0 fs/super.c:505
 #1: ffffffff8e9f9e88 (lock#3){+.+.}-{4:4}, at: __lru_add_drain_all+0x66/0x560 mm/swap.c:798
1 lock held by syz-executor/5824:
 #0: ffff88807dbb40e0 (&type->s_umount_key#32){++++}-{4:4}, at: __super_lock fs/super.c:56 [inline]
 #0: ffff88807dbb40e0 (&type->s_umount_key#32){++++}-{4:4}, at: __super_lock_excl fs/super.c:71 [inline]
 #0: ffff88807dbb40e0 (&type->s_umount_key#32){++++}-{4:4}, at: deactivate_super+0xb5/0xf0 fs/super.c:505
3 locks held by kworker/0:3/5827:
3 locks held by kworker/u9:7/5843:
 #0: ffff888054fd1948 ((wq_completion)hci8){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3211 [inline]
 #0: ffff888054fd1948 ((wq_completion)hci8){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1840 kernel/workqueue.c:3317
 #1: ffffc90003cbfd00 ((work_completion)(&hdev->cmd_sync_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3212 [inline]
 #1: ffffc90003cbfd00 ((work_completion)(&hdev->cmd_sync_work)){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1840 kernel/workqueue.c:3317
 #2: ffff88802f7e4d80 (&hdev->req_lock){+.+.}-{4:4}, at: hci_cmd_sync_work+0x1ec/0x400 net/bluetooth/hci_sync.c:331
5 locks held by kworker/0:5/5885:
2 locks held by kworker/0:7/5887:
2 locks held by syz.1.11/5944:
 #0: ffff88805af160e0 (&type->s_umount_key#49/1){+.+.}-{4:4}, at: alloc_super+0x221/0x9d0 fs/super.c:344
 #1: ffffffff8e9f9e88 (lock#3){+.+.}-{4:4}, at: __lru_add_drain_all+0x66/0x560 mm/swap.c:798
3 locks held by kworker/1:7/5953:
 #0: ffff88801ac79948 ((wq_completion)events_power_efficient){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3211 [inline]
 #0: ffff88801ac79948 ((wq_completion)events_power_efficient){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1840 kernel/workqueue.c:3317
 #1: ffffc900043afd00 ((reg_check_chans).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3212 [inline]
 #1: ffffc900043afd00 ((reg_check_chans).work){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1840 kernel/workqueue.c:3317
 #2: ffffffff8fcb2f48 (rtnl_mutex){+.+.}-{4:4}, at: reg_check_chans_work+0x99/0xfb0 net/wireless/reg.c:2480
3 locks held by kworker/u8:9/5985:
1 lock held by syz-executor/6030:
 #0: ffff888054d180e0 (&type->s_umount_key#32){++++}-{4:4}, at: __super_lock fs/super.c:56 [inline]
 #0: ffff888054d180e0 (&type->s_umount_key#32){++++}-{4:4}, at: __super_lock_excl fs/super.c:71 [inline]
 #0: ffff888054d180e0 (&type->s_umount_key#32){++++}-{4:4}, at: deactivate_super+0xb5/0xf0 fs/super.c:505
1 lock held by syz.0.41/6057:
 #0: ffff8880222e4c68 (&ep->mtx){+.+.}-{4:4}, at: eventpoll_release_file+0xd3/0x280 fs/eventpoll.c:1136
1 lock held by syz-executor/6105:
 #0: ffff8880122480e0 (&type->s_umount_key#64){+.+.}-{4:4}, at: __super_lock fs/super.c:56 [inline]
 #0: ffff8880122480e0 (&type->s_umount_key#64){+.+.}-{4:4}, at: __super_lock_excl fs/super.c:71 [inline]
 #0: ffff8880122480e0 (&type->s_umount_key#64){+.+.}-{4:4}, at: deactivate_super+0xb5/0xf0 fs/super.c:505
2 locks held by syz.7.60/6165:
 #0: ffff888025896b60 (&lo->lo_mutex){+.+.}-{4:4}, at: loop_set_status+0x2a/0x8f0 drivers/block/loop.c:1251
 #1: ffffffff8e9f9e88 (lock#3){+.+.}-{4:4}, at: __lru_add_drain_all+0x66/0x560 mm/swap.c:798
1 lock held by udevd/6198:
 #0: ffffffff8eeba5e8 (uuid_mutex){+.+.}-{4:4}, at: btrfs_control_ioctl+0x150/0x410 fs/btrfs/super.c:2238
2 locks held by syz.8.67/6227:
 #0: ffffffff8eeba5e8 (uuid_mutex){+.+.}-{4:4}, at: btrfs_get_tree_super fs/btrfs/super.c:1841 [inline]
 #0: ffffffff8eeba5e8 (uuid_mutex){+.+.}-{4:4}, at: btrfs_get_tree+0x309/0x1a30 fs/btrfs/super.c:2093
 #1: ffffffff8e9f9e88 (lock#3){+.+.}-{4:4}, at: __lru_add_drain_all+0x66/0x560 mm/swap.c:798
1 lock held by udevd/6262:
 #0: ffffffff8eeba5e8 (uuid_mutex){+.+.}-{4:4}, at: btrfs_control_ioctl+0x150/0x410 fs/btrfs/super.c:2238
2 locks held by syz-executor/6273:
 #0: ffffffff8fcb2f48 (rtnl_mutex){+.+.}-{4:4}, at: tun_detach drivers/net/tun.c:698 [inline]
 #0: ffffffff8fcb2f48 (rtnl_mutex){+.+.}-{4:4}, at: tun_chr_close+0x3b/0x1b0 drivers/net/tun.c:3517
 #1: ffffffff8e93cff8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock kernel/rcu/tree_exp.h:297 [inline]
 #1: ffffffff8e93cff8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: synchronize_rcu_expedited+0x381/0x830 kernel/rcu/tree_exp.h:976
3 locks held by syz-executor/6288:
 #0: ffffffff8fd15970 (cb_lock){++++}-{4:4}, at: genl_rcv+0x19/0x40 net/netlink/genetlink.c:1218
 #1: ffffffff8fd15828 (genl_mutex){+.+.}-{4:4}, at: genl_lock net/netlink/genetlink.c:35 [inline]
 #1: ffffffff8fd15828 (genl_mutex){+.+.}-{4:4}, at: genl_op_lock net/netlink/genetlink.c:60 [inline]
 #1: ffffffff8fd15828 (genl_mutex){+.+.}-{4:4}, at: genl_rcv_msg+0x121/0xec0 net/netlink/genetlink.c:1209
 #2: ffffffff8fcb2f48 (rtnl_mutex){+.+.}-{4:4}, at: wg_set_device+0x102/0x2160 drivers/net/wireguard/netlink.c:504
1 lock held by syz.3.89/6304:
 #0: ffffffff8eeba5e8 (uuid_mutex){+.+.}-{4:4}, at: btrfs_get_tree_super fs/btrfs/super.c:1841 [inline]
 #0: ffffffff8eeba5e8 (uuid_mutex){+.+.}-{4:4}, at: btrfs_get_tree+0x309/0x1a30 fs/btrfs/super.c:2093
7 locks held by syz-executor/6307:
 #0: ffff888030ef2420 (sb_writers#8){.+.+}-{0:0}, at: file_start_write include/linux/fs.h:2964 [inline]
 #0: ffff888030ef2420 (sb_writers#8){.+.+}-{0:0}, at: vfs_write+0x225/0xd30 fs/read_write.c:675
 #1: ffff888043d78888 (&of->mutex){+.+.}-{4:4}, at: kernfs_fop_write_iter+0x1ea/0x500 fs/kernfs/file.c:325
 #2: ffff888144b485a8 (kn->active#49){.+.+}-{0:0}, at: kernfs_fop_write_iter+0x20e/0x500 fs/kernfs/file.c:326
 #3: ffffffff8f55e988 (nsim_bus_dev_list_lock){+.+.}-{4:4}, at: del_device_store+0xfc/0x480 drivers/net/netdevsim/bus.c:216
 #4: ffff888058f9d0e8 (&dev->mutex){....}-{4:4}, at: device_lock include/linux/device.h:1014 [inline]
 #4: ffff888058f9d0e8 (&dev->mutex){....}-{4:4}, at: __device_driver_lock drivers/base/dd.c:1095 [inline]
 #4: ffff888058f9d0e8 (&dev->mutex){....}-{4:4}, at: device_release_driver_internal+0xce/0x7c0 drivers/base/dd.c:1293
 #5: ffff888058f9e250 (&devlink->lock_key){+.+.}-{4:4}, at: nsim_drv_remove+0x50/0x160 drivers/net/netdevsim/dev.c:1675
 #6: ffffffff8fcb2f48 (rtnl_mutex){+.+.}-{4:4}, at: nsim_destroy+0x71/0x5c0 drivers/net/netdevsim/netdev.c:816
1 lock held by syz-executor/6317:
 #0: ffffffff8fcb2f48 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:79 [inline]
 #0: ffffffff8fcb2f48 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:326 [inline]
 #0: ffffffff8fcb2f48 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0xce2/0x2210 net/core/rtnetlink.c:4011
1 lock held by syz-executor/6336:
 #0: ffffffff8fcb2f48 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:79 [inline]
 #0: ffffffff8fcb2f48 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:326 [inline]
 #0: ffffffff8fcb2f48 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0xce2/0x2210 net/core/rtnetlink.c:4011
2 locks held by syz-executor/6338:
 #0: ffffffff8f45c540 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_lock_acquire include/linux/rcupdate.h:337 [inline]
 #0: ffffffff8f45c540 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_read_lock include/linux/rcupdate.h:849 [inline]
 #0: ffffffff8f45c540 (&ops->srcu#2){.+.+}-{0:0}, at: rtnl_link_ops_get+0x22/0x250 net/core/rtnetlink.c:555
 #1: ffffffff8fcb2f48 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:79 [inline]
 #1: ffffffff8fcb2f48 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:326 [inline]
 #1: ffffffff8fcb2f48 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0xce2/0x2210 net/core/rtnetlink.c:4011
1 lock held by syz-executor/6391:
 #0: ffffffff8fcb2f48 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:79 [inline]
 #0: ffffffff8fcb2f48 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:326 [inline]
 #0: ffffffff8fcb2f48 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0xce2/0x2210 net/core/rtnetlink.c:4011

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 30 Comm: khungtaskd Not tainted 6.13.0-rc7-syzkaller-00043-g619f0b6fad52 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 12/27/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x49c/0x4d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x198/0x320 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:162 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:234 [inline]
 watchdog+0xff6/0x1040 kernel/hung_task.c:397
 kthread+0x2f2/0x390 kernel/kthread.c:389
 ret_from_fork+0x4d/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 5887 Comm: kworker/0:7 Not tainted 6.13.0-rc7-syzkaller-00043-g619f0b6fad52 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 12/27/2024
Workqueue: events_power_efficient neigh_periodic_work
RIP: 0010:deref_stack_reg arch/x86/kernel/unwind_orc.c:406 [inline]
RIP: 0010:unwind_next_frame+0xcb1/0x22d0 arch/x86/kernel/unwind_orc.c:585
Code: 0c 00 00 4c 39 f0 0f 87 74 0c 00 00 4c 89 ef e8 a5 22 00 00 49 89 c6 48 bd 00 00 00 00 00 fc ff df 48 8b 44 24 30 80 3c 28 00 <48> 8b 5c 24 18 74 08 48 89 df e8 20 28 ba 00 4c 89 33 48 8b 44 24
RSP: 0018:ffffc90000007130 EFLAGS: 00000046
RAX: 1ffff92000000e49 RBX: ffffc90000007210 RCX: 1ffff92000000e40
RDX: ffffffff90a76704 RSI: 0000000000000002 RDI: ffffc90000007dd0
RBP: dffffc0000000000 R08: 0000000000000001 R09: ffffc900000072f0
R10: dffffc0000000000 R11: ffffffff818b4af0 R12: ffffc90000008000
R13: ffffc90000007dd0 R14: ffffffff818d143a R15: 1ffff92000000e42
FS:  0000000000000000(0000) GS:ffff8880b8600000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000000000000 CR3: 000000000e736000 CR4: 0000000000350ef0
Call Trace:
 <NMI>
 </NMI>
 <IRQ>
 arch_stack_walk+0x11c/0x150 arch/x86/kernel/stacktrace.c:25
 stack_trace_save+0x118/0x1d0 kernel/stacktrace.c:122
 kasan_save_stack mm/kasan/common.c:47 [inline]
 kasan_save_track+0x3f/0x80 mm/kasan/common.c:68
 poison_kmalloc_redzone mm/kasan/common.c:377 [inline]
 __kasan_kmalloc+0x98/0xb0 mm/kasan/common.c:394
 kasan_kmalloc include/linux/kasan.h:260 [inline]
 __kmalloc_cache_noprof+0x243/0x390 mm/slub.c:4329
 kmalloc_noprof include/linux/slab.h:901 [inline]
 dummy_urb_enqueue+0x7d/0x780 drivers/usb/gadget/udc/dummy_hcd.c:1272
 usb_hcd_submit_urb+0x36e/0x1e80 drivers/usb/core/hcd.c:1533
 ath9k_hif_usb_reg_in_cb+0x4ce/0x6e0 drivers/net/wireless/ath/ath9k/hif_usb.c:790
 __usb_hcd_giveback_urb+0x42e/0x6e0 drivers/usb/core/hcd.c:1650
 dummy_timer+0x856/0x4620 drivers/usb/gadget/udc/dummy_hcd.c:1993
 __run_hrtimer kernel/time/hrtimer.c:1739 [inline]
 __hrtimer_run_queues+0x59d/0xd30 kernel/time/hrtimer.c:1803
 hrtimer_run_softirq+0x19a/0x2c0 kernel/time/hrtimer.c:1820
 handle_softirqs+0x2d6/0x9b0 kernel/softirq.c:561
 __do_softirq kernel/softirq.c:595 [inline]
 invoke_softirq kernel/softirq.c:435 [inline]
 __irq_exit_rcu+0xf7/0x220 kernel/softirq.c:662
 irq_exit_rcu+0x9/0x30 kernel/softirq.c:678
 instr_sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1049 [inline]
 sysvec_apic_timer_interrupt+0xa6/0xc0 arch/x86/kernel/apic/apic.c:1049
 </IRQ>
 <TASK>
 asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:702
RIP: 0010:lock_is_held_type+0x13b/0x190
Code: 75 44 48 c7 04 24 00 00 00 00 9c 8f 04 24 f7 04 24 00 02 00 00 75 4c 41 f7 c4 00 02 00 00 74 01 fb 65 48 8b 04 25 28 00 00 00 <48> 3b 44 24 08 75 42 89 d8 48 83 c4 10 5b 41 5c 41 5d 41 5e 41 5f
RSP: 0018:ffffc90003b6f9d8 EFLAGS: 00000206
RAX: 87e9bcc64e570b00 RBX: 0000000000000000 RCX: ffff88802ecf3c00
RDX: 0000000000000000 RSI: ffffffff8c0aaae0 RDI: ffffffff8c5fb220
RBP: 0000000000000002 R08: ffffffff942a5947 R09: 1ffffffff2854b28
R10: dffffc0000000000 R11: fffffbfff2854b29 R12: 0000000000000246
R13: ffff88802ecf3c00 R14: 00000000ffffffff R15: ffffffff8e937b40
 lock_is_held include/linux/lockdep.h:249 [inline]
 __might_resched+0xa5/0x780 kernel/sched/core.c:8720
 neigh_periodic_work+0xbde/0xde0 net/core/neighbour.c:969
 process_one_work kernel/workqueue.c:3236 [inline]
 process_scheduled_works+0xa68/0x1840 kernel/workqueue.c:3317
 worker_thread+0x870/0xd30 kernel/workqueue.c:3398
 kthread+0x2f2/0x390 kernel/kthread.c:389
 ret_from_fork+0x4d/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>

Crashes (270):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets (help?) Manager Title
2025/01/15 22:47 upstream 619f0b6fad52 968edaf4 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in __lru_add_drain_all
2024/12/18 21:45 upstream c061cf420ded 1432fc84 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in __lru_add_drain_all
2024/12/18 06:30 upstream 59dbb9d81adf a0626d3a .config console log report syz / log [disk image] [vmlinux] [kernel image] [mounted in repro #1] [mounted in repro #2] [mounted in repro #3] [mounted in repro #4] [mounted in repro #5] [mounted in repro #6] ci2-upstream-fs INFO: task hung in __lru_add_drain_all
2024/12/14 02:41 upstream f932fb9b4074 7cbfbb3a .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in __lru_add_drain_all
2024/10/28 16:48 upstream 819837584309 9efb3cc7 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in __lru_add_drain_all
2024/10/26 16:34 upstream 850925a8133c 65e8686b .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in __lru_add_drain_all
2024/10/09 12:51 upstream 75b607fab38d 402f1df0 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in __lru_add_drain_all
2024/10/08 18:31 upstream 87d6aab2389e 402f1df0 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in __lru_add_drain_all
2024/09/18 10:44 upstream a430d95c5efa c673ca06 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in __lru_add_drain_all
2024/09/17 23:19 upstream 2f27fce67173 c673ca06 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in __lru_add_drain_all
2024/09/17 21:54 upstream 2f27fce67173 c673ca06 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in __lru_add_drain_all
2024/09/15 22:27 upstream d42f7708e27c 08d8a733 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in __lru_add_drain_all
2024/09/12 04:13 upstream 7c6a3a65ace7 d94c83d8 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in __lru_add_drain_all
2024/09/10 18:39 upstream 8d8d276ba2fb 784df80e .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in __lru_add_drain_all
2024/09/08 09:29 upstream d1f2d51b711a 9750182a .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in __lru_add_drain_all
2024/08/28 06:32 upstream 3ec3f5fc4a91 6c853ff9 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in __lru_add_drain_all
2024/08/26 13:17 upstream 5be63fc19fca 9aee4e0b .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in __lru_add_drain_all
2024/08/22 22:09 upstream 872cf28b8df9 295a4b50 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in __lru_add_drain_all
2024/08/22 00:54 upstream 872cf28b8df9 ca02180f .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in __lru_add_drain_all
2024/08/21 13:16 upstream b311c1b497e5 db5852f9 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in __lru_add_drain_all
2024/08/13 20:51 upstream 6b4aa469f049 f21a18ca .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in __lru_add_drain_all
2024/08/11 22:06 upstream cb2e5ee8e7a0 6f4edef4 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in __lru_add_drain_all
2024/08/10 21:27 upstream 5189dafa4cf9 6f4edef4 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in __lru_add_drain_all
2024/08/08 22:41 upstream cf6d429eb656 61405512 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in __lru_add_drain_all
2024/08/08 09:47 upstream 6a0e38264012 de12cf65 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in __lru_add_drain_all
2024/08/08 03:51 upstream 6a0e38264012 7b2f2f35 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in __lru_add_drain_all
2024/08/07 08:59 upstream d4560686726f e1bdb00a .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in __lru_add_drain_all
2024/08/06 06:16 upstream b446a2dae984 e1bdb00a .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in __lru_add_drain_all
2024/08/05 12:31 upstream de9c2c66ad8e e35c337f .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in __lru_add_drain_all
2024/08/05 00:46 upstream a5dbd76a8942 1786a2a8 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in __lru_add_drain_all
2024/08/04 15:12 upstream defaf1a2113a 1786a2a8 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in __lru_add_drain_all
2024/08/03 15:26 upstream 17712b7ea075 1786a2a8 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in __lru_add_drain_all
2024/08/03 12:17 upstream 17712b7ea075 1786a2a8 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in __lru_add_drain_all
2024/08/02 18:20 upstream c0ecd6388360 53683cf2 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in __lru_add_drain_all
2024/08/02 12:15 upstream c0ecd6388360 1e9c4cf3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in __lru_add_drain_all
2024/08/01 19:22 upstream 21b136cc63d2 1e9c4cf3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in __lru_add_drain_all
2024/08/01 11:09 upstream 21b136cc63d2 1e9c4cf3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in __lru_add_drain_all
2024/08/01 01:52 upstream e4fc196f5ba3 1e9c4cf3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in __lru_add_drain_all
2024/07/31 10:55 upstream 22f546873149 6fde257d .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in __lru_add_drain_all
2024/07/31 07:46 upstream 22f546873149 6fde257d .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in __lru_add_drain_all
2024/07/30 08:30 upstream 94ede2a3e913 a4e01e1e .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in __lru_add_drain_all
2024/07/30 05:42 upstream 94ede2a3e913 5187fc86 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in __lru_add_drain_all
2024/07/29 11:41 upstream dc1c8034e31b 5187fc86 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in __lru_add_drain_all
2024/07/28 15:30 upstream 6342649c33d2 46eb10b7 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in __lru_add_drain_all
2024/07/28 09:04 upstream 5437f30d3458 46eb10b7 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in __lru_add_drain_all
2024/07/28 04:49 upstream 6342649c33d2 46eb10b7 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in __lru_add_drain_all
2024/07/27 16:51 upstream 3a7e02c040b1 46eb10b7 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in __lru_add_drain_all
2024/07/27 11:38 upstream 2f8c4f506285 46eb10b7 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: task hung in __lru_add_drain_all
2024/07/26 21:10 upstream 2f8c4f506285 3f86dfed .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in __lru_add_drain_all
2024/07/25 19:57 upstream c33ffdb70cc6 32fcf98f .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in __lru_add_drain_all
2024/07/25 05:02 upstream c33ffdb70cc6 b24754ac .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in __lru_add_drain_all
2024/05/15 01:57 upstream 1b10b390d945 fdb4c10c .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in __lru_add_drain_all
2024/05/13 22:20 upstream cd97950cbcab fdb4c10c .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in __lru_add_drain_all
2024/05/07 05:23 upstream ee5b455b0ada c035c6de .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in __lru_add_drain_all
2024/09/20 15:53 upstream 2004cef11ea0 6f888b75 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-386 INFO: task hung in __lru_add_drain_all
2024/08/17 22:43 upstream e5fa841af679 dbc93b08 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-386 INFO: task hung in __lru_add_drain_all
2024/07/28 01:00 upstream 3a7e02c040b1 46eb10b7 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-386 INFO: task hung in __lru_add_drain_all
2024/07/25 17:39 upstream c33ffdb70cc6 32fcf98f .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-386 INFO: task hung in __lru_add_drain_all
2024/07/17 17:13 linux-next 797012914d2d 03114f55 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in __lru_add_drain_all
* Struck through repros no longer work on HEAD.