syzbot


INFO: task hung in jfs_flush_journal (4)

Status: upstream: reported on 2024/09/19 03:31
Subsystems: jfs
Reported-by: syzbot+8ab0d983d2bc3b69ea23@syzkaller.appspotmail.com
First crash: 562d, last: 21d
Discussions (1)
Title:      [syzbot] [jfs?] INFO: task hung in jfs_flush_journal (4)
Replies (including bot): 0 (1)
Last reply: 2024/09/19 03:31
Similar bugs (6)
Kernel      Title                                           Count  Last  Reported  Patched  Status
upstream    INFO: task hung in jfs_flush_journal [jfs]          1  891d      891d     0/28  auto-obsoleted due to no activity on 2023/01/17 08:27
upstream    INFO: task hung in jfs_flush_journal (3) [jfs]      4  657d      702d     0/28  auto-obsoleted due to no activity on 2023/09/08 02:21
upstream    INFO: task hung in jfs_flush_journal (2) [jfs]      1  792d      792d     0/28  auto-obsoleted due to no activity on 2023/04/25 22:54
linux-6.1   INFO: task hung in jfs_flush_journal                1  226d      226d      0/3  auto-obsoleted due to no activity on 2024/11/22 04:48
linux-4.19  INFO: task hung in jfs_flush_journal [jfs]          1  803d      803d      0/1  upstream: reported on 2023/01/14 13:39
linux-5.15  INFO: task hung in jfs_flush_journal                1  671d      671d      0/3  auto-obsoleted due to no activity on 2023/09/03 18:59
(No reproducer, cause bisection, or fix bisection exists for any of these bugs.)

Sample crash report:
INFO: task kworker/u8:1:12 blocked for more than 143 seconds.
      Not tainted 6.14.0-rc5-syzkaller-00109-g0f52fd4f67c6 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/u8:1    state:D stack:19864 pid:12    tgid:12    ppid:2      task_flags:0x4208060 flags:0x00004000
Workqueue: writeback wb_workfn (flush-7:2)
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5378 [inline]
 __schedule+0x18bc/0x4c40 kernel/sched/core.c:6765
 __schedule_loop kernel/sched/core.c:6842 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6857
 jfs_flush_journal+0x72c/0xec0 fs/jfs/jfs_logmgr.c:1564
 jfs_write_inode+0x12d/0x220 fs/jfs/inode.c:128
 write_inode fs/fs-writeback.c:1525 [inline]
 __writeback_single_inode+0x708/0x10d0 fs/fs-writeback.c:1745
 writeback_sb_inodes+0x820/0x1360 fs/fs-writeback.c:1976
 wb_writeback+0x413/0xb80 fs/fs-writeback.c:2156
 wb_do_writeback fs/fs-writeback.c:2303 [inline]
 wb_workfn+0x410/0x1080 fs/fs-writeback.c:2343
 process_one_work kernel/workqueue.c:3238 [inline]
 process_scheduled_works+0xabe/0x18e0 kernel/workqueue.c:3319
 worker_thread+0x870/0xd30 kernel/workqueue.c:3400
 kthread+0x7a9/0x920 kernel/kthread.c:464
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
INFO: task jfsCommit:113 blocked for more than 144 seconds.
      Not tainted 6.14.0-rc5-syzkaller-00109-g0f52fd4f67c6 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:jfsCommit       state:D stack:27224 pid:113   tgid:113   ppid:2      task_flags:0x200040 flags:0x00004000
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5378 [inline]
 __schedule+0x18bc/0x4c40 kernel/sched/core.c:6765
 __schedule_loop kernel/sched/core.c:6842 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6857
 io_schedule+0x8d/0x110 kernel/sched/core.c:7690
 __lock_metapage fs/jfs/jfs_metapage.c:51 [inline]
 lock_metapage+0x26a/0x450 fs/jfs/jfs_metapage.c:65
 __get_metapage+0x57c/0xdc0 fs/jfs/jfs_metapage.c:640
 diIAGRead+0xcb/0x140 fs/jfs/jfs_imap.c:2672
 diFree+0xa7e/0x2fb0 fs/jfs/jfs_imap.c:959
 jfs_evict_inode+0x32d/0x440 fs/jfs/inode.c:156
 evict+0x4e8/0x9a0 fs/inode.c:796
 txUpdateMap+0x931/0xb10 fs/jfs/jfs_txnmgr.c:2367
 txLazyCommit fs/jfs/jfs_txnmgr.c:2664 [inline]
 jfs_lazycommit+0x49a/0xb80 fs/jfs/jfs_txnmgr.c:2733
 kthread+0x7a9/0x920 kernel/kthread.c:464
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
INFO: task syz.2.227:7974 blocked for more than 144 seconds.
      Not tainted 6.14.0-rc5-syzkaller-00109-g0f52fd4f67c6 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.2.227       state:D stack:22808 pid:7974  tgid:7973  ppid:5827   task_flags:0x400140 flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5378 [inline]
 __schedule+0x18bc/0x4c40 kernel/sched/core.c:6765
 __schedule_loop kernel/sched/core.c:6842 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6857
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6914
 __mutex_lock_common kernel/locking/mutex.c:662 [inline]
 __mutex_lock+0x817/0x1010 kernel/locking/mutex.c:730
 diAlloc+0x75a/0x1630 fs/jfs/jfs_imap.c:1385
 ialloc+0x8f/0x8c0 fs/jfs/jfs_inode.c:56
 jfs_create+0x1be/0xbb0 fs/jfs/namei.c:92
 lookup_open fs/namei.c:3651 [inline]
 open_last_lookups fs/namei.c:3750 [inline]
 path_openat+0x193c/0x3590 fs/namei.c:3986
 do_filp_open+0x27f/0x4e0 fs/namei.c:4016
 do_sys_openat2+0x13e/0x1d0 fs/open.c:1428
 do_sys_open fs/open.c:1443 [inline]
 __do_sys_open fs/open.c:1451 [inline]
 __se_sys_open fs/open.c:1447 [inline]
 __x64_sys_open+0x225/0x270 fs/open.c:1447
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fdc6738d169
RSP: 002b:00007fdc68245038 EFLAGS: 00000246 ORIG_RAX: 0000000000000002
RAX: ffffffffffffffda RBX: 00007fdc675a5fa0 RCX: 00007fdc6738d169
RDX: 0000000000000003 RSI: 000000000014907e RDI: 00004000000087c0
RBP: 00007fdc6740e2a0 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007fdc675a5fa0 R15: 00007ffcdc69b7c8
 </TASK>
INFO: task syz.2.227:7992 blocked for more than 144 seconds.
      Not tainted 6.14.0-rc5-syzkaller-00109-g0f52fd4f67c6 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.2.227       state:D stack:27552 pid:7992  tgid:7973  ppid:5827   task_flags:0x400040 flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5378 [inline]
 __schedule+0x18bc/0x4c40 kernel/sched/core.c:6765
 __schedule_loop kernel/sched/core.c:6842 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6857
 wb_wait_for_completion+0x166/0x290 fs/fs-writeback.c:216
 sync_inodes_sb+0x28d/0xb50 fs/fs-writeback.c:2821
 iterate_supers+0xc6/0x190 fs/super.c:934
 ksys_sync+0xbd/0x1c0 fs/sync.c:102
 __do_sys_sync+0xe/0x20 fs/sync.c:113
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fdc6738d169
RSP: 002b:00007fdc68224038 EFLAGS: 00000246 ORIG_RAX: 00000000000000a2
RAX: ffffffffffffffda RBX: 00007fdc675a6080 RCX: 00007fdc6738d169
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
RBP: 00007fdc675a6080 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000001 R14: 00007fdc675a6080 R15: 00007ffcdc69b7c8
 </TASK>
INFO: task syz.2.227:8000 blocked for more than 145 seconds.
      Not tainted 6.14.0-rc5-syzkaller-00109-g0f52fd4f67c6 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.2.227       state:D stack:26528 pid:8000  tgid:7973  ppid:5827   task_flags:0x400040 flags:0x00000004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5378 [inline]
 __schedule+0x18bc/0x4c40 kernel/sched/core.c:6765
 __schedule_loop kernel/sched/core.c:6842 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6857
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6914
 rwsem_down_read_slowpath kernel/locking/rwsem.c:1084 [inline]
 __down_read_common kernel/locking/rwsem.c:1248 [inline]
 __down_read kernel/locking/rwsem.c:1261 [inline]
 down_read+0x705/0xa40 kernel/locking/rwsem.c:1526
 inode_lock_shared include/linux/fs.h:887 [inline]
 lookup_slow+0x45/0x70 fs/namei.c:1809
 walk_component fs/namei.c:2114 [inline]
 link_path_walk+0x99b/0xea0 fs/namei.c:2479
 path_openat+0x266/0x3590 fs/namei.c:3985
 do_filp_open+0x27f/0x4e0 fs/namei.c:4016
 do_sys_openat2+0x13e/0x1d0 fs/open.c:1428
 do_sys_open fs/open.c:1443 [inline]
 __do_sys_openat fs/open.c:1459 [inline]
 __se_sys_openat fs/open.c:1454 [inline]
 __x64_sys_openat+0x247/0x2a0 fs/open.c:1454
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fdc6738d169
RSP: 002b:00007fdc68203038 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 00007fdc675a6160 RCX: 00007fdc6738d169
RDX: 0000000000000802 RSI: 0000400000000200 RDI: ffffffffffffff9c
RBP: 00007fdc6740e2a0 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007fdc675a6160 R15: 00007ffcdc69b7c8
 </TASK>

Showing all locks held in the system:
2 locks held by kworker/u8:1/12:
 #0: ffff88801e289148 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88801e289148 ((wq_completion)writeback){+.+.}-{0:0}, at: process_scheduled_works+0x98b/0x18e0 kernel/workqueue.c:3319
 #1: ffffc90000117c60 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc90000117c60 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x9c6/0x18e0 kernel/workqueue.c:3319
1 lock held by khungtaskd/30:
 #0: ffffffff8eb392e0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:337 [inline]
 #0: ffffffff8eb392e0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:849 [inline]
 #0: ffffffff8eb392e0 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x55/0x2a0 kernel/locking/lockdep.c:6746
3 locks held by kworker/u8:3/51:
2 locks held by jfsCommit/113:
 #0: ffff888077318920 (&(imap->im_aglock[index])){+.+.}-{4:4}, at: diFree+0x37c/0x2fb0 fs/jfs/jfs_imap.c:889
 #1: ffff8880508a4af8 (&jfs_ip->rdwrlock/1){++++}-{4:4}, at: diFree+0x398/0x2fb0 fs/jfs/jfs_imap.c:894
3 locks held by kworker/u8:5/965:
 #0: ffff88801b089148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88801b089148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x98b/0x18e0 kernel/workqueue.c:3319
 #1: ffffc90003997c60 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc90003997c60 ((linkwatch_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x9c6/0x18e0 kernel/workqueue.c:3319
 #2: ffffffff8fec3788 (rtnl_mutex){+.+.}-{4:4}, at: linkwatch_event+0xe/0x60 net/core/link_watch.c:285
3 locks held by kworker/u8:7/2971:
 #0: ffff88814e0cb148 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88814e0cb148 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_scheduled_works+0x98b/0x18e0 kernel/workqueue.c:3319
 #1: ffffc9000bcb7c60 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc9000bcb7c60 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x9c6/0x18e0 kernel/workqueue.c:3319
 #2: ffffffff8fec3788 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:129 [inline]
 #2: ffffffff8fec3788 (rtnl_mutex){+.+.}-{4:4}, at: addrconf_dad_work+0x10e/0x16a0 net/ipv6/addrconf.c:4190
5 locks held by kworker/u8:8/3519:
 #0: ffff88801bef3148 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88801bef3148 ((wq_completion)netns){+.+.}-{0:0}, at: process_scheduled_works+0x98b/0x18e0 kernel/workqueue.c:3319
 #1: ffffc9000cda7c60 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc9000cda7c60 (net_cleanup_work){+.+.}-{0:0}, at: process_scheduled_works+0x9c6/0x18e0 kernel/workqueue.c:3319
 #2: ffffffff8feb6f50 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0x17a/0xd60 net/core/net_namespace.c:606
 #3: ffffffff8fec3788 (rtnl_mutex){+.+.}-{4:4}, at: default_device_exit_batch+0xdc/0x880 net/core/dev.c:12417
 #4: ffffffff8eb3e7b8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock kernel/rcu/tree_exp.h:302 [inline]
 #4: ffffffff8eb3e7b8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: synchronize_rcu_expedited+0x381/0x820 kernel/rcu/tree_exp.h:996
2 locks held by dhcpcd/5490:
 #0: ffffffff8fea86a8 (vlan_ioctl_mutex){+.+.}-{4:4}, at: sock_ioctl+0x661/0x8e0 net/socket.c:1280
 #1: ffffffff8fec3788 (rtnl_mutex){+.+.}-{4:4}, at: vlan_ioctl_handler+0x112/0x9d0 net/8021q/vlan.c:554
2 locks held by getty/5584:
 #0: ffff8880359520a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc90002fde2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x616/0x1770 drivers/tty/n_tty.c:2211
3 locks held by kworker/1:6/5875:
3 locks held by syz.2.227/7974:
 #0: ffff888060792420 (sb_writers#27){.+.+}-{0:0}, at: mnt_want_write+0x3f/0x90 fs/namespace.c:547
 #1: ffff8880508a17e8 (&type->i_mutex_dir_key#14){++++}-{4:4}, at: inode_lock include/linux/fs.h:877 [inline]
 #1: ffff8880508a17e8 (&type->i_mutex_dir_key#14){++++}-{4:4}, at: open_last_lookups fs/namei.c:3747 [inline]
 #1: ffff8880508a17e8 (&type->i_mutex_dir_key#14){++++}-{4:4}, at: path_openat+0x89a/0x3590 fs/namei.c:3986
 #2: ffff888077318920 (&(imap->im_aglock[index])){+.+.}-{4:4}, at: diAlloc+0x75a/0x1630 fs/jfs/jfs_imap.c:1385
2 locks held by syz.2.227/7992:
 #0: ffff8880607920e0 (&type->s_umount_key#80){++++}-{4:4}, at: __super_lock fs/super.c:58 [inline]
 #0: ffff8880607920e0 (&type->s_umount_key#80){++++}-{4:4}, at: super_lock+0x27c/0x400 fs/super.c:120
 #1: ffff88802555e7d0 (&bdi->wb_switch_rwsem){+.+.}-{4:4}, at: bdi_down_write_wb_switch_rwsem fs/fs-writeback.c:387 [inline]
 #1: ffff88802555e7d0 (&bdi->wb_switch_rwsem){+.+.}-{4:4}, at: sync_inodes_sb+0x26e/0xb50 fs/fs-writeback.c:2819
1 lock held by syz.2.227/8000:
 #0: ffff8880508a17e8 (&type->i_mutex_dir_key#14){++++}-{4:4}, at: inode_lock_shared include/linux/fs.h:887 [inline]
 #0: ffff8880508a17e8 (&type->i_mutex_dir_key#14){++++}-{4:4}, at: lookup_slow+0x45/0x70 fs/namei.c:1809
2 locks held by syz.3.329/9452:
 #0: ffff8880607920e0 (&type->s_umount_key#80){++++}-{4:4}, at: __super_lock fs/super.c:58 [inline]
 #0: ffff8880607920e0 (&type->s_umount_key#80){++++}-{4:4}, at: super_lock+0x27c/0x400 fs/super.c:120
 #1: ffff88802555e7d0 (&bdi->wb_switch_rwsem){+.+.}-{4:4}, at: bdi_down_write_wb_switch_rwsem fs/fs-writeback.c:387 [inline]
 #1: ffff88802555e7d0 (&bdi->wb_switch_rwsem){+.+.}-{4:4}, at: sync_inodes_sb+0x26e/0xb50 fs/fs-writeback.c:2819
2 locks held by syz.3.329/9500:
 #0: ffff8880607920e0 (&type->s_umount_key#80){++++}-{4:4}, at: __super_lock fs/super.c:58 [inline]
 #0: ffff8880607920e0 (&type->s_umount_key#80){++++}-{4:4}, at: super_lock+0x27c/0x400 fs/super.c:120
 #1: ffff88802555e7d0 (&bdi->wb_switch_rwsem){+.+.}-{4:4}, at: bdi_down_write_wb_switch_rwsem fs/fs-writeback.c:387 [inline]
 #1: ffff88802555e7d0 (&bdi->wb_switch_rwsem){+.+.}-{4:4}, at: sync_inodes_sb+0x26e/0xb50 fs/fs-writeback.c:2819
1 lock held by syz-executor/10492:
 #0: ffff88802a80e0e0 (&type->s_umount_key#80){++++}-{4:4}, at: __super_lock fs/super.c:56 [inline]
 #0: ffff88802a80e0e0 (&type->s_umount_key#80){++++}-{4:4}, at: __super_lock_excl fs/super.c:71 [inline]
 #0: ffff88802a80e0e0 (&type->s_umount_key#80){++++}-{4:4}, at: deactivate_super+0xb5/0xf0 fs/super.c:505
4 locks held by kworker/u8:10/10838:
 #0: ffff8880b873e7d8 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x2a/0x140 kernel/sched/core.c:598
 #1: ffff8880b8728948 (&per_cpu_ptr(group->pcpu, cpu)->seq){-.-.}-{0:0}, at: psi_task_switch+0x41d/0x7a0 kernel/sched/psi.c:987
 #2: ffff88805a980768 (&rdev->wiphy.mtx){+.+.}-{4:4}, at: class_wiphy_constructor include/net/cfg80211.h:6061 [inline]
 #2: ffff88805a980768 (&rdev->wiphy.mtx){+.+.}-{4:4}, at: cfg80211_wiphy_work+0xcf/0x490 net/wireless/core.c:421
 #3: ffffffff8eb392e0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:337 [inline]
 #3: ffffffff8eb392e0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:849 [inline]
 #3: ffffffff8eb392e0 (rcu_read_lock){....}-{1:3}, at: ieee80211_sta_active_ibss+0xc7/0x330 net/mac80211/ibss.c:643
2 locks held by syz-executor/10884:
 #0: ffffffff8f639fc0 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_lock_acquire include/linux/rcupdate.h:337 [inline]
 #0: ffffffff8f639fc0 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_read_lock include/linux/rcupdate.h:849 [inline]
 #0: ffffffff8f639fc0 (&ops->srcu#2){.+.+}-{0:0}, at: rtnl_link_ops_get+0x22/0x250 net/core/rtnetlink.c:564
 #1: ffffffff8fec3788 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:79 [inline]
 #1: ffffffff8fec3788 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:335 [inline]
 #1: ffffffff8fec3788 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0xc55/0x1d30 net/core/rtnetlink.c:4021

=============================================

NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 30 Comm: khungtaskd Not tainted 6.14.0-rc5-syzkaller-00109-g0f52fd4f67c6 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2025
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x49c/0x4d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x198/0x320 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:162 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:236 [inline]
 watchdog+0x1058/0x10a0 kernel/hung_task.c:399
 kthread+0x7a9/0x920 kernel/kthread.c:464
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 10838 Comm: kworker/u8:10 Not tainted 6.14.0-rc5-syzkaller-00109-g0f52fd4f67c6 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2025
Workqueue: bat_events batadv_iv_send_outstanding_bat_ogm_packet
RIP: 0010:skb_assert_len include/linux/skbuff.h:2688 [inline]
RIP: 0010:__dev_queue_xmit+0x221/0x3f50 net/core/dev.c:4566
Code: 66 44 89 ab ba 00 00 00 48 83 c3 70 48 89 d8 48 c1 e8 03 48 89 44 24 40 42 0f b6 04 30 84 c0 0f 85 c7 22 00 00 48 89 5c 24 70 <8b> 1b 31 ff 89 de e8 84 0d ff f7 85 db 74 07 e8 3b 09 ff f7 eb 24
RSP: 0018:ffffc90003edf6e0 EFLAGS: 00000246
RAX: 0000000000000000 RBX: ffff88805e265570 RCX: ffff88802ed55a00
RDX: 0000000000000000 RSI: 0000000000000040 RDI: ffff88805e2655ba
RBP: ffffc90003edf9d8 R08: ffffffff89c2c49b R09: 1ffff1100ceeeafb
R10: dffffc0000000000 R11: ffffed100ceeeafc R12: 1ffff1100bc4caa2
R13: 0000000000000040 R14: dffffc0000000000 R15: ffff88805e265510
FS:  0000000000000000(0000) GS:ffff8880b8700000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fb68aedd9b8 CR3: 000000000e938000 CR4: 00000000003526f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <NMI>
 </NMI>
 <TASK>
 dev_queue_xmit include/linux/netdevice.h:3313 [inline]
 batadv_send_skb_packet+0x42b/0x690 net/batman-adv/send.c:108
 batadv_iv_ogm_send_to_if net/batman-adv/bat_iv_ogm.c:392 [inline]
 batadv_iv_ogm_emit net/batman-adv/bat_iv_ogm.c:420 [inline]
 batadv_iv_send_outstanding_bat_ogm_packet+0x673/0x810 net/batman-adv/bat_iv_ogm.c:1700
 process_one_work kernel/workqueue.c:3238 [inline]
 process_scheduled_works+0xabe/0x18e0 kernel/workqueue.c:3319
 worker_thread+0x870/0xd30 kernel/workqueue.c:3400
 kthread+0x7a9/0x920 kernel/kthread.c:464
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
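
The "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" hint printed in the report refers to the hung-task watchdog's standard sysctl knobs. A minimal sketch of inspecting and tuning them (the paths are the stock kernel interface from Documentation/admin-guide/sysctl/kernel.rst; the values shown are illustrative, not taken from this report):

```shell
# Threshold, in seconds, after which a task stuck in uninterruptible
# (D) state is reported; 120 by default on common configs.
cat /proc/sys/kernel/hung_task_timeout_secs

# Disable the "task blocked for more than N seconds" warnings entirely,
# as the message in the report suggests.
echo 0 > /proc/sys/kernel/hung_task_timeout_secs

# Alternatively, make a detected hang panic the machine so a fuzzing or
# CI run fails fast instead of limping along with a wedged filesystem.
echo 1 > /proc/sys/kernel/hung_task_panic
```

These knobs only control detection and reporting; they do not address the underlying jfs journal hang.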

Crashes (41):
Time              Kernel      Commit        Syzkaller  Manager
2025/03/07 02:58  upstream    0f52fd4f67c6  831e3629   ci2-upstream-fs
2025/02/27 04:56  upstream    5394eea10651  6a8fcbc4   ci2-upstream-fs
2025/02/25 08:18  upstream    d082ecbc71e9  d34966d1   ci2-upstream-fs
2025/02/17 20:49  upstream    0ad2507d5d93  4121cf9d   ci2-upstream-fs
2025/02/12 22:06  upstream    09fbf3d50205  b27c2402   ci2-upstream-fs
2024/12/29 15:34  upstream    059dd502b263  d3ccff63   ci2-upstream-fs
2024/12/21 13:45  upstream    499551201b5f  d7f584ee   ci2-upstream-fs
2024/11/29 23:02  upstream    509f806f7f70  5df23865   ci2-upstream-fs
2024/10/07 18:26  upstream    8cf0b93919e1  d7906eff   ci2-upstream-fs
2024/09/24 23:12  upstream    97d8894b6f4c  5643e0e9   ci2-upstream-fs
2024/09/15 03:12  upstream    0babf683783d  08d8a733   ci2-upstream-fs
2024/09/11 21:49  upstream    7c6a3a65ace7  d94c83d8   ci2-upstream-fs
2024/08/26 12:16  upstream    5be63fc19fca  d7d32352   ci2-upstream-fs
2024/08/24 08:27  upstream    60f0560f53e3  d7d32352   ci2-upstream-fs
2024/08/23 03:07  upstream    aa0743a22936  ce8a9099   ci2-upstream-fs
2024/08/15 00:36  upstream    9d5906799f7d  e4bacdaf   ci-upstream-kasan-gce-root
2024/08/13 12:32  upstream    d74da846046a  f21a18ca   ci-upstream-kasan-gce-root
2024/08/05 14:20  upstream    de9c2c66ad8e  e35c337f   ci-upstream-kasan-gce-root
2024/07/04 16:00  upstream    795c58e4c7fc  dc6bbff0   ci-upstream-kasan-gce-root
2024/06/23 03:42  upstream    5f583a3162ff  edc5149a   ci-upstream-kasan-gce-smack-root
2024/06/22 23:22  upstream    35bb670d65fc  edc5149a   ci-upstream-kasan-gce-smack-root
2024/06/09 15:59  upstream    771ed66105de  82c05ab8   ci-upstream-kasan-gce-root
2024/06/05 02:11  upstream    32f88d65f01b  e1e2c66e   ci-upstream-kasan-gce-smack-root
2024/05/22 14:47  upstream    8f6a15f095a6  4d098039   ci-upstream-kasan-gce-smack-root
2024/05/11 17:33  upstream    cf87f46fd34d  9026e142   ci2-upstream-fs
2024/05/09 02:01  upstream    6d7ddd805123  20bf80e1   ci-upstream-kasan-gce-root
2024/05/03 20:57  upstream    f03359bca01b  dd26401e   ci2-upstream-fs
2024/04/29 12:21  upstream    e67572cd2204  27e33c58   ci2-upstream-fs
2024/04/27 03:52  upstream    5eb4573ea63d  07b455f9   ci2-upstream-fs
2024/04/25 11:10  upstream    e88c4cfcb7b8  8bdc0f22   ci2-upstream-fs
2024/02/07 17:40  upstream    6d280f4d760e  6404acf9   ci2-upstream-fs
2024/02/05 20:50  upstream    54be6c6c5ae8  e23e8c20   ci2-upstream-fs
2024/01/23 07:44  upstream    5d9248eed480  1c0ecc51   ci2-upstream-fs
2023/12/12 00:02  upstream    a39b6ac3781d  28b24332   ci-upstream-kasan-gce-root
2023/12/07 11:57  upstream    bee0e7762ad2  0a02ce36   ci2-upstream-fs
2023/10/02 18:44  upstream    8a749fd1a872  50b20e75   ci2-upstream-fs
2023/09/13 02:15  upstream    a747acc0b752  59da8366   ci-upstream-kasan-gce-smack-root
2024/12/02 08:03  linux-next  f486c8aa16b8  68914665   ci-upstream-linux-next-kasan-gce-root
2024/06/22 22:47  linux-next  f76698bd9a8c  edc5149a   ci-upstream-linux-next-kasan-gce-root
2024/06/05 21:21  linux-next  234cb065ad82  121701b6   ci-upstream-linux-next-kasan-gce-root
2024/04/14 11:14  linux-next  9ed46da14b9b  c8349e48   ci-upstream-linux-next-kasan-gce-root
Every crash is titled "INFO: task hung in jfs_flush_journal"; each row links a .config, console log, report, info page, disk image, vmlinux, and kernel image on the dashboard. No syz or C reproducer is available for any crash.