syzbot


INFO: task hung in jfs_flush_journal (4)

Status: upstream: reported on 2024/09/19 03:31
Subsystems: jfs
Reported-by: syzbot+8ab0d983d2bc3b69ea23@syzkaller.appspotmail.com
First crash: 522d, last: 3d14h
Discussions (1)
[syzbot] [jfs?] INFO: task hung in jfs_flush_journal (4): 0 replies (1 including bot), last reply 2024/09/19 03:31
Similar bugs (6)
upstream: INFO: task hung in jfs_flush_journal [jfs]; 1 crash; last 851d, reported 851d; patched 0/28; auto-obsoleted due to no activity on 2023/01/17 08:27
upstream: INFO: task hung in jfs_flush_journal (3) [jfs]; 4 crashes; last 617d, reported 662d; patched 0/28; auto-obsoleted due to no activity on 2023/09/08 02:21
upstream: INFO: task hung in jfs_flush_journal (2) [jfs]; 1 crash; last 752d, reported 752d; patched 0/28; auto-obsoleted due to no activity on 2023/04/25 22:54
linux-6.1: INFO: task hung in jfs_flush_journal; 1 crash; last 186d, reported 186d; patched 0/3; auto-obsoleted due to no activity on 2024/11/22 04:48
linux-4.19: INFO: task hung in jfs_flush_journal [jfs]; 1 crash; last 763d, reported 763d; patched 0/1; upstream: reported on 2023/01/14 13:39
linux-5.15: INFO: task hung in jfs_flush_journal; 1 crash; last 631d, reported 631d; patched 0/3; auto-obsoleted due to no activity on 2023/09/03 18:59

Sample crash report:
INFO: task syz.3.191:7223 blocked for more than 143 seconds.
      Not tainted 6.14.0-rc2-syzkaller-00039-g09fbf3d50205 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.3.191       state:D stack:26240 pid:7223  tgid:7193  ppid:5833   task_flags:0x400140 flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5377 [inline]
 __schedule+0x18bc/0x4c40 kernel/sched/core.c:6764
 __schedule_loop kernel/sched/core.c:6841 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6856
 jfs_flush_journal+0x72c/0xec0 fs/jfs/jfs_logmgr.c:1564
 jfs_sync_fs+0x80/0xa0 fs/jfs/super.c:649
 sync_filesystem+0x1c8/0x230 fs/sync.c:66
 jfs_reconfigure+0xd6/0x9d0 fs/jfs/super.c:370
 reconfigure_super+0x43a/0x870 fs/super.c:1083
 vfs_cmd_reconfigure fs/fsopen.c:262 [inline]
 vfs_fsconfig_locked fs/fsopen.c:291 [inline]
 __do_sys_fsconfig fs/fsopen.c:467 [inline]
 __se_sys_fsconfig+0xb74/0xf60 fs/fsopen.c:344
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fb37258cde9
RSP: 002b:00007fb3703d5038 EFLAGS: 00000246 ORIG_RAX: 00000000000001af
RAX: ffffffffffffffda RBX: 00007fb3727a6160 RCX: 00007fb37258cde9
RDX: 0000000000000000 RSI: 0000000000000007 RDI: 0000000000000004
RBP: 00007fb37260e2a0 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007fb3727a6160 R15: 00007fff5bb59998
 </TASK>

Showing all locks held in the system:
1 lock held by khungtaskd/30:
 #0: ffffffff8e9387e0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:337 [inline]
 #0: ffffffff8e9387e0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:849 [inline]
 #0: ffffffff8e9387e0 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x55/0x2a0 kernel/locking/lockdep.c:6746
2 locks held by kworker/u8:3/55:
3 locks held by kworker/u8:9/3558:
1 lock held by klogd/5187:
2 locks held by getty/5579:
 #0: ffff8880318970a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc9000332b2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x6a6/0x1e00 drivers/tty/n_tty.c:2211
2 locks held by udevd/6618:
2 locks held by syz.3.191/7223:
 #0: ffff88803285ec70 (&fc->uapi_mutex){+.+.}-{4:4}, at: __do_sys_fsconfig fs/fsopen.c:465 [inline]
 #0: ffff88803285ec70 (&fc->uapi_mutex){+.+.}-{4:4}, at: __se_sys_fsconfig+0x9b2/0xf60 fs/fsopen.c:344
 #1: ffff888029d080e0 (&type->s_umount_key#98){++++}-{4:4}, at: vfs_cmd_reconfigure fs/fsopen.c:261 [inline]
 #1: ffff888029d080e0 (&type->s_umount_key#98){++++}-{4:4}, at: vfs_fsconfig_locked fs/fsopen.c:291 [inline]
 #1: ffff888029d080e0 (&type->s_umount_key#98){++++}-{4:4}, at: __do_sys_fsconfig fs/fsopen.c:467 [inline]
 #1: ffff888029d080e0 (&type->s_umount_key#98){++++}-{4:4}, at: __se_sys_fsconfig+0xb6a/0xf60 fs/fsopen.c:344
2 locks held by syz.0.263/7783:
 #0: ffff888079920420 (sb_writers#3){.+.+}-{0:0}, at: direct_splice_actor+0x49/0x220 fs/splice.c:1163
 #1: ffff888029d080e0 (&type->s_umount_key#98){++++}-{4:4}, at: __super_lock fs/super.c:58 [inline]
 #1: ffff888029d080e0 (&type->s_umount_key#98){++++}-{4:4}, at: super_lock+0x27c/0x400 fs/super.c:120
1 lock held by syz.5.297/8115:
 #0: ffff888029d080e0 (&type->s_umount_key#98){++++}-{4:4}, at: __super_lock fs/super.c:58 [inline]
 #0: ffff888029d080e0 (&type->s_umount_key#98){++++}-{4:4}, at: super_lock+0x27c/0x400 fs/super.c:120
2 locks held by syz.4.318/8507:
 #0: ffff88807e6dc420 (sb_writers#3){.+.+}-{0:0}, at: direct_splice_actor+0x49/0x220 fs/splice.c:1163
 #1: ffff888029d080e0 (&type->s_umount_key#98){++++}-{4:4}, at: __super_lock fs/super.c:58 [inline]
 #1: ffff888029d080e0 (&type->s_umount_key#98){++++}-{4:4}, at: super_lock+0x27c/0x400 fs/super.c:120
3 locks held by syz.9.413/9421:
4 locks held by syz.1.412/9429:
2 locks held by syz.8.415/9431:
 #0: ffff888059a840e0 (&type->s_umount_key#77/1){+.+.}-{4:4}, at: alloc_super+0x221/0x9d0 fs/super.c:344
 #1: ffff8880b873e7d8 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x2a/0x140 kernel/sched/core.c:598
2 locks held by dhcpcd-run-hook/9471:

=============================================

NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 30 Comm: khungtaskd Not tainted 6.14.0-rc2-syzkaller-00039-g09fbf3d50205 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 12/27/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x49c/0x4d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x198/0x320 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:162 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:236 [inline]
 watchdog+0x1058/0x10a0 kernel/hung_task.c:399
 kthread+0x7a9/0x920 kernel/kthread.c:464
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 3524 Comm: kworker/u8:8 Not tainted 6.14.0-rc2-syzkaller-00039-g09fbf3d50205 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 12/27/2024
Workqueue: btrfs-endio-meta btrfs_end_bio_work
RIP: 0010:orc_ip arch/x86/kernel/unwind_orc.c:80 [inline]
RIP: 0010:__orc_find arch/x86/kernel/unwind_orc.c:102 [inline]
RIP: 0010:orc_find arch/x86/kernel/unwind_orc.c:227 [inline]
RIP: 0010:unwind_next_frame+0x6d5/0x22d0 arch/x86/kernel/unwind_orc.c:494
Code: 89 c1 48 c1 f9 02 48 c1 e8 3f 48 01 c8 48 83 e0 fe 49 8d 1c 46 48 89 d8 48 c1 e8 03 48 b9 00 00 00 00 00 fc ff df 0f b6 04 08 <84> c0 75 27 48 63 03 48 01 d8 48 8d 4b 04 4c 39 f8 4c 0f 46 f1 48
RSP: 0018:ffffc90000a18510 EFLAGS: 00000a03
RAX: 0000000000000000 RBX: ffffffff902c67c4 RCX: dffffc0000000000
RDX: 00000000000af1ef RSI: ffffffff90a2e74c RDI: 0000000000000001
RBP: ffffffff902c67d0 R08: 0000000000000007 R09: 0000000000000000
R10: ffffc90000a18690 R11: fffff520001430d4 R12: ffffffff902c67b8
R13: ffffffff902c67b8 R14: ffffffff902c67b8 R15: ffffffff81620d64
FS:  0000000000000000(0000) GS:ffff8880b8700000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f04cbe1e000 CR3: 000000004b768000 CR4: 00000000003526f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <NMI>
 </NMI>
 <IRQ>
 __unwind_start+0x59a/0x740 arch/x86/kernel/unwind_orc.c:760
 unwind_start arch/x86/include/asm/unwind.h:64 [inline]
 arch_stack_walk+0xe5/0x150 arch/x86/kernel/stacktrace.c:24
 stack_trace_save+0x118/0x1d0 kernel/stacktrace.c:122
 kasan_save_stack mm/kasan/common.c:47 [inline]
 kasan_save_track+0x3f/0x80 mm/kasan/common.c:68
 kasan_save_free_info+0x40/0x50 mm/kasan/generic.c:576
 poison_slab_object mm/kasan/common.c:247 [inline]
 __kasan_slab_free+0x59/0x70 mm/kasan/common.c:264
 kasan_slab_free include/linux/kasan.h:233 [inline]
 slab_free_hook mm/slub.c:2353 [inline]
 slab_free mm/slub.c:4609 [inline]
 kmem_cache_free+0x195/0x410 mm/slub.c:4711
 skb_kfree_head net/core/skbuff.c:1084 [inline]
 skb_free_head net/core/skbuff.c:1098 [inline]
 skb_release_data+0x677/0x8a0 net/core/skbuff.c:1125
 skb_release_all net/core/skbuff.c:1190 [inline]
 __kfree_skb net/core/skbuff.c:1204 [inline]
 consume_skb+0x9f/0xf0 net/core/skbuff.c:1436
 mac80211_hwsim_beacon_tx+0x3bf/0x850 drivers/net/wireless/virtual/mac80211_hwsim.c:2315
 __iterate_interfaces+0x297/0x570 net/mac80211/util.c:760
 ieee80211_iterate_active_interfaces_atomic+0xd8/0x170 net/mac80211/util.c:796
 mac80211_hwsim_beacon+0xd4/0x1f0 drivers/net/wireless/virtual/mac80211_hwsim.c:2345
 __run_hrtimer kernel/time/hrtimer.c:1801 [inline]
 __hrtimer_run_queues+0x59b/0xd30 kernel/time/hrtimer.c:1865
 hrtimer_run_softirq+0x19a/0x2c0 kernel/time/hrtimer.c:1882
 handle_softirqs+0x2d4/0x9b0 kernel/softirq.c:561
 __do_softirq kernel/softirq.c:595 [inline]
 invoke_softirq kernel/softirq.c:435 [inline]
 __irq_exit_rcu+0xf7/0x220 kernel/softirq.c:662
 irq_exit_rcu+0x9/0x30 kernel/softirq.c:678
 instr_sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1049 [inline]
 sysvec_apic_timer_interrupt+0xa6/0xc0 arch/x86/kernel/apic/apic.c:1049
 </IRQ>
 <TASK>
 asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:702
RIP: 0010:console_trylock_spinning kernel/printk/printk.c:2061 [inline]
RIP: 0010:vprintk_emit+0x700/0xa10 kernel/printk/printk.c:2431
Code: 00 e8 94 d4 20 00 4c 8d bc 24 a0 00 00 00 4d 85 e4 75 07 e8 82 d4 20 00 eb 06 e8 7b d4 20 00 fb 49 bc 00 00 00 00 00 fc ff df <48> c7 c7 80 43 81 8e 31 f6 ba 01 00 00 00 31 c9 41 b8 01 00 00 00
RSP: 0018:ffffc9000c977480 EFLAGS: 00000293
RAX: ffffffff819e7ac5 RBX: 0000000000000000 RCX: ffff888032ba5a00
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
RBP: ffffc9000c977590 R08: ffffffff819e7a9e R09: 1ffffffff2858d2d
R10: dffffc0000000000 R11: fffffbfff2858d2e R12: dffffc0000000000
R13: 1ffff9200192ee94 R14: ffffffff819e7900 R15: ffffc9000c977520
 _printk+0xd5/0x120 kernel/printk/printk.c:2457
 _btrfs_printk+0x5ae/0x5e0 fs/btrfs/messages.c:248
 btrfs_validate_extent_buffer+0x708/0x1230 fs/btrfs/disk-io.c:402
 end_bbio_meta_read+0x1fa/0x470 fs/btrfs/extent_io.c:3514
 process_one_work kernel/workqueue.c:3236 [inline]
 process_scheduled_works+0xa66/0x1840 kernel/workqueue.c:3317
 worker_thread+0x870/0xd30 kernel/workqueue.c:3398
 kthread+0x7a9/0x920 kernel/kthread.c:464
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>

Crashes (37):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2025/02/12 22:06 upstream 09fbf3d50205 b27c2402 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in jfs_flush_journal
2024/12/29 15:34 upstream 059dd502b263 d3ccff63 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in jfs_flush_journal
2024/12/21 13:45 upstream 499551201b5f d7f584ee .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in jfs_flush_journal
2024/11/29 23:02 upstream 509f806f7f70 5df23865 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in jfs_flush_journal
2024/10/07 18:26 upstream 8cf0b93919e1 d7906eff .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in jfs_flush_journal
2024/09/24 23:12 upstream 97d8894b6f4c 5643e0e9 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in jfs_flush_journal
2024/09/15 03:12 upstream 0babf683783d 08d8a733 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in jfs_flush_journal
2024/09/11 21:49 upstream 7c6a3a65ace7 d94c83d8 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in jfs_flush_journal
2024/08/26 12:16 upstream 5be63fc19fca d7d32352 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in jfs_flush_journal
2024/08/24 08:27 upstream 60f0560f53e3 d7d32352 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in jfs_flush_journal
2024/08/23 03:07 upstream aa0743a22936 ce8a9099 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in jfs_flush_journal
2024/08/15 00:36 upstream 9d5906799f7d e4bacdaf .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in jfs_flush_journal
2024/08/13 12:32 upstream d74da846046a f21a18ca .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in jfs_flush_journal
2024/08/05 14:20 upstream de9c2c66ad8e e35c337f .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in jfs_flush_journal
2024/07/04 16:00 upstream 795c58e4c7fc dc6bbff0 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in jfs_flush_journal
2024/06/23 03:42 upstream 5f583a3162ff edc5149a .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in jfs_flush_journal
2024/06/22 23:22 upstream 35bb670d65fc edc5149a .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in jfs_flush_journal
2024/06/09 15:59 upstream 771ed66105de 82c05ab8 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in jfs_flush_journal
2024/06/05 02:11 upstream 32f88d65f01b e1e2c66e .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in jfs_flush_journal
2024/05/22 14:47 upstream 8f6a15f095a6 4d098039 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in jfs_flush_journal
2024/05/11 17:33 upstream cf87f46fd34d 9026e142 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in jfs_flush_journal
2024/05/09 02:01 upstream 6d7ddd805123 20bf80e1 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in jfs_flush_journal
2024/05/03 20:57 upstream f03359bca01b dd26401e .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in jfs_flush_journal
2024/04/29 12:21 upstream e67572cd2204 27e33c58 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in jfs_flush_journal
2024/04/27 03:52 upstream 5eb4573ea63d 07b455f9 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in jfs_flush_journal
2024/04/25 11:10 upstream e88c4cfcb7b8 8bdc0f22 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in jfs_flush_journal
2024/02/07 17:40 upstream 6d280f4d760e 6404acf9 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in jfs_flush_journal
2024/02/05 20:50 upstream 54be6c6c5ae8 e23e8c20 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in jfs_flush_journal
2024/01/23 07:44 upstream 5d9248eed480 1c0ecc51 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in jfs_flush_journal
2023/12/12 00:02 upstream a39b6ac3781d 28b24332 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in jfs_flush_journal
2023/12/07 11:57 upstream bee0e7762ad2 0a02ce36 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in jfs_flush_journal
2023/10/02 18:44 upstream 8a749fd1a872 50b20e75 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in jfs_flush_journal
2023/09/13 02:15 upstream a747acc0b752 59da8366 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in jfs_flush_journal
2024/12/02 08:03 linux-next f486c8aa16b8 68914665 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in jfs_flush_journal
2024/06/22 22:47 linux-next f76698bd9a8c edc5149a .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in jfs_flush_journal
2024/06/05 21:21 linux-next 234cb065ad82 121701b6 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in jfs_flush_journal
2024/04/14 11:14 linux-next 9ed46da14b9b c8349e48 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in jfs_flush_journal