syzbot


INFO: task hung in f2fs_balance_fs

Status: auto-obsoleted due to no activity on 2023/08/22 15:19
Reported-by: syzbot+8bb6c95aba352a8b0f0e@syzkaller.appspotmail.com
First crash: 414d, last: 351d
Similar bugs (6)
| Kernel     | Title                                              | Repro | Cause bisect | Fix bisect   | Count | Last  | Reported | Patched | Status                                                    |
|------------|----------------------------------------------------|-------|--------------|--------------|-------|-------|----------|---------|-----------------------------------------------------------|
| linux-4.19 | INFO: task hung in f2fs_balance_fs f2fs            | C     |              |              | 199   | 415d  | 554d     | 0/1     | upstream: reported C repro on 2022/10/18 13:55            |
| linux-6.1  | INFO: task hung in f2fs_balance_fs (2)             |       |              |              | 2     | 149d  | 171d     | 0/3     | auto-obsoleted due to no activity on 2024/03/06 18:04     |
| linux-6.1  | INFO: task hung in f2fs_balance_fs (3)             |       |              |              | 15    | 9h29m | 29d      | 0/3     | upstream: reported on 2024/03/27 06:53                    |
| linux-4.14 | INFO: task hung in f2fs_balance_fs                 |       |              |              | 3     | 524d  | 550d     | 0/1     | auto-obsoleted due to no activity on 2023/03/18 03:56     |
| upstream   | INFO: task hung in f2fs_balance_fs f2fs            | C     | done         | inconclusive | 363   | 4h35m | 407d     | 0/26    | upstream: reported C repro on 2023/03/15 03:28            |
| linux-5.15 | INFO: task hung in f2fs_balance_fs origin:upstream | C     |              |              | 44    | 9h37m | 413d     | 0/3     | upstream: reported C repro on 2023/03/08 13:34            |

Sample crash report:
INFO: task kworker/u4:4:3680 blocked for more than 143 seconds.
      Not tainted 6.1.27-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/u4:4    state:D stack:25464 pid:3680  ppid:2      flags:0x00004000
Workqueue: writeback wb_workfn (flush-7:3)
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5241 [inline]
 __schedule+0x132c/0x4330 kernel/sched/core.c:6554
 schedule+0xbf/0x180 kernel/sched/core.c:6630
 rwsem_down_write_slowpath+0xe93/0x14a0 kernel/locking/rwsem.c:1189
 f2fs_down_write fs/f2fs/f2fs.h:2207 [inline]
 f2fs_balance_fs+0x4fb/0x6c0 fs/f2fs/segment.c:418
 f2fs_write_inode+0x4c3/0x540 fs/f2fs/inode.c:757
 write_inode fs/fs-writeback.c:1443 [inline]
 __writeback_single_inode+0x67d/0x11e0 fs/fs-writeback.c:1655
 writeback_sb_inodes+0xc21/0x1ac0 fs/fs-writeback.c:1881
 __writeback_inodes_wb+0x114/0x400 fs/fs-writeback.c:1952
 wb_writeback+0x4b1/0xe10 fs/fs-writeback.c:2057
 wb_check_old_data_flush fs/fs-writeback.c:2157 [inline]
 wb_do_writeback fs/fs-writeback.c:2210 [inline]
 wb_workfn+0xbec/0x1020 fs/fs-writeback.c:2238
 process_one_work+0x8aa/0x11f0 kernel/workqueue.c:2289
 worker_thread+0xa5f/0x1210 kernel/workqueue.c:2436
 kthread+0x26e/0x300 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
 </TASK>

Showing all locks held in the system:
1 lock held by rcu_tasks_kthre/12:
 #0: ffffffff8cf273f0 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x29/0xd20 kernel/rcu/tasks.h:510
1 lock held by rcu_tasks_trace/13:
 #0: ffffffff8cf27bf0 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x29/0xd20 kernel/rcu/tasks.h:510
1 lock held by khungtaskd/28:
 #0: ffffffff8cf27220 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x0/0x30
1 lock held by hwrng/769:
 #0: ffffffff8d6731e8 (reading_mutex){+.+.}-{3:3}, at: hwrng_fillfn+0xe2/0x3b0 drivers/char/hw_random/core.c:506
1 lock held by udevd/3000:
2 locks held by getty/3301:
 #0: ffff88814bc15098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x21/0x70 drivers/tty/tty_ldisc.c:244
 #1: ffffc900031262f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6a7/0x1db0 drivers/tty/n_tty.c:2177
5 locks held by syz-executor.5/3586:
2 locks held by kworker/u4:19/5532:
 #0: ffff888012469138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x77a/0x11f0
 #1: ffffc9001624fd20 (connector_reaper_work){+.+.}-{0:0}, at: process_one_work+0x7bd/0x11f0 kernel/workqueue.c:2264
3 locks held by kworker/u4:21/5536:
1 lock held by syz-executor.1/15722:
 #0: ffffffff8cf2c7f8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:324 [inline]
 #0: ffffffff8cf2c7f8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x479/0x8a0 kernel/rcu/tree_exp.h:948
2 locks held by kworker/u4:5/28961:
 #0: ffff888012469138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x77a/0x11f0
 #1: ffffc90005467d20 ((reaper_work).work){+.+.}-{0:0}, at: process_one_work+0x7bd/0x11f0 kernel/workqueue.c:2264
2 locks held by kworker/0:3/1610:
 #0: ffff888012466538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work+0x77a/0x11f0
 #1: ffffc9000a93fd20 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_one_work+0x7bd/0x11f0 kernel/workqueue.c:2264
3 locks held by kworker/0:4/1667:
 #0: ffff888012464d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x77a/0x11f0
 #1: ffffc9000b587d20 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x7bd/0x11f0 kernel/workqueue.c:2264
 #2: ffffffff8cf2c7f8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:292 [inline]
 #2: ffffffff8cf2c7f8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x3b0/0x8a0 kernel/rcu/tree_exp.h:948
3 locks held by syz-executor.3/3354:
6 locks held by kworker/1:1/3679:
 #0: ffff8880172d8538 ((wq_completion)usb_hub_wq){+.+.}-{0:0}, at: process_one_work+0x77a/0x11f0
 #1: ffffc9000560fd20 ((work_completion)(&hub->events)){+.+.}-{0:0}, at: process_one_work+0x7bd/0x11f0 kernel/workqueue.c:2264
 #2: ffff888145b68190 (&dev->mutex){....}-{3:3}, at: device_lock include/linux/device.h:836 [inline]
 #2: ffff888145b68190 (&dev->mutex){....}-{3:3}, at: hub_event+0x20e/0x57b0 drivers/usb/core/hub.c:5683
 #3: ffff888145b6b4f8 (&port_dev->status_lock){+.+.}-{3:3}, at: usb_lock_port drivers/usb/core/hub.c:3105 [inline]
 #3: ffff888145b6b4f8 (&port_dev->status_lock){+.+.}-{3:3}, at: hub_port_connect drivers/usb/core/hub.c:5251 [inline]
 #3: ffff888145b6b4f8 (&port_dev->status_lock){+.+.}-{3:3}, at: hub_port_connect_change drivers/usb/core/hub.c:5499 [inline]
 #3: ffff888145b6b4f8 (&port_dev->status_lock){+.+.}-{3:3}, at: port_event drivers/usb/core/hub.c:5655 [inline]
 #3: ffff888145b6b4f8 (&port_dev->status_lock){+.+.}-{3:3}, at: hub_event+0x2278/0x57b0 drivers/usb/core/hub.c:5737
 #4: ffff8880b9927788 (&per_cpu_ptr(group->pcpu, cpu)->seq){-.-.}-{0:0}, at: psi_task_switch+0x43d/0x770 kernel/sched/psi.c:952
 #5: ffffffff91cdce38 (&obj_hash[i].lock){-.-.}-{2:2}, at: debug_object_activate+0x97/0x690 lib/debugobjects.c:665
4 locks held by kworker/u4:4/3680:
 #0: ffff888142a86938 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work+0x77a/0x11f0
 #1: ffffc9000561fd20 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work+0x7bd/0x11f0 kernel/workqueue.c:2264
 #2: ffff8880804120e0 (&type->s_umount_key#79){++++}-{3:3}, at: trylock_super+0x1b/0xf0 fs/super.c:415
 #3: ffff88808ce39140 (&sbi->gc_lock){+.+.}-{3:3}, at: f2fs_down_write fs/f2fs/f2fs.h:2207 [inline]
 #3: ffff88808ce39140 (&sbi->gc_lock){+.+.}-{3:3}, at: f2fs_balance_fs+0x4fb/0x6c0 fs/f2fs/segment.c:418
2 locks held by syz-executor.0/6287:
1 lock held by syz-executor.2/6407:
 #0: ffffffff8cf2c6c0 (rcu_state.barrier_mutex){+.+.}-{3:3}, at: rcu_barrier+0x48/0x5f0 kernel/rcu/tree.c:3953
2 locks held by syz-executor.0/6440:
2 locks held by syz-executor.0/6445:
1 lock held by syz-executor.3/6447:
 #0: ffffc9000aed90a8 (&kvm->slots_lock){+.+.}-{3:3}, at: kvm_free_pit+0x52/0x210 arch/x86/kvm/i8254.c:741

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 28 Comm: khungtaskd Not tainted 6.1.27-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/14/2023
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
 nmi_cpu_backtrace+0x4e1/0x560 lib/nmi_backtrace.c:111
 nmi_trigger_cpumask_backtrace+0x1b0/0x3f0 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:220 [inline]
 watchdog+0xf18/0xf60 kernel/hung_task.c:377
 kthread+0x26e/0x300 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 6440 Comm: syz-executor.0 Not tainted 6.1.27-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/14/2023
RIP: 0010:kasan_check_range+0x1/0x290 mm/kasan/generic.c:188
Code: 01 c6 48 89 c7 e8 1f 2c a2 08 31 c0 c3 0f 0b b8 ea ff ff ff c3 0f 0b b8 ea ff ff ff c3 cc cc cc cc cc cc cc cc cc cc cc cc 55 <41> 57 41 56 53 b0 01 48 85 f6 0f 84 9a 01 00 00 48 89 fd 48 01 f5
RSP: 0018:ffffc9000a9274d0 EFLAGS: 00000286
RAX: 0000000000000004 RBX: 0000000000000000 RCX: ffffffff81691cec
RDX: 0000000000000001 RSI: 0000000000000008 RDI: ffff888074becd28
RBP: ffffc9000a9275b0 R08: dffffc0000000000 R09: ffffed100e97d9a6
R10: 0000000000000000 R11: dffffc0000000001 R12: ffff888074becd28
R13: 1ffff92001524ea8 R14: ffff888037fb0000 R15: ffff888037fb0000
FS:  00007f46467dd700(0000) GS:ffff8880b9800000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000000020395030 CR3: 00000000938f9000 CR4: 00000000003526f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <TASK>
 instrument_atomic_read_write include/linux/instrumented.h:102 [inline]
 atomic_long_try_cmpxchg_acquire include/linux/atomic/atomic-instrumented.h:1779 [inline]
 __mutex_trylock_common+0x16c/0x2e0 kernel/locking/mutex.c:129
 __mutex_trylock kernel/locking/mutex.c:152 [inline]
 __mutex_lock_common+0x1ef/0x2520 kernel/locking/mutex.c:606
 __mutex_lock kernel/locking/mutex.c:747 [inline]
 mutex_lock_nested+0x17/0x20 kernel/locking/mutex.c:799
 __unix_dgram_recvmsg+0x24d/0x1260 net/unix/af_unix.c:2448
 ____sys_recvmsg+0x285/0x530
 ___sys_recvmsg net/socket.c:2743 [inline]
 do_recvmmsg+0x46d/0xad0 net/socket.c:2837
 __sys_recvmmsg net/socket.c:2916 [inline]
 __do_sys_recvmmsg net/socket.c:2939 [inline]
 __se_sys_recvmmsg net/socket.c:2932 [inline]
 __x64_sys_recvmmsg+0x195/0x240 net/socket.c:2932
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f4647c8c169
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 f1 19 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f46467dd168 EFLAGS: 00000246 ORIG_RAX: 000000000000012b
RAX: ffffffffffffffda RBX: 00007f4647dac050 RCX: 00007f4647c8c169
RDX: 0000000000010106 RSI: 00000000200000c0 RDI: 0000000000000005
RBP: 00007f4647ce7ca1 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000002 R11: 0000000000000246 R12: 0000000000000000
R13: 00007ffe88caf26f R14: 00007f46467dd300 R15: 0000000000022000
 </TASK>

Crashes (20):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets (help?) Manager Title
2023/05/10 03:08 linux-6.1.y ca48fc16c493 30aa2a7e .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan INFO: task hung in f2fs_balance_fs
2023/05/10 03:07 linux-6.1.y ca48fc16c493 30aa2a7e .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan INFO: task hung in f2fs_balance_fs
2023/05/02 00:52 linux-6.1.y ca48fc16c493 62df2017 .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan INFO: task hung in f2fs_balance_fs
2023/05/02 00:18 linux-6.1.y ca48fc16c493 62df2017 .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan INFO: task hung in f2fs_balance_fs
2023/04/01 21:31 linux-6.1.y 3b29299e5f60 f325deb0 .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan INFO: task hung in f2fs_balance_fs
2023/03/28 02:52 linux-6.1.y e3a87a10f259 47f3aaf1 .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan INFO: task hung in f2fs_balance_fs
2023/03/27 16:43 linux-6.1.y e3a87a10f259 f8f96aa9 .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan INFO: task hung in f2fs_balance_fs
2023/03/26 00:05 linux-6.1.y e3a87a10f259 fbf0499a .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan INFO: task hung in f2fs_balance_fs
2023/03/21 00:10 linux-6.1.y 7eaef76fbc46 7939252e .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan INFO: task hung in f2fs_balance_fs
2023/03/19 17:24 linux-6.1.y 7eaef76fbc46 7939252e .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan INFO: task hung in f2fs_balance_fs
2023/03/07 21:49 linux-6.1.y 42616e0f09fb d7ea8bc4 .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan INFO: task hung in f2fs_balance_fs
2023/04/23 11:48 linux-6.1.y f17b0ab65d17 2b32bd34 .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan-arm64 INFO: task hung in f2fs_balance_fs
2023/04/19 00:24 linux-6.1.y 0102425ac76b d931e9f0 .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan-arm64 INFO: task hung in f2fs_balance_fs
2023/04/16 06:15 linux-6.1.y 0102425ac76b ec410564 .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan-arm64 INFO: task hung in f2fs_balance_fs
2023/04/15 02:26 linux-6.1.y 0102425ac76b ec410564 .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan-arm64 INFO: task hung in f2fs_balance_fs
2023/04/14 03:45 linux-6.1.y 0102425ac76b 3cfcaa1b .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan-arm64 INFO: task hung in f2fs_balance_fs
2023/04/10 04:30 linux-6.1.y 543aff194ab6 71147e29 .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan-arm64 INFO: task hung in f2fs_balance_fs
2023/04/05 22:28 linux-6.1.y 3b29299e5f60 8b834965 .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan-arm64 INFO: task hung in f2fs_balance_fs
2023/04/03 00:40 linux-6.1.y 3b29299e5f60 f325deb0 .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan-arm64 INFO: task hung in f2fs_balance_fs
2023/03/30 01:20 linux-6.1.y e3a87a10f259 f325deb0 .config console log report info [disk image] [vmlinux] [kernel image] ci2-linux-6-1-kasan-arm64 INFO: task hung in f2fs_balance_fs
* Struck through repros no longer work on HEAD.