syzbot


INFO: task hung in __f2fs_ioctl

Status: auto-obsoleted due to no activity on 2024/01/27 09:43
Subsystems: f2fs
First crash: 440d, last: 336d
Similar bugs (2)
Kernel | Title | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
upstream | INFO: task hung in __f2fs_ioctl (2) [f2fs] | - | - | - | 1 | 230d | 230d | 0/28 | auto-obsoleted due to no activity on 2024/05/30 13:06
linux-6.1 | INFO: task hung in __f2fs_ioctl | - | - | - | 1 | 100d | 100d | 0/3 | auto-obsoleted due to no activity on 2024/10/17 13:48

Sample crash report:
INFO: task syz-executor.3:13464 blocked for more than 143 seconds.
      Not tainted 6.7.0-rc1-syzkaller-00019-gc42d9eeef8e5 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.3  state:D stack:23384 pid:13464 tgid:13462 ppid:5092   flags:0x00004006
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5376 [inline]
 __schedule+0x1961/0x4ab0 kernel/sched/core.c:6688
 __schedule_loop kernel/sched/core.c:6763 [inline]
 schedule+0x149/0x260 kernel/sched/core.c:6778
 schedule_preempt_disabled+0x13/0x20 kernel/sched/core.c:6835
 rwsem_down_write_slowpath+0xeea/0x13b0 kernel/locking/rwsem.c:1178
 __down_write_common+0x1aa/0x200 kernel/locking/rwsem.c:1306
 inode_lock include/linux/fs.h:802 [inline]
 f2fs_ioc_commit_atomic_write fs/f2fs/file.c:2172 [inline]
 __f2fs_ioctl+0x35a4/0xbb70 fs/f2fs/file.c:4234
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:871 [inline]
 __se_sys_ioctl+0xf8/0x170 fs/ioctl.c:857
 do_syscall_x64 arch/x86/entry/common.c:51 [inline]
 do_syscall_64+0x45/0x110 arch/x86/entry/common.c:82
 entry_SYSCALL_64_after_hwframe+0x63/0x6b
RIP: 0033:0x7f5f4567cae9
RSP: 002b:00007f5f4635c0c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007f5f4579bf80 RCX: 00007f5f4567cae9
RDX: 0000000000008301 RSI: 000000000000f502 RDI: 0000000000000006
RBP: 00007f5f456c847a R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000000b R14: 00007f5f4579bf80 R15: 00007fffcbe43d78
 </TASK>
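
The register dump decodes the blocked syscall: ORIG_RAX is 0x10 (__NR_ioctl on x86_64), RDI = 6 is the file descriptor, and RSI = 0xf502 is the ioctl command, which equals _IO(0xf5, 2), i.e. F2FS_IOC_COMMIT_ATOMIC_WRITE from include/uapi/linux/f2fs.h -- consistent with the f2fs_ioc_commit_atomic_write frame in the trace. Below is a minimal userspace sketch of that ioctl sequence; the mount point, file name, and payload are illustrative assumptions, since the actual syzkaller reproducer is not available:

    /* Sketch of the F2FS atomic-write ioctl sequence implicated above.
     * Assumes an f2fs filesystem mounted at /mnt/f2fs; the path and
     * payload are illustrative, not taken from a reproducer. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/f2fs.h>    /* F2FS_IOC_{START,COMMIT}_ATOMIC_WRITE */

    int main(void)
    {
        int fd = open("/mnt/f2fs/atomic_file", O_RDWR | O_CREAT, 0600);
        if (fd < 0)
            return 1;
        /* Open an atomic-write session on the inode. */
        if (ioctl(fd, F2FS_IOC_START_ATOMIC_WRITE) < 0)
            perror("F2FS_IOC_START_ATOMIC_WRITE");
        (void)write(fd, "data", 4);
        /* 0xf502 == _IO(0xf5, 2) == F2FS_IOC_COMMIT_ATOMIC_WRITE: the
         * call that takes inode_lock() and hangs in the report above. */
        if (ioctl(fd, F2FS_IOC_COMMIT_ATOMIC_WRITE) < 0)
            perror("F2FS_IOC_COMMIT_ATOMIC_WRITE");
        return close(fd);
    }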

Showing all locks held in the system:
3 locks held by kworker/1:0/23:
 #0: ffff8880b993c358 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x2a/0x140 kernel/sched/core.c:558
 #1: ffff8880b9928808 (&per_cpu_ptr(group->pcpu, cpu)->seq){-.-.}-{0:0}, at: psi_task_switch+0x441/0x770 kernel/sched/psi.c:988
 #2: ffff88814167b8c8 (&ACCESS_PRIVATE(ssp->srcu_sup, lock)){....}-{2:2}, at: spin_lock_irq include/linux/spinlock.h:376 [inline]
 #2: ffff88814167b8c8 (&ACCESS_PRIVATE(ssp->srcu_sup, lock)){....}-{2:2}, at: srcu_reschedule+0x45/0x170 kernel/rcu/srcutree.c:1768
1 lock held by khungtaskd/29:
 #0: ffffffff8d92d060 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:301 [inline]
 #0: ffffffff8d92d060 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:747 [inline]
 #0: ffffffff8d92d060 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x55/0x2a0 kernel/locking/lockdep.c:6613
2 locks held by kworker/0:3/4702:
 #0: ffff888012c72938 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2605 [inline]
 #0: ffff888012c72938 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_scheduled_works+0x825/0x1420 kernel/workqueue.c:2703
 #1: ffffc9000354fd20 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2605 [inline]
 #1: ffffc9000354fd20 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_scheduled_works+0x825/0x1420 kernel/workqueue.c:2703
2 locks held by getty/4817:
 #0: ffff888140e8d0a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc90002f062f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6b4/0x1e10 drivers/tty/n_tty.c:2201
4 locks held by kworker/u4:10/8198:
 #0: ffff888141e55138 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2605 [inline]
 #0: ffff888141e55138 ((wq_completion)writeback){+.+.}-{0:0}, at: process_scheduled_works+0x825/0x1420 kernel/workqueue.c:2703
 #1: ffffc90005337d20 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:2605 [inline]
 #1: ffffc90005337d20 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x825/0x1420 kernel/workqueue.c:2703
 #2: ffff88802624a0e0 (&type->s_umount_key#74){++++}-{3:3}, at: super_trylock_shared+0x22/0xf0 fs/super.c:610
 #3: ffff888039d212a8 (&sbi->gc_lock){+.+.}-{3:3}, at: f2fs_down_write fs/f2fs/f2fs.h:2133 [inline]
 #3: ffff888039d212a8 (&sbi->gc_lock){+.+.}-{3:3}, at: f2fs_balance_fs+0x500/0x730 fs/f2fs/segment.c:437
2 locks held by syz-executor.3/13464:
 #0: ffff88802624a418 (sb_writers#14){.+.+}-{0:0}, at: mnt_want_write_file+0x61/0x200 fs/namespace.c:448
 #1: ffff88803a749300 (&sb->s_type->i_mutex_key#22){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:802 [inline]
 #1: ffff88803a749300 (&sb->s_type->i_mutex_key#22){+.+.}-{3:3}, at: f2fs_ioc_commit_atomic_write fs/f2fs/file.c:2172 [inline]
 #1: ffff88803a749300 (&sb->s_type->i_mutex_key#22){+.+.}-{3:3}, at: __f2fs_ioctl+0x35a4/0xbb70 fs/f2fs/file.c:4234
7 locks held by syz-executor.3/13504:
2 locks held by syz-executor.1/15181:
 #0: ffff88802624a0e0 (&type->s_umount_key#74){++++}-{3:3}, at: __super_lock fs/super.c:58 [inline]
 #0: ffff88802624a0e0 (&type->s_umount_key#74){++++}-{3:3}, at: super_lock+0x176/0x3a0 fs/super.c:117
 #1: ffff8881413ce7d0 (&bdi->wb_switch_rwsem){+.+.}-{3:3}, at: bdi_down_write_wb_switch_rwsem fs/fs-writeback.c:364 [inline]
 #1: ffff8881413ce7d0 (&bdi->wb_switch_rwsem){+.+.}-{3:3}, at: sync_inodes_sb+0x274/0xb20 fs/fs-writeback.c:2756
2 locks held by syz-executor.3/15214:
4 locks held by syz-executor.4/15216:
1 lock held by syz-executor.0/15217:
1 lock held by syz-executor.2/15222:
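
A note on reading this dump: pid 13464 is the hung task from the backtrace above, and lockdep lists its lock #1 at the very inode_lock() call the task is blocked in, so that entry is an acquisition in progress rather than a lock it owns. The owner of the contended inode rwsem is not visible here, since the seven locks of pid 13504 are not itemized. A schematic of the apparent wait relationships, offered as an interpretation of this dump only:

    /*
     * Apparent state, reconstructed from the lockdep dump above.
     * Interpretation only: pid 13504's seven locks are not itemized,
     * so the actual owner of the contended i_rwsem is unconfirmed.
     *
     *   syz-executor.3/13464  holds sb_writers#14 (mnt_want_write_file),
     *                         waits in down_write(&inode->i_rwsem) via
     *                         f2fs_ioc_commit_atomic_write()
     *   syz-executor.3/13504  presumed owner of that i_rwsem (7 locks held)
     *   kworker/u4:10/8198    holds s_umount (shared) and sbi->gc_lock
     *                         via f2fs_balance_fs() during writeback
     */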

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 29 Comm: khungtaskd Not tainted 6.7.0-rc1-syzkaller-00019-gc42d9eeef8e5 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 11/10/2023
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x1e7/0x2d0 lib/dump_stack.c:106
 nmi_cpu_backtrace+0x498/0x4d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x198/0x310 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:222 [inline]
 watchdog+0xfaf/0xff0 kernel/hung_task.c:379
 kthread+0x2d3/0x370 kernel/kthread.c:388
 ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:242
 </TASK>
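
For reference, khungtaskd flags any task that stays in uninterruptible sleep (state D, like pid 13464 above) longer than kernel.hung_task_timeout_secs; as the message at the top of the report notes, writing 0 to that sysctl disables the check. A small sketch of adjusting the timeout from C, where root is required and the 300-second value is an arbitrary example:

    /* Sketch: set the hung-task watchdog timeout, equivalent to
     * `echo 300 > /proc/sys/kernel/hung_task_timeout_secs` (writing 0
     * disables the check). 300 is an arbitrary example value. */
    #include <stdio.h>

    int main(void)
    {
        FILE *f = fopen("/proc/sys/kernel/hung_task_timeout_secs", "w");
        if (!f) {
            perror("hung_task_timeout_secs");
            return 1;
        }
        fprintf(f, "%d\n", 300);
        return fclose(f) ? 1 : 0;
    }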
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 15227 Comm: syz-executor.5 Not tainted 6.7.0-rc1-syzkaller-00019-gc42d9eeef8e5 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 11/10/2023
RIP: 0010:kasan_check_range+0xc/0x290 mm/kasan/generic.c:186
Code: 48 ff c8 48 39 d8 0f 84 3b ff ff ff 48 89 df 48 c7 c6 8c 8b 12 8d e8 c3 10 e7 ff 90 0f 0b 66 0f 1f 00 55 41 57 41 56 41 54 53 <b0> 01 48 85 f6 0f 84 a0 01 00 00 4c 8d 04 37 49 39 f8 0f 82 52 02
RSP: 0018:ffffc9000582f1f8 EFLAGS: 00000046
RAX: 0000000000000000 RBX: 0000000000000001 RCX: ffffffff81955fdc
RDX: 0000000000000000 RSI: 0000000000000008 RDI: ffffffff8f00c128
RBP: ffffc9000582f308 R08: ffff88813fffad3f R09: 1ffff11027fff5a7
R10: dffffc0000000000 R11: ffffed1027fff5a8 R12: dffffc0000000000
R13: ffff88813fffa808 R14: 1ffff92000b05e4c R15: ffffc9000582f2a0
FS:  00007fbbf75006c0(0000) GS:ffff8880b9900000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fbbed1fe800 CR3: 000000002a9c9000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <NMI>
 </NMI>
 <TASK>
 instrument_atomic_read include/linux/instrumented.h:68 [inline]
 _test_bit include/asm-generic/bitops/instrumented-non-atomic.h:141 [inline]
 cpumask_test_cpu include/linux/cpumask.h:504 [inline]
 cpu_online include/linux/cpumask.h:1104 [inline]
 trace_irq_disable+0x2c/0xf0 include/trace/events/preemptirq.h:36
 seqcount_lockdep_reader_access+0x103/0x1e0 include/linux/seqlock.h:101
 read_seqbegin include/linux/seqlock.h:847 [inline]
 zone_span_seqbegin include/linux/memory_hotplug.h:134 [inline]
 page_outside_zone_boundaries mm/page_alloc.c:450 [inline]
 bad_range+0x5f/0x270 mm/page_alloc.c:469
 rmqueue mm/page_alloc.c:2917 [inline]
 get_page_from_freelist+0x33c5/0x3570 mm/page_alloc.c:3309
 __alloc_pages+0x255/0x680 mm/page_alloc.c:4568
 alloc_pages_mpol+0x3de/0x640 mm/mempolicy.c:2133
 shmem_alloc_folio mm/shmem.c:1613 [inline]
 shmem_alloc_and_add_folio+0x24f/0xde0 mm/shmem.c:1653
 shmem_get_folio_gfp+0x7c3/0x1ee0 mm/shmem.c:2037
 shmem_get_folio mm/shmem.c:2119 [inline]
 shmem_write_begin+0x170/0x4c0 mm/shmem.c:2702
 generic_perform_write+0x31b/0x630 mm/filemap.c:3918
 shmem_file_write_iter+0xfc/0x120 mm/shmem.c:2878
 call_write_iter include/linux/fs.h:2020 [inline]
 new_sync_write fs/read_write.c:491 [inline]
 vfs_write+0x792/0xb20 fs/read_write.c:584
 ksys_write+0x1a0/0x2c0 fs/read_write.c:637
 do_syscall_x64 arch/x86/entry/common.c:51 [inline]
 do_syscall_64+0x45/0x110 arch/x86/entry/common.c:82
 entry_SYSCALL_64_after_hwframe+0x63/0x6b
RIP: 0033:0x7fbbf687b82f
Code: 89 54 24 18 48 89 74 24 10 89 7c 24 08 e8 b9 80 02 00 48 8b 54 24 18 48 8b 74 24 10 41 89 c0 8b 7c 24 08 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 31 44 89 c7 48 89 44 24 08 e8 0c 81 02 00 48
RSP: 002b:00007fbbf74ffe70 EFLAGS: 00000293 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 0000000000200000 RCX: 00007fbbf687b82f
RDX: 0000000000200000 RSI: 00007fbbecfff000 RDI: 0000000000000003
RBP: 0000000000000000 R08: 0000000000000000 R09: 000000000001ee44
R10: 000000002001eec2 R11: 0000000000000293 R12: 0000000000000003
R13: 00007fbbf74fff3c R14: 00007fbbf74fff40 R15: 00007fbbecfff000
 </TASK>

Crashes (21):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2023/11/16 10:32 | upstream | c42d9eeef8e5 | cb976f63 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | INFO: task hung in __f2fs_ioctl
2023/11/09 20:20 | upstream | 6bc986ab839c | 56230772 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci-upstream-kasan-gce-root | INFO: task hung in __f2fs_ioctl
2023/11/08 17:23 | upstream | 305230142ae0 | df3908d6 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci-upstream-kasan-gce-root | INFO: task hung in __f2fs_ioctl
2023/11/08 13:42 | upstream | 305230142ae0 | b93f63e8 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | INFO: task hung in __f2fs_ioctl
2023/11/01 04:10 | upstream | 89ed67ef126c | 69904c9f | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | INFO: task hung in __f2fs_ioctl
2023/10/31 18:22 | upstream | 5a6a09e97199 | 58499c95 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | INFO: task hung in __f2fs_ioctl
2023/10/27 15:04 | upstream | 750b95887e56 | 3c418d72 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | INFO: task hung in __f2fs_ioctl
2023/10/21 21:26 | upstream | d537ae43f8a1 | 361b23dc | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | INFO: task hung in __f2fs_ioctl
2023/10/14 05:36 | upstream | 8cb1f10d8c4b | f757a323 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | INFO: task hung in __f2fs_ioctl
2023/10/12 04:55 | upstream | 8182d7a3f1b8 | 83165b57 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci-upstream-kasan-gce-smack-root | INFO: task hung in __f2fs_ioctl
2023/10/10 05:56 | upstream | 94f6f0550c62 | c9be5398 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci-upstream-kasan-gce-root | INFO: task hung in __f2fs_ioctl
2023/09/30 07:34 | upstream | 9f3ebbef746f | 8e26a358 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | INFO: task hung in __f2fs_ioctl
2023/09/11 11:31 | upstream | 0bb80ecc33a8 | 59da8366 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | INFO: task hung in __f2fs_ioctl
2023/09/09 01:18 | upstream | a48fa7efaf11 | 6654cf89 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | INFO: task hung in __f2fs_ioctl
2023/08/27 11:19 | upstream | 28f20a19294d | 7ba13a15 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | INFO: task hung in __f2fs_ioctl
2023/08/23 19:36 | upstream | 89bf6209cad6 | b81ca3f6 | .config | console log | report | - | - | info | - | ci2-upstream-fs | INFO: task hung in __f2fs_ioctl
2023/08/20 16:24 | upstream | 706a74159504 | d216d8a0 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | INFO: task hung in __f2fs_ioctl
2023/08/06 03:23 | upstream | f6a691685962 | 4ffcc9ef | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | INFO: task hung in __f2fs_ioctl
2023/08/04 06:00 | upstream | 7bafbd4027ae | 74621247 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci-upstream-kasan-gce-root | INFO: task hung in __f2fs_ioctl
2023/08/26 16:54 | linux-next | 626932085009 | 03d9c195 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci-upstream-linux-next-kasan-gce-root | INFO: task hung in __f2fs_ioctl
2023/08/06 00:35 | linux-next | bdffb18b5dd8 | 4ffcc9ef | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci-upstream-linux-next-kasan-gce-root | INFO: task hung in __f2fs_ioctl