syzbot


INFO: task hung in nilfs_segctor_thread (2)

Status: upstream: reported on 2024/02/19 10:54
Subsystems: nilfs
Reported-by: syzbot+c8166c541d3971bf6c87@syzkaller.appspotmail.com
First crash: 82d, last: 17h17m
Discussions (3):
Title | Replies (incl. bot) | Last reply
[syzbot] Monthly nilfs report (May 2024) | 0 (1) | 2024/05/06 13:18
[syzbot] Monthly nilfs report (Mar 2024) | 0 (1) | 2024/03/05 11:10
[syzbot] [nilfs?] INFO: task hung in nilfs_segctor_thread (2) | 1 (2) | 2024/02/19 12:32
Similar bugs (4):
Kernel | Title | Repro | Cause bisect | Count | Last | Reported | Patched | Status
linux-4.19 | INFO: task hung in nilfs_segctor_thread [nilfs2] | C | - | 41 | 447d | 548d | 0/1 | upstream: reported C repro on 2022/11/05 03:06
upstream | INFO: task hung in nilfs_segctor_thread [nilfs] | C | error | 94 | 370d | 544d | 22/26 | fixed on 2023/06/08 14:41
linux-5.15 | INFO: task hung in nilfs_segctor_thread | - | - | 3 | 380d | 392d | 0/3 | auto-obsoleted due to no activity on 2023/08/20 05:29
linux-4.14 | INFO: task hung in nilfs_segctor_thread [nilfs2] | C | - | 1 | 445d | 445d | 0/1 | upstream: reported C repro on 2023/02/16 07:03

Sample crash report:
INFO: task segctord:13097 blocked for more than 143 seconds.
      Not tainted 6.9.0-rc7-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:segctord        state:D stack:27736 pid:13097 tgid:13097 ppid:2      flags:0x00004000
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5409 [inline]
 __schedule+0x1796/0x4a00 kernel/sched/core.c:6746
 __schedule_loop kernel/sched/core.c:6823 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6838
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6895
 rwsem_down_write_slowpath+0xeeb/0x13b0 kernel/locking/rwsem.c:1178
 __down_write_common+0x1af/0x200 kernel/locking/rwsem.c:1306
 nilfs_transaction_lock+0x25d/0x4f0 fs/nilfs2/segment.c:357
 nilfs_segctor_thread_construct fs/nilfs2/segment.c:2488 [inline]
 nilfs_segctor_thread+0x535/0x1150 fs/nilfs2/segment.c:2573
 kthread+0x2f0/0x390 kernel/kthread.c:388
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
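The hung-task watchdog message quoted above points at a standard procfs sysctl. A minimal, read-only sketch of inspecting that knob (the write shown in the log message requires root, and the file is absent when the kernel is built without CONFIG_DETECT_HUNG_TASK):

```shell
# Read the hung-task watchdog timeout in seconds; 0 means detection is off.
# To silence the warning, as the log message itself suggests (root required):
#   echo 0 > /proc/sys/kernel/hung_task_timeout_secs
cat /proc/sys/kernel/hung_task_timeout_secs 2>/dev/null \
  || echo "hung_task knob not present (CONFIG_DETECT_HUNG_TASK disabled?)"
```

On the syzbot instance that produced this report the effective timeout was 143 seconds, per the first line of the crash output.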

Showing all locks held in the system:
2 locks held by kworker/u8:0/10:
 #0: ffff88801a35a948 ((wq_completion)iou_exit){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3242 [inline]
 #0: ffff88801a35a948 ((wq_completion)iou_exit){+.+.}-{0:0}, at: process_scheduled_works+0x8e0/0x17c0 kernel/workqueue.c:3348
 #1: ffffc900000f7d00 ((work_completion)(&ctx->exit_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3243 [inline]
 #1: ffffc900000f7d00 ((work_completion)(&ctx->exit_work)){+.+.}-{0:0}, at: process_scheduled_works+0x91b/0x17c0 kernel/workqueue.c:3348
1 lock held by khungtaskd/29:
 #0: ffffffff8e334da0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:329 [inline]
 #0: ffffffff8e334da0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:781 [inline]
 #0: ffffffff8e334da0 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x55/0x2a0 kernel/locking/lockdep.c:6614
2 locks held by getty/4828:
 #0: ffff88802ac260a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc90002f162f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6b5/0x1e10 drivers/tty/n_tty.c:2201
2 locks held by kworker/u8:9/31650:
2 locks held by syz-executor.0/13068:
1 lock held by segctord/13097:
 #0: ffff88805a08a2a0 (&nilfs->ns_segctor_sem){++++}-{3:3}, at: nilfs_transaction_lock+0x25d/0x4f0 fs/nilfs2/segment.c:357
1 lock held by syz-executor.3/13230:
2 locks held by syz-executor.4/14544:
 #0: ffff88802ba1e0e0 (&type->s_umount_key#73){++++}-{3:3}, at: __super_lock fs/super.c:56 [inline]
 #0: ffff88802ba1e0e0 (&type->s_umount_key#73){++++}-{3:3}, at: __super_lock_excl fs/super.c:71 [inline]
 #0: ffff88802ba1e0e0 (&type->s_umount_key#73){++++}-{3:3}, at: deactivate_super+0xb5/0xf0 fs/super.c:504
 #1: ffffffff8e33a138 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:291 [inline]
 #1: ffffffff8e33a138 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x39a/0x820 kernel/rcu/tree_exp.h:939
2 locks held by kworker/u8:11/15819:
1 lock held by syz-executor.2/15926:
 #0: ffff88801d4b2bc8 (mapping.invalidate_lock#2){++++}-{3:3}, at: filemap_invalidate_lock_shared include/linux/fs.h:850 [inline]
 #0: ffff88801d4b2bc8 (mapping.invalidate_lock#2){++++}-{3:3}, at: page_cache_ra_unbounded+0xfb/0x7a0 mm/readahead.c:225
2 locks held by syz-executor.2/16806:
2 locks held by syz-executor.3/16809:

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 29 Comm: khungtaskd Not tainted 6.9.0-rc7-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/27/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:114
 nmi_cpu_backtrace+0x49c/0x4d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x198/0x320 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:223 [inline]
 watchdog+0xfde/0x1020 kernel/hung_task.c:380
 kthread+0x2f0/0x390 kernel/kthread.c:388
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 16806 Comm: syz-executor.2 Not tainted 6.9.0-rc7-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/27/2024
RIP: 0010:__rcu_read_lock+0x0/0xb0 kernel/rcu/tree_plugin.h:401
Code: 00 00 0f 84 92 fc ff ff e9 62 fc ff ff e8 38 4b fa 09 0f 1f 84 00 00 00 00 00 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 <f3> 0f 1e fa 55 41 57 41 56 53 49 be 00 00 00 00 00 fc ff df 65 4c
RSP: 0018:ffffc900031bee98 EFLAGS: 00000246
RAX: ffffffff81a48e8a RBX: 00007f78e847c9ef RCX: 0000000000040000
RDX: ffffc90010c34000 RSI: 000000000003ffff RDI: 0000000000040000
RBP: 0000000000000001 R08: ffffffff8180163f R09: ffffffff814158df
R10: 0000000000000003 R11: ffff888025635a00 R12: ffff888025635a00
R13: ffffffff8181e6d0 R14: 0000000000000001 R15: 00007f78e847c9ef
FS:  00007f78e91ad6c0(0000) GS:ffff8880b9500000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fa545490000 CR3: 0000000023e20000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <NMI>
 </NMI>
 <TASK>
 rcu_read_lock include/linux/rcupdate.h:779 [inline]
 is_bpf_text_address+0x1f/0x2b0 kernel/bpf/core.c:767
 kernel_text_address+0xa7/0xe0 kernel/extable.c:125
 __kernel_text_address+0xd/0x40 kernel/extable.c:79
 unwind_get_return_address+0x5d/0xc0 arch/x86/kernel/unwind_orc.c:369
 arch_stack_walk+0x125/0x1b0 arch/x86/kernel/stacktrace.c:26
 stack_trace_save+0x118/0x1d0 kernel/stacktrace.c:122
 save_stack+0xfb/0x1f0 mm/page_owner.c:156
 __set_page_owner+0x8d/0x810 mm/page_owner.c:325
 set_page_owner include/linux/page_owner.h:32 [inline]
 post_alloc_hook+0x1ea/0x210 mm/page_alloc.c:1534
 prep_new_page mm/page_alloc.c:1541 [inline]
 get_page_from_freelist+0x3410/0x35b0 mm/page_alloc.c:3317
 __alloc_pages+0x256/0x6c0 mm/page_alloc.c:4575
 alloc_pages_mpol+0x3e8/0x680 mm/mempolicy.c:2264
 shmem_alloc_folio mm/shmem.c:1628 [inline]
 shmem_alloc_and_add_folio+0x24d/0xbc0 mm/shmem.c:1668
 shmem_get_folio_gfp+0x82d/0x1f50 mm/shmem.c:2055
 shmem_get_folio mm/shmem.c:2160 [inline]
 shmem_write_begin+0x170/0x4d0 mm/shmem.c:2744
 generic_perform_write+0x322/0x640 mm/filemap.c:3974
 shmem_file_write_iter+0xfc/0x120 mm/shmem.c:2920
 call_write_iter include/linux/fs.h:2110 [inline]
 new_sync_write fs/read_write.c:497 [inline]
 vfs_write+0xa84/0xcb0 fs/read_write.c:590
 ksys_write+0x1a0/0x2c0 fs/read_write.c:643
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf5/0x240 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f78e847c9ef
Code: 89 54 24 18 48 89 74 24 10 89 7c 24 08 e8 b9 80 02 00 48 8b 54 24 18 48 8b 74 24 10 41 89 c0 8b 7c 24 08 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 31 44 89 c7 48 89 44 24 08 e8 0c 81 02 00 48
RSP: 002b:00007f78e91ace80 EFLAGS: 00000293 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 0000000001000000 RCX: 00007f78e847c9ef
RDX: 0000000001000000 RSI: 00007f78de200000 RDI: 0000000000000003
RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000009804
R10: 0000000000000002 R11: 0000000000000293 R12: 0000000000000003
R13: 00007f78e91acf80 R14: 00007f78e91acf40 R15: 00007f78de200000
 </TASK>

Crashes (3):
Time | Kernel | Commit | Syzkaller | Manager | Title
2024/05/06 09:22 | upstream | dd5a440a31fa | 610f2a54 | ci2-upstream-fs | INFO: task hung in nilfs_segctor_thread
2024/02/23 10:00 | upstream | 1c892cdd8fe0 | 8d446f15 | ci-upstream-kasan-gce-root | INFO: task hung in nilfs_segctor_thread
2024/02/15 00:10 | git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci | f735966ee23c | 6a8ec742 | ci-upstream-gce-arm64 | INFO: task hung in nilfs_segctor_thread