syzbot


INFO: task hung in gfs2_gl_hash_clear (4)

Status: auto-obsoleted due to no activity on 2024/10/11 16:48
Subsystems: gfs2
First crash: 109d, last: 109d
Similar bugs (7)
| Kernel | Title | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status |
| upstream | INFO: task hung in gfs2_gl_hash_clear [gfs2] | | | | 156 | 1456d | 1461d | 0/28 | auto-closed as invalid on 2021/02/02 22:47 |
| upstream | INFO: task hung in gfs2_gl_hash_clear (2) [gfs2] | | | | 1 | 877d | 875d | 0/28 | auto-closed as invalid on 2022/09/04 20:20 |
| linux-5.15 | INFO: task hung in gfs2_gl_hash_clear [origin:lts-only] | C | inconclusive | | 2168 | 23m | 456d | 0/3 | upstream: reported C repro on 2023/08/01 02:35 |
| linux-6.1 | INFO: task hung in gfs2_gl_hash_clear [missing-backport, origin:lts-only] | C | done | | 1593 | 118d | 444d | 0/3 | upstream: reported C repro on 2023/08/13 12:57 |
| linux-4.19 | INFO: task hung in gfs2_gl_hash_clear [gfs2] | | | | 1 | 666d | 666d | 0/1 | upstream: reported on 2023/01/04 01:01 |
| upstream | INFO: task can't die in gfs2_gl_hash_clear (2) [gfs2] | | | | 3 | 1287d | 1381d | 0/28 | auto-closed as invalid on 2021/07/21 03:15 |
| upstream | INFO: task hung in gfs2_gl_hash_clear (3) [gfs2] | C | error | done | 80 | 313d | 701d | 25/28 | fixed on 2024/01/30 15:47 |

Sample crash report:
INFO: task syz-executor:6153 blocked for more than 143 seconds.
      Not tainted 6.10.0-rc7-syzkaller-00254-g528dd46d0fc3 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor    state:D
 stack:19984 pid:6153  tgid:6153  ppid:1      flags:0x00004006
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5408 [inline]
 __schedule+0x1796/0x49d0 kernel/sched/core.c:6745
 __schedule_loop kernel/sched/core.c:6822 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6837
 schedule_timeout+0x1be/0x310 kernel/time/timer.c:2581
 gfs2_gl_hash_clear+0x1a8/0x470 fs/gfs2/glock.c:2266
 gfs2_put_super+0x8d0/0x940 fs/gfs2/super.c:650
 generic_shutdown_super+0x136/0x2d0 fs/super.c:642
 kill_block_super+0x44/0x90 fs/super.c:1685
 deactivate_locked_super+0xc4/0x130 fs/super.c:473
 cleanup_mnt+0x41f/0x4b0 fs/namespace.c:1267
 task_work_run+0x24f/0x310 kernel/task_work.c:180
 resume_user_mode_work include/linux/resume_user_mode.h:50 [inline]
 exit_to_user_mode_loop kernel/entry/common.c:114 [inline]
 exit_to_user_mode_prepare include/linux/entry-common.h:328 [inline]
 __syscall_exit_to_user_mode_work kernel/entry/common.c:207 [inline]
 syscall_exit_to_user_mode+0x168/0x360 kernel/entry/common.c:218
 do_syscall_64+0x100/0x230 arch/x86/entry/common.c:89
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fb986d76f07
RSP: 002b:00007ffd550331f8 EFLAGS: 00000202 ORIG_RAX: 00000000000000a6
RAX: 0000000000000000 RBX: 0000000000000064 RCX: 00007fb986d76f07
RDX: 0000000000000200 RSI: 0000000000000009 RDI: 00007ffd550343a0
RBP: 00007fb986de3515 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000100 R11: 0000000000000202 R12: 00007ffd550343a0
R13: 00007fb986de3515 R14: 0000555574ccd4a8 R15: 000000000002c445
 </TASK>

Showing all locks held in the system:
1 lock held by khungtaskd/30:
 #0: ffffffff8e333f20 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:329 [inline]
 #0: ffffffff8e333f20 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:781 [inline]
 #0: ffffffff8e333f20 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x55/0x2a0 kernel/locking/lockdep.c:6614
3 locks held by kworker/1:1/46:
 #0: ffff888015080948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3223 [inline]
 #0: ffff888015080948 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x90a/0x1830 kernel/workqueue.c:3329
 #1: ffffc90000b67d00 ((fqdir_free_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3224 [inline]
 #1: ffffc90000b67d00 ((fqdir_free_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x945/0x1830 kernel/workqueue.c:3329
 #2: ffffffff8e3391c0 (rcu_state.barrier_mutex){+.+.}-{3:3}, at: rcu_barrier+0x4c/0x530 kernel/rcu/tree.c:4448
3 locks held by kworker/u8:5/991:
 #0: ffff888015ed3148 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3223 [inline]
 #0: ffff888015ed3148 ((wq_completion)netns){+.+.}-{0:0}, at: process_scheduled_works+0x90a/0x1830 kernel/workqueue.c:3329
 #1: ffffc90003fd7d00 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3224 [inline]
 #1: ffffc90003fd7d00 (net_cleanup_work){+.+.}-{0:0}, at: process_scheduled_works+0x945/0x1830 kernel/workqueue.c:3329
 #2: ffffffff8e3391c0 (rcu_state.barrier_mutex){+.+.}-{3:3}, at: rcu_barrier+0x4c/0x530 kernel/rcu/tree.c:4448
2 locks held by getty/4831:
 #0: ffff88802ad1c0a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc90002f162f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6b5/0x1e10 drivers/tty/n_tty.c:2211
4 locks held by kworker/1:5/5130:
 #0: ffff888015080948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3223 [inline]
 #0: ffff888015080948 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x90a/0x1830 kernel/workqueue.c:3329
 #1: ffffc900041b7d00 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3224 [inline]
 #1: ffffc900041b7d00 ((linkwatch_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x945/0x1830 kernel/workqueue.c:3329
 #2: ffffffff8f5d82c8 (rtnl_mutex){+.+.}-{3:3}, at: linkwatch_event+0xe/0x60 net/core/link_watch.c:276
 #3: ffffffff8e3392f8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:291 [inline]
 #3: ffffffff8e3392f8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x381/0x830 kernel/rcu/tree_exp.h:939
3 locks held by kworker/u8:8/5930:
1 lock held by syz-executor/6153:
 #0: ffff888065c660e0 (&type->s_umount_key#78){++++}-{3:3}, at: __super_lock fs/super.c:56 [inline]
 #0: ffff888065c660e0 (&type->s_umount_key#78){++++}-{3:3}, at: __super_lock_excl fs/super.c:71 [inline]
 #0: ffff888065c660e0 (&type->s_umount_key#78){++++}-{3:3}, at: deactivate_super+0xb5/0xf0 fs/super.c:505
1 lock held by syz.2.301/8358:
 #0: ffff888065c660e0 (&type->s_umount_key#78){++++}-{3:3}, at: __super_lock fs/super.c:58 [inline]
 #0: ffff888065c660e0 (&type->s_umount_key#78){++++}-{3:3}, at: super_lock+0x27c/0x400 fs/super.c:120
1 lock held by syz.2.301/8360:
 #0: ffff888065c660e0 (&type->s_umount_key#78){++++}-{3:3}, at: __super_lock fs/super.c:58 [inline]
 #0: ffff888065c660e0 (&type->s_umount_key#78){++++}-{3:3}, at: super_lock+0x27c/0x400 fs/super.c:120
1 lock held by syz.0.277/8647:
 #0: ffff888065c660e0 (&type->s_umount_key#78){++++}-{3:3}, at: __super_lock fs/super.c:58 [inline]
 #0: ffff888065c660e0 (&type->s_umount_key#78){++++}-{3:3}, at: super_lock+0x27c/0x400 fs/super.c:120
1 lock held by syz.4.336/8744:
 #0: ffff888065c660e0 (&type->s_umount_key#78){++++}-{3:3}, at: __super_lock fs/super.c:58 [inline]
 #0: ffff888065c660e0 (&type->s_umount_key#78){++++}-{3:3}, at: super_lock+0x27c/0x400 fs/super.c:120
1 lock held by syz.1.350/8839:
 #0: ffff888065c660e0 (&type->s_umount_key#78){++++}-{3:3}, at: __super_lock fs/super.c:58 [inline]
 #0: ffff888065c660e0 (&type->s_umount_key#78){++++}-{3:3}, at: super_lock+0x27c/0x400 fs/super.c:120
1 lock held by syz.4.430/9472:
 #0: ffff888065c660e0 (&type->s_umount_key#78){++++}-{3:3}, at: __super_lock fs/super.c:58 [inline]
 #0: ffff888065c660e0 (&type->s_umount_key#78){++++}-{3:3}, at: super_lock+0x27c/0x400 fs/super.c:120
1 lock held by syz.1.446/9541:
 #0: ffff888065c660e0 (&type->s_umount_key#78){++++}-{3:3}, at: __super_lock fs/super.c:56 [inline]
 #0: ffff888065c660e0 (&type->s_umount_key#78){++++}-{3:3}, at: super_lock+0x196/0x400 fs/super.c:120
1 lock held by syz.0.450/9554:
 #0: ffff888065c660e0 (&type->s_umount_key#78){++++}-{3:3}, at: __super_lock fs/super.c:58 [inline]
 #0: ffff888065c660e0 (&type->s_umount_key#78){++++}-{3:3}, at: super_lock+0x27c/0x400 fs/super.c:120
1 lock held by syz-executor/9574:
 #0: ffffffff8f5d82c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnl_lock net/core/rtnetlink.c:79 [inline]
 #0: ffffffff8f5d82c8 (rtnl_mutex){+.+.}-{3:3}, at: rtnetlink_rcv_msg+0x842/0x1180 net/core/rtnetlink.c:6632
1 lock held by syz-executor/9578:
 #0: ffffffff8e3392f8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:323 [inline]
 #0: ffffffff8e3392f8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x451/0x830 kernel/rcu/tree_exp.h:939
1 lock held by rm/9620:
 #0: ffff8880b943e758 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x2a/0x140 kernel/sched/core.c:559

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 30 Comm: khungtaskd Not tainted 6.10.0-rc7-syzkaller-00254-g528dd46d0fc3 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 06/07/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:114
 nmi_cpu_backtrace+0x49c/0x4d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x198/0x320 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:162 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:223 [inline]
 watchdog+0xfde/0x1020 kernel/hung_task.c:379
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 5125 Comm: kworker/1:4 Not tainted 6.10.0-rc7-syzkaller-00254-g528dd46d0fc3 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 06/07/2024
Workqueue: events drain_vmap_area_work
RIP: 0010:bytes_is_nonzero mm/kasan/generic.c:87 [inline]
RIP: 0010:memory_is_nonzero mm/kasan/generic.c:104 [inline]
RIP: 0010:memory_is_poisoned_n mm/kasan/generic.c:129 [inline]
RIP: 0010:memory_is_poisoned mm/kasan/generic.c:161 [inline]
RIP: 0010:check_region_inline mm/kasan/generic.c:180 [inline]
RIP: 0010:kasan_check_range+0x86/0x290 mm/kasan/generic.c:189
Code: 00 fc ff df 4f 8d 3c 31 4c 89 fd 4c 29 dd 48 83 fd 10 7f 29 48 85 ed 0f 84 3e 01 00 00 4c 89 cd 48 f7 d5 48 01 dd 41 80 3b 00 <0f> 85 c9 01 00 00 49 ff c3 48 ff c5 75 ee e9 1e 01 00 00 45 89 dc
RSP: 0018:ffffc90004187488 EFLAGS: 00000046
RAX: 0000000000000001 RBX: 1ffffffff25eecb0 RCX: ffffffff8172d9da
RDX: 0000000000000000 RSI: 0000000000000008 RDI: ffffffff92f76580
RBP: ffffffffffffffff R08: ffffffff92f76587 R09: 1ffffffff25eecb0
R10: dffffc0000000000 R11: fffffbfff25eecb0 R12: ffff888065bc4778
R13: dffffc0000000000 R14: dffffc0000000001 R15: fffffbfff25eecb1
FS:  0000000000000000(0000) GS:ffff8880b9500000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000561bb7e75950 CR3: 0000000048056000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <NMI>
 </NMI>
 <TASK>
 instrument_atomic_read include/linux/instrumented.h:68 [inline]
 _test_bit include/asm-generic/bitops/instrumented-non-atomic.h:141 [inline]
 hlock_class kernel/locking/lockdep.c:228 [inline]
 mark_lock+0x9a/0x350 kernel/locking/lockdep.c:4656
 mark_held_locks kernel/locking/lockdep.c:4274 [inline]
 __trace_hardirqs_on_caller kernel/locking/lockdep.c:4300 [inline]
 lockdep_hardirqs_on_prepare+0x3a5/0x780 kernel/locking/lockdep.c:4359
 trace_hardirqs_on+0x28/0x40 kernel/trace/trace_preemptirq.c:61
 __raw_spin_unlock_irqrestore include/linux/spinlock_api_smp.h:151 [inline]
 _raw_spin_unlock_irqrestore+0x8f/0x140 kernel/locking/spinlock.c:194
 __debug_check_no_obj_freed lib/debugobjects.c:998 [inline]
 debug_check_no_obj_freed+0x561/0x580 lib/debugobjects.c:1019
 free_pages_prepare mm/page_alloc.c:1100 [inline]
 free_unref_page+0x38a/0xea0 mm/page_alloc.c:2588
 kasan_depopulate_vmalloc_pte+0x74/0x90 mm/kasan/shadow.c:408
 apply_to_pte_range mm/memory.c:2746 [inline]
 apply_to_pmd_range mm/memory.c:2790 [inline]
 apply_to_pud_range mm/memory.c:2826 [inline]
 apply_to_p4d_range mm/memory.c:2862 [inline]
 __apply_to_page_range+0x8a8/0xe50 mm/memory.c:2896
 kasan_release_vmalloc+0x9a/0xb0 mm/kasan/shadow.c:525
 purge_vmap_node+0x3e3/0x770 mm/vmalloc.c:2207
 __purge_vmap_area_lazy+0x708/0xae0 mm/vmalloc.c:2289
 drain_vmap_area_work+0x27/0x40 mm/vmalloc.c:2323
 process_one_work kernel/workqueue.c:3248 [inline]
 process_scheduled_works+0xa2c/0x1830 kernel/workqueue.c:3329
 worker_thread+0x86d/0xd50 kernel/workqueue.c:3409
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
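
The report above shows the unmounting task (syz-executor/6153) in state D inside gfs2_gl_hash_clear(), called from gfs2_put_super() while it holds the superblock's s_umount lock; the other syz.* tasks in the lock listing are all queued behind that same s_umount. The function is waiting for every remaining glock on the filesystem to be disposed of, so the hang means at least one glock reference is never being dropped. A paraphrased sketch of that wait, from memory rather than verbatim 6.10 source (identifiers such as sd_kill_wait and the timeout value may differ between kernel versions):

```
/*
 * Sketch of gfs2_gl_hash_clear() (fs/gfs2/glock.c) -- paraphrased,
 * not verbatim kernel source; names/constants may vary by version.
 */
void gfs2_gl_hash_clear(struct gfs2_sbd *sdp)
{
	set_bit(SDF_SKIP_DLM_UNLOCK, &sdp->sd_flags);
	flush_workqueue(glock_workqueue);
	/* Queue every remaining glock for teardown. */
	glock_hash_walk(clear_glock, sdp);
	flush_workqueue(glock_workqueue);
	/*
	 * Sleep (uninterruptibly, state D) until the count of glocks
	 * awaiting disposal reaches zero.  If a glock reference leaks,
	 * the condition never becomes true and this is the
	 * schedule_timeout() frame the hung-task watchdog flags above.
	 */
	wait_event_timeout(sdp->sd_kill_wait,
			   atomic_read(&sdp->sd_glock_disposal) == 0,
			   600 * HZ);
	/* Dump any glocks that survived, for debugging. */
	glock_hash_walk(dump_glock_func, sdp);
}
```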

Crashes (1):
| Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title |
| 2024/07/13 16:39 | upstream | 528dd46d0fc3 | eaeb5c15 | .config | console log | report | | | info | disk image, vmlinux, kernel image | ci2-upstream-fs | INFO: task hung in gfs2_gl_hash_clear |
* Struck-through repros no longer work on HEAD.