syzbot


INFO: task hung in queue_log_writer

Status: auto-obsoleted due to no activity on 2023/12/25 14:05
Subsystems: reiserfs
Reported-by: syzbot+d7c9b7185ced98364b13@syzkaller.appspotmail.com
First crash: 519d, last: 361d
Cause bisection: the issue happens on the oldest tested release (bisect log)
Crash: general protection fault in reiserfs_security_init (log)
Repro: C syz .config
Fix bisection: the issue occurs on the latest tested release (bisect log)
Discussions (1)
Title Replies (including bot) Last reply
[syzbot] [reiserfs?] INFO: task hung in queue_log_writer 0 (2) 2023/05/02 08:04
Similar bugs (2)
Kernel Title Repro Cause bisect Fix bisect Count Last Reported Patched Status
linux-4.19 INFO: task hung in queue_log_writer reiserfs fat C 8 473d 502d 0/1 upstream: reported C repro on 2023/01/08 18:46
linux-4.14 INFO: task hung in queue_log_writer reiserfs 1 473d 473d 0/1 upstream: reported on 2023/02/06 08:12
Last patch testing requests (10)
Created Duration User Patch Repo Result
2023/12/25 12:55 22m retest repro linux-next OK log
2023/12/25 12:55 22m retest repro linux-next OK log
2023/12/25 12:55 23m retest repro linux-next OK log
2023/12/25 11:39 26m retest repro linux-next OK log
2023/12/25 11:39 23m retest repro linux-next OK log
2023/11/17 04:46 22m retest repro upstream OK log
2023/10/16 12:46 26m retest repro upstream OK log
2023/10/16 10:39 40m retest repro linux-next error OK
2023/10/16 10:39 21m retest repro linux-next error OK
2023/10/16 10:39 21m retest repro linux-next error OK

Sample crash report:
INFO: task kworker/u4:5:2413 blocked for more than 143 seconds.
      Not tainted 6.3.0-syzkaller-13466-gfc4354c6e5c2 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/u4:5    state:D stack:24504 pid:2413  ppid:2      flags:0x00004000
Workqueue: writeback wb_workfn (flush-7:4)
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5343 [inline]
 __schedule+0xc9a/0x5880 kernel/sched/core.c:6669
 schedule+0xde/0x1a0 kernel/sched/core.c:6745
 queue_log_writer+0x293/0x2f0 fs/reiserfs/journal.c:2980
 check_journal_end fs/reiserfs/journal.c:3674 [inline]
 do_journal_end+0x908/0x4af0 fs/reiserfs/journal.c:4040
 reiserfs_write_inode+0x27e/0x2d0 fs/reiserfs/inode.c:1779
 write_inode fs/fs-writeback.c:1456 [inline]
 __writeback_single_inode+0x9f2/0xdb0 fs/fs-writeback.c:1668
 writeback_sb_inodes+0x54d/0xe70 fs/fs-writeback.c:1894
 wb_writeback+0x294/0xa50 fs/fs-writeback.c:2068
 wb_do_writeback fs/fs-writeback.c:2211 [inline]
 wb_workfn+0x2a5/0xfc0 fs/fs-writeback.c:2251
 process_one_work+0x99a/0x15e0 kernel/workqueue.c:2405
 worker_thread+0x67d/0x10c0 kernel/workqueue.c:2552
 kthread+0x344/0x440 kernel/kthread.c:379
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308
 </TASK>
INFO: task kworker/u4:6:4258 blocked for more than 143 seconds.
      Not tainted 6.3.0-syzkaller-13466-gfc4354c6e5c2 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/u4:6    state:D stack:24392 pid:4258  ppid:2      flags:0x00004000
Workqueue: writeback wb_workfn (flush-7:3)
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5343 [inline]
 __schedule+0xc9a/0x5880 kernel/sched/core.c:6669
 schedule+0xde/0x1a0 kernel/sched/core.c:6745
 queue_log_writer+0x293/0x2f0 fs/reiserfs/journal.c:2980
 check_journal_end fs/reiserfs/journal.c:3674 [inline]
 do_journal_end+0x908/0x4af0 fs/reiserfs/journal.c:4040
 reiserfs_write_inode+0x27e/0x2d0 fs/reiserfs/inode.c:1779
 write_inode fs/fs-writeback.c:1456 [inline]
 __writeback_single_inode+0x9f2/0xdb0 fs/fs-writeback.c:1668
 writeback_sb_inodes+0x54d/0xe70 fs/fs-writeback.c:1894
 wb_writeback+0x294/0xa50 fs/fs-writeback.c:2068
 wb_do_writeback fs/fs-writeback.c:2211 [inline]
 wb_workfn+0x2a5/0xfc0 fs/fs-writeback.c:2251
 process_one_work+0x99a/0x15e0 kernel/workqueue.c:2405
 worker_thread+0x67d/0x10c0 kernel/workqueue.c:2552
 kthread+0x344/0x440 kernel/kthread.c:379
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308
 </TASK>
INFO: task syz-executor351:6222 blocked for more than 143 seconds.
      Not tainted 6.3.0-syzkaller-13466-gfc4354c6e5c2 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor351 state:D stack:24880 pid:6222  ppid:5001   flags:0x00004006
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5343 [inline]
 __schedule+0xc9a/0x5880 kernel/sched/core.c:6669
 schedule+0xde/0x1a0 kernel/sched/core.c:6745
 bit_wait+0x16/0xe0 kernel/sched/wait_bit.c:199
 __wait_on_bit+0x64/0x180 kernel/sched/wait_bit.c:49
 __inode_wait_for_writeback+0x153/0x1f0 fs/fs-writeback.c:1477
 inode_wait_for_writeback+0x26/0x40 fs/fs-writeback.c:1489
 evict+0x2b7/0x6b0 fs/inode.c:662
 iput_final fs/inode.c:1747 [inline]
 iput.part.0+0x50a/0x740 fs/inode.c:1773
 iput+0x5c/0x80 fs/inode.c:1763
 dentry_unlink_inode+0x2b1/0x460 fs/dcache.c:401
 d_delete fs/dcache.c:2565 [inline]
 d_delete+0x16f/0x1c0 fs/dcache.c:2554
 xattr_unlink+0x139/0x190 fs/reiserfs/xattr.c:97
 lookup_and_delete_xattr fs/reiserfs/xattr.c:495 [inline]
 reiserfs_xattr_set_handle+0x7bd/0xb00 fs/reiserfs/xattr.c:530
 reiserfs_xattr_set+0x454/0x5b0 fs/reiserfs/xattr.c:634
 trusted_set+0xa7/0xd0 fs/reiserfs/xattr_trusted.c:31
 __vfs_removexattr+0x155/0x1c0 fs/xattr.c:519
 __vfs_removexattr_locked+0x1b0/0x440 fs/xattr.c:554
 vfs_removexattr+0xcf/0x260 fs/xattr.c:576
 ovl_do_removexattr fs/overlayfs/overlayfs.h:273 [inline]
 ovl_removexattr fs/overlayfs/overlayfs.h:281 [inline]
 ovl_make_workdir fs/overlayfs/super.c:1353 [inline]
 ovl_get_workdir fs/overlayfs/super.c:1436 [inline]
 ovl_fill_super+0x6ec5/0x7270 fs/overlayfs/super.c:1992
 mount_nodev+0x64/0x120 fs/super.c:1426
 legacy_get_tree+0x109/0x220 fs/fs_context.c:610
 vfs_get_tree+0x8d/0x350 fs/super.c:1510
 do_new_mount fs/namespace.c:3039 [inline]
 path_mount+0x134b/0x1e40 fs/namespace.c:3369
 do_mount fs/namespace.c:3382 [inline]
 __do_sys_mount fs/namespace.c:3591 [inline]
 __se_sys_mount fs/namespace.c:3568 [inline]
 __x64_sys_mount+0x283/0x300 fs/namespace.c:3568
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x39/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f38e1becaf9
RSP: 002b:00007f38e1b982f8 EFLAGS: 00000246 ORIG_RAX: 00000000000000a5
RAX: ffffffffffffffda RBX: 00007f38e1c727a0 RCX: 00007f38e1becaf9
RDX: 0000000020000080 RSI: 00000000200000c0 RDI: 0000000000000000
RBP: 00007f38e1c3f2b8 R08: 0000000020000480 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0079616c7265766f
R13: d5e172a4510865ec R14: 9837512483e3bdcd R15: 00007f38e1c727a8
 </TASK>
INFO: task syz-executor351:6231 blocked for more than 144 seconds.
      Not tainted 6.3.0-syzkaller-13466-gfc4354c6e5c2 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor351 state:D stack:28496 pid:6231  ppid:5001   flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5343 [inline]
 __schedule+0xc9a/0x5880 kernel/sched/core.c:6669
 schedule+0xde/0x1a0 kernel/sched/core.c:6745
 wb_wait_for_completion+0x182/0x240 fs/fs-writeback.c:192
 sync_inodes_sb+0x1aa/0xa60 fs/fs-writeback.c:2730
 sync_filesystem.part.0+0xe6/0x1d0 fs/sync.c:64
 sync_filesystem+0x8f/0xc0 fs/sync.c:43
 reiserfs_remount+0x129/0x1650 fs/reiserfs/super.c:1445
 legacy_reconfigure+0x119/0x180 fs/fs_context.c:633
 reconfigure_super+0x40c/0xa30 fs/super.c:956
 do_remount fs/namespace.c:2701 [inline]
 path_mount+0x1846/0x1e40 fs/namespace.c:3361
 do_mount fs/namespace.c:3382 [inline]
 __do_sys_mount fs/namespace.c:3591 [inline]
 __se_sys_mount fs/namespace.c:3568 [inline]
 __x64_sys_mount+0x283/0x300 fs/namespace.c:3568
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x39/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f38e1bedfba
RSP: 002b:00007f38d9b77118 EFLAGS: 00000286 ORIG_RAX: 00000000000000a5
RAX: ffffffffffffffda RBX: 00007f38d9b776b8 RCX: 00007f38e1bedfba
RDX: 00000000200001c0 RSI: 0000000020000100 RDI: 0000000000000000
RBP: 00000000ffffffff R08: 00007f38d9b771b0 R09: 0000000000000000
R10: 0000000001a484bc R11: 0000000000000286 R12: 00000000200001c0
R13: 0000000020000100 R14: 0000000000000000 R15: 00000000200006c0
 </TASK>
INFO: task syz-executor351:6396 blocked for more than 144 seconds.
      Not tainted 6.3.0-syzkaller-13466-gfc4354c6e5c2 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor351 state:D stack:24728 pid:6396  ppid:5002   flags:0x00004006
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5343 [inline]
 __schedule+0xc9a/0x5880 kernel/sched/core.c:6669
 schedule+0xde/0x1a0 kernel/sched/core.c:6745
 bit_wait+0x16/0xe0 kernel/sched/wait_bit.c:199
 __wait_on_bit+0x64/0x180 kernel/sched/wait_bit.c:49
 __inode_wait_for_writeback+0x153/0x1f0 fs/fs-writeback.c:1477
 inode_wait_for_writeback+0x26/0x40 fs/fs-writeback.c:1489
 evict+0x2b7/0x6b0 fs/inode.c:662
 iput_final fs/inode.c:1747 [inline]
 iput.part.0+0x50a/0x740 fs/inode.c:1773
 iput+0x5c/0x80 fs/inode.c:1763
 dentry_unlink_inode+0x2b1/0x460 fs/dcache.c:401
 d_delete fs/dcache.c:2565 [inline]
 d_delete+0x16f/0x1c0 fs/dcache.c:2554
 xattr_unlink+0x139/0x190 fs/reiserfs/xattr.c:97
 lookup_and_delete_xattr fs/reiserfs/xattr.c:495 [inline]
 reiserfs_xattr_set_handle+0x7bd/0xb00 fs/reiserfs/xattr.c:530
 reiserfs_xattr_set+0x454/0x5b0 fs/reiserfs/xattr.c:634
 trusted_set+0xa7/0xd0 fs/reiserfs/xattr_trusted.c:31
 __vfs_removexattr+0x155/0x1c0 fs/xattr.c:519
 __vfs_removexattr_locked+0x1b0/0x440 fs/xattr.c:554
 vfs_removexattr+0xcf/0x260 fs/xattr.c:576
 ovl_do_removexattr fs/overlayfs/overlayfs.h:273 [inline]
 ovl_removexattr fs/overlayfs/overlayfs.h:281 [inline]
 ovl_make_workdir fs/overlayfs/super.c:1353 [inline]
 ovl_get_workdir fs/overlayfs/super.c:1436 [inline]
 ovl_fill_super+0x6ec5/0x7270 fs/overlayfs/super.c:1992
 mount_nodev+0x64/0x120 fs/super.c:1426
 legacy_get_tree+0x109/0x220 fs/fs_context.c:610
 vfs_get_tree+0x8d/0x350 fs/super.c:1510
 do_new_mount fs/namespace.c:3039 [inline]
 path_mount+0x134b/0x1e40 fs/namespace.c:3369
 do_mount fs/namespace.c:3382 [inline]
 __do_sys_mount fs/namespace.c:3591 [inline]
 __se_sys_mount fs/namespace.c:3568 [inline]
 __x64_sys_mount+0x283/0x300 fs/namespace.c:3568
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x39/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f38e1becaf9
RSP: 002b:00007f38e1b982f8 EFLAGS: 00000246 ORIG_RAX: 00000000000000a5
RAX: ffffffffffffffda RBX: 00007f38e1c727a0 RCX: 00007f38e1becaf9
RDX: 0000000020000080 RSI: 00000000200000c0 RDI: 0000000000000000
RBP: 00007f38e1c3f2b8 R08: 0000000020000480 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0079616c7265766f
R13: d5e172a4510865ec R14: 9837512483e3bdcd R15: 00007f38e1c727a8
 </TASK>
INFO: task syz-executor351:6404 blocked for more than 145 seconds.
      Not tainted 6.3.0-syzkaller-13466-gfc4354c6e5c2 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor351 state:D stack:27696 pid:6404  ppid:5002   flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5343 [inline]
 __schedule+0xc9a/0x5880 kernel/sched/core.c:6669
 schedule+0xde/0x1a0 kernel/sched/core.c:6745
 wb_wait_for_completion+0x182/0x240 fs/fs-writeback.c:192
 sync_inodes_sb+0x1aa/0xa60 fs/fs-writeback.c:2730
 sync_filesystem.part.0+0xe6/0x1d0 fs/sync.c:64
 sync_filesystem+0x8f/0xc0 fs/sync.c:43
 reiserfs_remount+0x129/0x1650 fs/reiserfs/super.c:1445
 legacy_reconfigure+0x119/0x180 fs/fs_context.c:633
 reconfigure_super+0x40c/0xa30 fs/super.c:956
 do_remount fs/namespace.c:2701 [inline]
 path_mount+0x1846/0x1e40 fs/namespace.c:3361
 do_mount fs/namespace.c:3382 [inline]
 __do_sys_mount fs/namespace.c:3591 [inline]
 __se_sys_mount fs/namespace.c:3568 [inline]
 __x64_sys_mount+0x283/0x300 fs/namespace.c:3568
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x39/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f38e1bedfba
RSP: 002b:00007f38d9b77118 EFLAGS: 00000286 ORIG_RAX: 00000000000000a5
RAX: ffffffffffffffda RBX: 00007f38d9b776b8 RCX: 00007f38e1bedfba
RDX: 00000000200001c0 RSI: 0000000020000100 RDI: 0000000000000000
RBP: 00000000ffffffff R08: 00007f38d9b771b0 R09: 0000000000000000
R10: 0000000001a484bc R11: 0000000000000286 R12: 00000000200001c0
R13: 0000000020000100 R14: 0000000000000000 R15: 00000000200006c0
 </TASK>

Showing all locks held in the system:
2 locks held by kworker/u4:1/12:
 #0: ffff88814164d938 ((wq_completion)writeback){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff88814164d938 ((wq_completion)writeback){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff88814164d938 ((wq_completion)writeback){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1324 [inline]
 #0: ffff88814164d938 ((wq_completion)writeback){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:643 [inline]
 #0: ffff88814164d938 ((wq_completion)writeback){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:670 [inline]
 #0: ffff88814164d938 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work+0x883/0x15e0 kernel/workqueue.c:2376
 #1: ffffc90000117db0 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work+0x8b7/0x15e0 kernel/workqueue.c:2380
1 lock held by rcu_tasks_kthre/13:
 #0: ffffffff8c798430 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x31/0xd80 kernel/rcu/tasks.h:518
1 lock held by rcu_tasks_trace/14:
 #0: ffffffff8c798130 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x31/0xd80 kernel/rcu/tasks.h:518
1 lock held by khungtaskd/28:
 #0: ffffffff8c799040 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x55/0x340 kernel/locking/lockdep.c:6545
2 locks held by kworker/u4:5/2413:
 #0: ffff88814164d938 ((wq_completion)writeback){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff88814164d938 ((wq_completion)writeback){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff88814164d938 ((wq_completion)writeback){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1324 [inline]
 #0: ffff88814164d938 ((wq_completion)writeback){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:643 [inline]
 #0: ffff88814164d938 ((wq_completion)writeback){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:670 [inline]
 #0: ffff88814164d938 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work+0x883/0x15e0 kernel/workqueue.c:2376
 #1: ffffc9000ae6fdb0 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work+0x8b7/0x15e0 kernel/workqueue.c:2380
2 locks held by kworker/0:3/3478:
 #0: ffff888012472538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff888012472538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff888012472538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1324 [inline]
 #0: ffff888012472538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:643 [inline]
 #0: ffff888012472538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:670 [inline]
 #0: ffff888012472538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work+0x883/0x15e0 kernel/workqueue.c:2376
 #1: ffffc9000d887db0 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_one_work+0x8b7/0x15e0 kernel/workqueue.c:2380
2 locks held by kworker/u4:6/4258:
 #0: ffff88814164d938 ((wq_completion)writeback){+.+.}-{0:0}, at: arch_atomic64_set arch/x86/include/asm/atomic64_64.h:34 [inline]
 #0: ffff88814164d938 ((wq_completion)writeback){+.+.}-{0:0}, at: arch_atomic_long_set include/linux/atomic/atomic-long.h:41 [inline]
 #0: ffff88814164d938 ((wq_completion)writeback){+.+.}-{0:0}, at: atomic_long_set include/linux/atomic/atomic-instrumented.h:1324 [inline]
 #0: ffff88814164d938 ((wq_completion)writeback){+.+.}-{0:0}, at: set_work_data kernel/workqueue.c:643 [inline]
 #0: ffff88814164d938 ((wq_completion)writeback){+.+.}-{0:0}, at: set_work_pool_and_clear_pending kernel/workqueue.c:670 [inline]
 #0: ffff88814164d938 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work+0x883/0x15e0 kernel/workqueue.c:2376
 #1: ffffc900031dfdb0 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work+0x8b7/0x15e0 kernel/workqueue.c:2380
2 locks held by getty/4750:
 #0: ffff888028787098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x26/0x80 drivers/tty/tty_ldisc.c:243
 #1: ffffc900015c02f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0xef4/0x13e0 drivers/tty/n_tty.c:2176
3 locks held by strace-static-x/4994:
 #0: ffff8880b993c5d8 (&rq->__lock){-.-.}-{2:2}, at: mm_access+0x4c/0x150 kernel/fork.c:1561
 #1: ffffffff8c799040 (rcu_read_lock){....}-{1:2}, at: __skb_pull include/linux/skbuff.h:2637 [inline]
 #1: ffffffff8c799040 (rcu_read_lock){....}-{1:2}, at: ip_local_deliver_finish+0x20a/0x520 net/ipv4/ip_input.c:230
 #2: ffff8880b993c5d8 (&rq->__lock){-.-.}-{2:2}, at: wait_task_stopped kernel/exit.c:1285 [inline]
 #2: ffff8880b993c5d8 (&rq->__lock){-.-.}-{2:2}, at: wait_consider_task+0x611/0x3ce0 kernel/exit.c:1471
2 locks held by syz-executor351/5003:
 #0: ffff88807bb0c0e0 (&type->s_umount_key#42){+.+.}-{3:3}, at: deactivate_super+0xa9/0xd0 fs/super.c:361
 #1: ffffffff8c7a44b8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:293 [inline]
 #1: ffffffff8c7a44b8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x64a/0x770 kernel/rcu/tree_exp.h:992
4 locks held by syz-executor351/6222:
 #0: ffff888077c5c0e0 (&type->s_umount_key#41/1){+.+.}-{3:3}, at: alloc_super+0x22e/0xb60 fs/super.c:228
 #1: ffff88807b6cc460 (sb_writers#9){.+.+}-{0:0}, at: ovl_make_workdir fs/overlayfs/super.c:1282 [inline]
 #1: ffff88807b6cc460 (sb_writers#9){.+.+}-{0:0}, at: ovl_get_workdir fs/overlayfs/super.c:1436 [inline]
 #1: ffff88807b6cc460 (sb_writers#9){.+.+}-{0:0}, at: ovl_fill_super+0x1c5e/0x7270 fs/overlayfs/super.c:1992
 #2: ffff8880787cb7e0 (&type->i_mutex_dir_key#6){++++}-{3:3}, at: inode_lock include/linux/fs.h:775 [inline]
 #2: ffff8880787cb7e0 (&type->i_mutex_dir_key#6){++++}-{3:3}, at: vfs_removexattr+0xbb/0x260 fs/xattr.c:575
 #3: ffff888070895260 (&type->i_mutex_dir_key#6/3){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:810 [inline]
 #3: ffff888070895260 (&type->i_mutex_dir_key#6/3){+.+.}-{3:3}, at: lookup_and_delete_xattr fs/reiserfs/xattr.c:487 [inline]
 #3: ffff888070895260 (&type->i_mutex_dir_key#6/3){+.+.}-{3:3}, at: reiserfs_xattr_set_handle+0x72c/0xb00 fs/reiserfs/xattr.c:530
2 locks held by syz-executor351/6231:
 #0: ffff88807b6cc0e0 (&type->s_umount_key#42){+.+.}-{3:3}, at: do_remount fs/namespace.c:2698 [inline]
 #0: ffff88807b6cc0e0 (&type->s_umount_key#42){+.+.}-{3:3}, at: path_mount+0x1401/0x1e40 fs/namespace.c:3361
 #1: ffff88801f1cc7d0 (&bdi->wb_switch_rwsem){+.+.}-{3:3}, at: bdi_down_write_wb_switch_rwsem fs/fs-writeback.c:364 [inline]
 #1: ffff88801f1cc7d0 (&bdi->wb_switch_rwsem){+.+.}-{3:3}, at: sync_inodes_sb+0x190/0xa60 fs/fs-writeback.c:2728
4 locks held by syz-executor351/6396:
 #0: ffff88806fffa0e0 (&type->s_umount_key#41/1){+.+.}-{3:3}, at: alloc_super+0x22e/0xb60 fs/super.c:228
 #1: ffff88802c614460 (sb_writers#9){.+.+}-{0:0}, at: ovl_make_workdir fs/overlayfs/super.c:1282 [inline]
 #1: ffff88802c614460 (sb_writers#9){.+.+}-{0:0}, at: ovl_get_workdir fs/overlayfs/super.c:1436 [inline]
 #1: ffff88802c614460 (sb_writers#9){.+.+}-{0:0}, at: ovl_fill_super+0x1c5e/0x7270 fs/overlayfs/super.c:1992
 #2: ffff8880708bd260 (&type->i_mutex_dir_key#6){++++}-{3:3}, at: inode_lock include/linux/fs.h:775 [inline]
 #2: ffff8880708bd260 (&type->i_mutex_dir_key#6){++++}-{3:3}, at: vfs_removexattr+0xbb/0x260 fs/xattr.c:575
 #3: ffff8880787ce640 (&type->i_mutex_dir_key#6/3){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:810 [inline]
 #3: ffff8880787ce640 (&type->i_mutex_dir_key#6/3){+.+.}-{3:3}, at: lookup_and_delete_xattr fs/reiserfs/xattr.c:487 [inline]
 #3: ffff8880787ce640 (&type->i_mutex_dir_key#6/3){+.+.}-{3:3}, at: reiserfs_xattr_set_handle+0x72c/0xb00 fs/reiserfs/xattr.c:530
2 locks held by syz-executor351/6404:
 #0: ffff88802c6140e0 (&type->s_umount_key#42){+.+.}-{3:3}, at: do_remount fs/namespace.c:2698 [inline]
 #0: ffff88802c6140e0 (&type->s_umount_key#42){+.+.}-{3:3}, at: path_mount+0x1401/0x1e40 fs/namespace.c:3361
 #1: ffff88801f1da7d0 (&bdi->wb_switch_rwsem){+.+.}-{3:3}, at: bdi_down_write_wb_switch_rwsem fs/fs-writeback.c:364 [inline]
 #1: ffff88801f1da7d0 (&bdi->wb_switch_rwsem){+.+.}-{3:3}, at: sync_inodes_sb+0x190/0xa60 fs/fs-writeback.c:2728
5 locks held by kworker/u4:7/7985:
4 locks held by syz-executor351/9883:
 #0: ffff888079f840e0 (&type->s_umount_key#41/1){+.+.}-{3:3}, at: alloc_super+0x22e/0xb60 fs/super.c:228
 #1: ffff888078a74460 (sb_writers#9){.+.+}-{0:0}, at: ovl_make_workdir fs/overlayfs/super.c:1282 [inline]
 #1: ffff888078a74460 (sb_writers#9){.+.+}-{0:0}, at: ovl_get_workdir fs/overlayfs/super.c:1436 [inline]
 #1: ffff888078a74460 (sb_writers#9){.+.+}-{0:0}, at: ovl_fill_super+0x1c5e/0x7270 fs/overlayfs/super.c:1992
 #2: ffff8880757aaaa0 (&type->i_mutex_dir_key#6){++++}-{3:3}, at: inode_lock include/linux/fs.h:775 [inline]
 #2: ffff8880757aaaa0 (&type->i_mutex_dir_key#6){++++}-{3:3}, at: vfs_removexattr+0xbb/0x260 fs/xattr.c:575
 #3: ffff8880757a82e0 (&type->i_mutex_dir_key#6/3){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:810 [inline]
 #3: ffff8880757a82e0 (&type->i_mutex_dir_key#6/3){+.+.}-{3:3}, at: lookup_and_delete_xattr fs/reiserfs/xattr.c:487 [inline]
 #3: ffff8880757a82e0 (&type->i_mutex_dir_key#6/3){+.+.}-{3:3}, at: reiserfs_xattr_set_handle+0x72c/0xb00 fs/reiserfs/xattr.c:530
2 locks held by syz-executor351/9895:
 #0: ffff888078a740e0 (&type->s_umount_key#42){+.+.}-{3:3}, at: do_remount fs/namespace.c:2698 [inline]
 #0: ffff888078a740e0 (&type->s_umount_key#42){+.+.}-{3:3}, at: path_mount+0x1401/0x1e40 fs/namespace.c:3361
 #1: ffff88801f16c7d0 (&bdi->wb_switch_rwsem){+.+.}-{3:3}, at: bdi_down_write_wb_switch_rwsem fs/fs-writeback.c:364 [inline]
 #1: ffff88801f16c7d0 (&bdi->wb_switch_rwsem){+.+.}-{3:3}, at: sync_inodes_sb+0x190/0xa60 fs/fs-writeback.c:2728
2 locks held by syz-executor351/10009:
 #0: ffff88806f5500e0 (&type->s_umount_key#41/1){+.+.}-{3:3}, at: alloc_super+0x22e/0xb60 fs/super.c:228
 #1: ffffffff8c7a44b8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:325 [inline]
 #1: ffffffff8c7a44b8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x3e8/0x770 kernel/rcu/tree_exp.h:992
1 lock held by syz-executor351/10010:
 #0: ffff8880705760e0 (&type->s_umount_key#41/1){+.+.}-{3:3}, at: alloc_super+0x22e/0xb60 fs/super.c:228

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 28 Comm: khungtaskd Not tainted 6.3.0-syzkaller-13466-gfc4354c6e5c2 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/14/2023
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0xd9/0x150 lib/dump_stack.c:106
 nmi_cpu_backtrace+0x29c/0x350 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x2a4/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:222 [inline]
 watchdog+0xe16/0x1090 kernel/hung_task.c:379
 kthread+0x344/0x440 kernel/kthread.c:379
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 10016 Comm: syz-executor351 Not tainted 6.3.0-syzkaller-13466-gfc4354c6e5c2 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/14/2023
RIP: 0010:get_page_from_freelist+0x195/0x2c00 mm/page_alloc.c:3401
Code: 00 00 00 49 8d 47 20 48 89 84 24 20 01 00 00 48 c1 e8 03 4c 01 f0 48 89 44 24 40 8b 04 24 25 00 01 00 00 4d 85 ed 89 44 24 54 <0f> 84 e0 05 00 00 0f 1f 44 00 00 48 8b 44 24 40 0f b6 00 84 c0 74
RSP: 0018:ffffc90004f874a0 EFLAGS: 00000286
RAX: 0000000000000100 RBX: 1ffff920009f0ece RCX: 0000000000000170
RDX: 1ffff11027fff860 RSI: 0000000000000000 RDI: 0000000000140cca
RBP: 0000000000140cca R08: 0000000000000001 R09: ffffffff8e7a6fd7
R10: 0000000000000000 R11: 0000000000000000 R12: 0000000000000000
R13: ffff88813fffae00 R14: dffffc0000000000 R15: ffffc90004f876c0
FS:  00007f38e1b98700(0000) GS:ffff8880b9800000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f38d9b77000 CR3: 00000000197d0000 CR4: 0000000000350ef0
Call Trace:
 <TASK>
 __alloc_pages+0x1cb/0x4a0 mm/page_alloc.c:4768
 __folio_alloc+0x16/0x40 mm/page_alloc.c:4800
 vma_alloc_folio+0x155/0x890 mm/mempolicy.c:2240
 shmem_alloc_folio+0x119/0x1e0 mm/shmem.c:1579
 shmem_alloc_and_acct_folio+0x15e/0x5d0 mm/shmem.c:1603
 shmem_get_folio_gfp+0x9cc/0x1a80 mm/shmem.c:1948
 shmem_get_folio mm/shmem.c:2079 [inline]
 shmem_write_begin+0x14a/0x380 mm/shmem.c:2573
 generic_perform_write+0x256/0x570 mm/filemap.c:3923
 __generic_file_write_iter+0x2ae/0x500 mm/filemap.c:4051
 generic_file_write_iter+0xe3/0x350 mm/filemap.c:4083
 call_write_iter include/linux/fs.h:1868 [inline]
 new_sync_write fs/read_write.c:491 [inline]
 vfs_write+0x945/0xd50 fs/read_write.c:584
 ksys_write+0x12b/0x250 fs/read_write.c:637
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x39/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f38e1ba8eff
Code: 89 54 24 18 48 89 74 24 10 89 7c 24 08 e8 99 fd ff ff 48 8b 54 24 18 48 8b 74 24 10 41 89 c0 8b 7c 24 08 b8 01 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 31 44 89 c7 48 89 44 24 08 e8 cc fd ff ff 48
RSP: 002b:00007f38e1b980f0 EFLAGS: 00000293 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 00007f38e1b986b8 RCX: 00007f38e1ba8eff
RDX: 0000000000400000 RSI: 00007f38d9778000 RDI: 0000000000000003
RBP: 0000000000000000 R08: 0000000000000000 R09: 00000000000010e8
R10: 0000000000400000 R11: 0000000000000293 R12: 0000000020001100
R13: 0000000020001140 R14: 00000000000010ee R15: 0000000020000340
 </TASK>

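The sample report above repeats one "INFO: task <comm>:<pid> blocked for more than <N> seconds." header per hung task. As a side note, a minimal sketch (not part of the syzbot tooling; the helper name and sample text are illustrative) for pulling the task name, PID, and blocked time out of such a log:

```python
import re

# Two header lines copied from the report above, for demonstration.
SAMPLE = """INFO: task kworker/u4:5:2413 blocked for more than 143 seconds.
INFO: task syz-executor351:6222 blocked for more than 143 seconds."""

# Matches the hung-task header format shown in the report.
PATTERN = re.compile(
    r"INFO: task (?P<comm>\S+):(?P<pid>\d+) blocked for more than (?P<secs>\d+) seconds"
)

def hung_tasks(text):
    """Return (comm, pid, seconds) for each hung-task header in text."""
    return [(m["comm"], int(m["pid"]), int(m["secs"]))
            for m in PATTERN.finditer(text)]
```

Note that the task name may itself contain colons (e.g. `kworker/u4:5`); the greedy `\S+` followed by `:(?P<pid>\d+)` backtracks so that only the trailing `:<pid>` is split off.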
Crashes (110):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets (help?) Manager Title
2023/05/07 17:56 upstream fc4354c6e5c2 90c93c40 .config strace log report syz C [disk image] [vmlinux] [kernel image] [mounted in repro] ci-upstream-kasan-gce-root INFO: task hung in queue_log_writer
2023/05/02 08:03 upstream c8c655c34e33 62df2017 .config strace log report syz C [disk image] [vmlinux] [kernel image] [mounted in repro] ci-upstream-kasan-gce-root INFO: task hung in queue_log_writer
2023/05/29 06:13 linux-next 715abedee4cd cf184559 .config console log report syz C [disk image] [vmlinux] [kernel image] [mounted in repro] ci-upstream-linux-next-kasan-gce-root INFO: task hung in queue_log_writer
2023/05/28 08:26 linux-next 715abedee4cd cf184559 .config console log report syz C [disk image] [vmlinux] [kernel image] [mounted in repro] ci-upstream-linux-next-kasan-gce-root INFO: task hung in queue_log_writer
2023/05/24 07:32 linux-next 715abedee4cd 4bce1a3e .config strace log report syz C [disk image] [vmlinux] [kernel image] [mounted in repro] ci-upstream-linux-next-kasan-gce-root INFO: task hung in queue_log_writer
2023/05/19 01:29 linux-next 715abedee4cd 3bb7af1d .config console log report syz C [disk image] [vmlinux] [kernel image] [mounted in repro] ci-upstream-linux-next-kasan-gce-root INFO: task hung in queue_log_writer
2023/05/03 10:23 linux-next 92e815cf07ed 48e0a81d .config strace log report syz C [disk image] [vmlinux] [kernel image] [mounted in repro] ci-upstream-linux-next-kasan-gce-root INFO: task hung in queue_log_writer
2023/03/13 18:25 upstream eeac8ede1755 026e2200 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in queue_log_writer
2023/02/23 01:18 upstream 5b7c4cabbb65 409945bc .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in queue_log_writer
2023/02/20 20:08 upstream c9c3395d5e3d 4f5f5209 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in queue_log_writer
2023/02/19 13:43 upstream 925cf0457d7e bcdf85f8 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in queue_log_writer
2023/02/18 23:22 upstream 38f8ccde04a3 bcdf85f8 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in queue_log_writer
2023/02/17 15:16 upstream ec35307e18ba 3e7039f4 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in queue_log_writer
2023/02/11 01:05 upstream 38c1e0c65865 95871dcc .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in queue_log_writer
2023/02/08 06:53 upstream 513c1a3d3f19 15c3d445 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in queue_log_writer
2023/02/07 15:36 upstream 05ecb680708a 15c3d445 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in queue_log_writer
2023/02/06 14:25 upstream d2d11f342b17 0a9c11b6 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in queue_log_writer
2023/02/06 05:31 upstream 4ec5183ec486 be607b78 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in queue_log_writer
2023/02/06 02:33 upstream 4ec5183ec486 be607b78 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in queue_log_writer
2023/02/04 07:06 upstream 7b753a909f42 1b2f701a .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in queue_log_writer
2023/02/03 18:26 upstream 66a87fff1a87 1b2f701a .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in queue_log_writer
2023/02/03 07:02 upstream 66a87fff1a87 16d19e30 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in queue_log_writer
2023/02/03 02:36 upstream e7368fd30165 33fc5c09 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in queue_log_writer
2023/02/02 22:47 upstream e7368fd30165 16d19e30 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in queue_log_writer
2023/02/02 14:02 upstream 9f266ccaa2f5 16d19e30 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in queue_log_writer
2023/02/02 06:48 upstream 9f266ccaa2f5 9dfcf09c .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in queue_log_writer
2023/02/01 21:15 upstream c0b67534c95c 9a6f477c .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in queue_log_writer
2023/02/01 14:46 upstream c0b67534c95c 9a6f477c .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in queue_log_writer
2023/02/01 13:41 upstream c0b67534c95c 9a6f477c .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in queue_log_writer
2023/02/01 05:22 upstream c0b67534c95c b68fb8d6 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in queue_log_writer
2023/01/30 08:49 upstream ab072681eabe 9dfcf09c .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in queue_log_writer
2023/01/29 21:34 upstream ab072681eabe 9dfcf09c .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in queue_log_writer
2023/01/28 23:36 upstream 5af6ce704936 9dfcf09c .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in queue_log_writer
2023/01/28 22:08 upstream 5af6ce704936 9dfcf09c .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in queue_log_writer
2023/01/28 02:01 upstream 83abd4d4c4be 9dfcf09c .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in queue_log_writer
2023/01/27 22:56 upstream 83abd4d4c4be 9dfcf09c .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in queue_log_writer
2023/01/27 19:16 upstream 7c46948a6e9c 9dfcf09c .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in queue_log_writer
2023/01/27 15:33 upstream 7c46948a6e9c 9dfcf09c .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in queue_log_writer
2023/01/27 04:18 upstream 7c46948a6e9c 9dfcf09c .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in queue_log_writer
2023/01/26 01:26 upstream 7c46948a6e9c 9dfcf09c .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in queue_log_writer
2023/01/25 19:35 upstream 948ef7bb70c4 9dfcf09c .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in queue_log_writer
2023/01/24 03:29 upstream 7bf70dbb1882 9dfcf09c .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in queue_log_writer
2023/01/23 13:29 upstream 2475bf0250de 44388686 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in queue_log_writer
2023/01/22 07:30 upstream 2241ab53cbb5 559a440a .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in queue_log_writer
2023/01/21 12:44 upstream edc00350d205 cc0f9968 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in queue_log_writer
2023/01/21 01:19 upstream edc00350d205 cc0f9968 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in queue_log_writer
2023/01/20 23:24 upstream edc00350d205 dd15ff29 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in queue_log_writer
2023/01/20 22:20 upstream ff83fec8179e 559a440a .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in queue_log_writer
2023/01/20 10:03 upstream d368967cb103 dd15ff29 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in queue_log_writer
2023/01/19 05:59 upstream c1649ec55708 4620c2d9 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in queue_log_writer
2023/01/19 03:32 upstream c1649ec55708 4620c2d9 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in queue_log_writer
2023/01/18 18:01 upstream c1649ec55708 4620c2d9 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in queue_log_writer
2023/01/13 12:29 upstream c757fc92a3f7 96166539 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in queue_log_writer
2023/01/09 15:31 upstream 1fe4fd6f5cad 1dac8c7a .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in queue_log_writer
2023/01/04 23:51 upstream 69b41ac87e4a 1dac8c7a .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in queue_log_writer
2023/02/11 05:53 linux-next 38d2b86a665b 93e26d60 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in queue_log_writer
2023/02/06 21:31 linux-next 129af7708234 0a9c11b6 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in queue_log_writer
* Struck through repros no longer work on HEAD.