syzbot


INFO: task hung in path_openat (7)

Status: upstream: reported on 2022/10/06 10:29
Labels: xfs ext4
Reported-by: syzbot+950a0cdaa2fdd14f5bdc@syzkaller.appspotmail.com
First crash: 479d, last: 6d06h
Discussions (5)
Title | Replies (including bot) | Last reply
[syzbot] Monthly nilfs report (May 2023) | 0 (1) | 2023/05/29 08:50
[syzbot] Monthly nilfs report (Apr 2023) | 0 (1) | 2023/04/27 10:39
[syzbot] Monthly nilfs report | 0 (1) | 2023/03/27 11:03
[syzbot] [ext4] Monthly Report | 0 (1) | 2023/03/24 15:59
[syzbot] INFO: task hung in path_openat (7) | 0 (1) | 2022/10/06 10:29
Similar bugs (14)
Kernel | Title | Count | Last | Reported | Patched | Status
linux-4.19 | INFO: task hung in path_openat | 2 | 1364d | 1397d | 0/1 | auto-closed as invalid on 2020/01/11 07:40
upstream | INFO: task hung in path_openat (4) | 1 | 1135d | 1135d | 0/24 | auto-closed as invalid on 2020/07/28 10:23
upstream | INFO: task hung in path_openat (3) | 4 | 1295d | 1411d | 0/24 | auto-closed as invalid on 2020/02/19 20:16
linux-4.19 | INFO: task hung in path_openat (2) | 1 | 565d | 565d | 0/1 | auto-closed as invalid on 2022/03/21 04:47
linux-4.14 | INFO: task hung in path_openat | 1 | 805d | 805d | 0/1 | auto-closed as invalid on 2021/07/23 23:26
android-49 | INFO: task hung in path_openat | 64 | 1535d | 1516d | 0/3 | auto-closed as invalid on 2019/09/22 08:41
upstream | INFO: task hung in path_openat (5) | 23 | 776d | 937d | 0/24 | auto-closed as invalid on 2021/07/22 20:44
linux-4.19 | INFO: task hung in path_openat (3) f2fs jfs | 20 | 142d | 301d | 0/1 | upstream: reported on 2022/08/11 13:14
upstream | INFO: task hung in path_openat (6) | 13 | 511d | 662d | 0/24 | closed as invalid on 2022/02/07 19:19
android-414 | INFO: task hung in path_openat | 42 | 1439d | 1518d | 0/1 | auto-closed as invalid on 2019/10/28 21:04
linux-6.1 | INFO: task hung in path_openat | 2 | 45d | 65d | 0/3 | upstream: reported on 2023/04/04 07:33
upstream | INFO: task hung in path_openat | 246 | 1603d | 1893d | 0/24 | closed as dup on 2018/09/08 15:37
linux-5.15 | INFO: task hung in path_openat | 2 | 42d | 58d | 0/3 | upstream: reported on 2023/04/12 01:28
upstream | INFO: task hung in path_openat (2) | 1 | 1506d | 1506d | 0/24 | closed as invalid on 2019/05/08 13:05

Sample crash report:
INFO: task syz-executor.1:12122 blocked for more than 143 seconds.
      Not tainted 6.4.0-rc4-syzkaller-00204-gc43a6ff9f93f #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.1  state:D stack:28248 pid:12122 ppid:5034   flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5343 [inline]
 __schedule+0x187b/0x4900 kernel/sched/core.c:6669
 schedule+0xc3/0x180 kernel/sched/core.c:6745
 schedule_preempt_disabled+0x13/0x20 kernel/sched/core.c:6804
 rwsem_down_read_slowpath+0x5f4/0x950 kernel/locking/rwsem.c:1086
 __down_read_common kernel/locking/rwsem.c:1250 [inline]
 __down_read kernel/locking/rwsem.c:1263 [inline]
 down_read+0x9c/0x2f0 kernel/locking/rwsem.c:1522
 inode_lock_shared include/linux/fs.h:785 [inline]
 open_last_lookups fs/namei.c:3559 [inline]
 path_openat+0x7ab/0x3170 fs/namei.c:3788
 do_filp_open+0x234/0x490 fs/namei.c:3818
 do_sys_openat2+0x13f/0x500 fs/open.c:1356
 do_sys_open fs/open.c:1372 [inline]
 __do_sys_openat fs/open.c:1388 [inline]
 __se_sys_openat fs/open.c:1383 [inline]
 __x64_sys_openat+0x247/0x290 fs/open.c:1383
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7ff58568c169
RSP: 002b:00007ff58637b168 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 00007ff5857ac050 RCX: 00007ff58568c169
RDX: 0000000000000000 RSI: 0000000020000000 RDI: ffffffffffffff9c
RBP: 00007ff5856e7ca1 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fff1dcd20ef R14: 00007ff58637b300 R15: 0000000000022000
 </TASK>
INFO: task syz-executor.1:12125 blocked for more than 144 seconds.
      Not tainted 6.4.0-rc4-syzkaller-00204-gc43a6ff9f93f #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.1  state:D stack:27520 pid:12125 ppid:5034   flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5343 [inline]
 __schedule+0x187b/0x4900 kernel/sched/core.c:6669
 schedule+0xc3/0x180 kernel/sched/core.c:6745
 schedule_preempt_disabled+0x13/0x20 kernel/sched/core.c:6804
 rwsem_down_write_slowpath+0xedd/0x13a0 kernel/locking/rwsem.c:1178
 __down_write_common+0x1aa/0x200 kernel/locking/rwsem.c:1306
 inode_lock_nested include/linux/fs.h:810 [inline]
 filename_create+0x260/0x530 fs/namei.c:3884
 do_mkdirat+0xb7/0x520 fs/namei.c:4130
 __do_sys_mkdirat fs/namei.c:4153 [inline]
 __se_sys_mkdirat fs/namei.c:4151 [inline]
 __x64_sys_mkdirat+0x89/0xa0 fs/namei.c:4151
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7ff58568b187
RSP: 002b:00007ff586359f88 EFLAGS: 00000246 ORIG_RAX: 0000000000000102
RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007ff58568b187
RDX: 00000000000001ff RSI: 0000000020000240 RDI: 00000000ffffff9c
RBP: 0000000020021a40 R08: 0000000000000001 R09: 0000000000000000
R10: 0000000020000200 R11: 0000000000000246 R12: 0000000020000200
R13: 0000000020000240 R14: 00007ff586359fe0 R15: 00000000200218c0
 </TASK>
INFO: task syz-executor.1:12130 blocked for more than 144 seconds.
      Not tainted 6.4.0-rc4-syzkaller-00204-gc43a6ff9f93f #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.1  state:D stack:28248 pid:12130 ppid:5034   flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5343 [inline]
 __schedule+0x187b/0x4900 kernel/sched/core.c:6669
 schedule+0xc3/0x180 kernel/sched/core.c:6745
 schedule_preempt_disabled+0x13/0x20 kernel/sched/core.c:6804
 rwsem_down_read_slowpath+0x5f4/0x950 kernel/locking/rwsem.c:1086
 __down_read_common kernel/locking/rwsem.c:1250 [inline]
 __down_read kernel/locking/rwsem.c:1263 [inline]
 down_read+0x9c/0x2f0 kernel/locking/rwsem.c:1522
 inode_lock_shared include/linux/fs.h:785 [inline]
 open_last_lookups fs/namei.c:3559 [inline]
 path_openat+0x7ab/0x3170 fs/namei.c:3788
 do_filp_open+0x234/0x490 fs/namei.c:3818
 do_sys_openat2+0x13f/0x500 fs/open.c:1356
 do_sys_open fs/open.c:1372 [inline]
 __do_sys_openat fs/open.c:1388 [inline]
 __se_sys_openat fs/open.c:1383 [inline]
 __x64_sys_openat+0x247/0x290 fs/open.c:1383
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x41/0xc0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7ff58568c169
RSP: 002b:00007ff586339168 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
RAX: ffffffffffffffda RBX: 00007ff5857ac1f0 RCX: 00007ff58568c169
RDX: 0000000000000000 RSI: 0000000020004280 RDI: ffffffffffffff9c
RBP: 00007ff5856e7ca1 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fff1dcd20ef R14: 00007ff586339300 R15: 0000000000022000
 </TASK>
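All three traces above block on the same directory i_rwsem: pids 12122 and 12130 in down_read() via inode_lock_shared()/open_last_lookups, pid 12125 in down_write() via inode_lock_nested()/filename_create. The watchdog fires because whichever task holds that rwsem never drops it. A minimal userspace sketch of the pattern, using Python's threading.Lock as a stand-in for the rwsem (the stdlib has no reader/writer lock, and the names and timeout here are illustrative, not from the report):

```python
import threading

# Stand-in for the parent directory's i_rwsem; a plain mutex models only
# the exclusive case, which is enough to show the hang pattern.
i_rwsem = threading.Lock()

def blocked_openat(results):
    # Mirrors pids 12122/12130: parked in the rwsem slowpath until the
    # holder releases; here we give up after a timeout instead of hanging.
    got_it = i_rwsem.acquire(timeout=0.2)
    results.append(("openat", got_it))
    if got_it:
        i_rwsem.release()

# The culprit task takes the lock and never releases it (in the report the
# holder is presumably stuck elsewhere, e.g. inside filesystem code).
i_rwsem.acquire()

results = []
waiter = threading.Thread(target=blocked_openat, args=(results,))
waiter.start()
waiter.join()
# results is now [("openat", False)]: the waiter timed out; the kernel's
# equivalent waiter would sit in state D until hung_task complains.
```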

Showing all locks held in the system:
1 lock held by rcu_tasks_kthre/13:
 #0: ffffffff8cf276f0 (rcu_tasks.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x29/0xd20 kernel/rcu/tasks.h:518
1 lock held by rcu_tasks_trace/14:
 #0: ffffffff8cf27ab0 (rcu_tasks_trace.tasks_gp_mutex){+.+.}-{3:3}, at: rcu_tasks_one_gp+0x29/0xd20 kernel/rcu/tasks.h:518
1 lock held by khungtaskd/28:
 #0: ffffffff8cf27520 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x0/0x30
2 locks held by getty/4751:
 #0: ffff88814b1a0098 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc900015802f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6ab/0x1db0 drivers/tty/n_tty.c:2176
2 locks held by kworker/1:3/24932:
 #0: ffff888012470d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x77e/0x10e0 kernel/workqueue.c:2378
 #1: ffffc9000b82fd20 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x7c8/0x10e0 kernel/workqueue.c:2380
2 locks held by kworker/1:10/24986:
 #0: ffff888012470d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x77e/0x10e0 kernel/workqueue.c:2378
 #1: ffffc90014d67d20 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x7c8/0x10e0 kernel/workqueue.c:2380
2 locks held by kworker/1:16/24992:
 #0: ffff888012470d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x77e/0x10e0 kernel/workqueue.c:2378
 #1: ffffc90014d97d20 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x7c8/0x10e0 kernel/workqueue.c:2380
2 locks held by kworker/1:20/30535:
 #0: ffff888012470d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x77e/0x10e0 kernel/workqueue.c:2378
 #1: ffffc90006347d20 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x7c8/0x10e0 kernel/workqueue.c:2380
2 locks held by kworker/1:21/30536:
 #0: ffff888012470d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x77e/0x10e0 kernel/workqueue.c:2378
 #1: ffffc90006357d20 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x7c8/0x10e0 kernel/workqueue.c:2380
2 locks held by kworker/1:23/30538:
 #0: ffff888012470d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x77e/0x10e0 kernel/workqueue.c:2378
 #1: ffffc90006377d20 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x7c8/0x10e0 kernel/workqueue.c:2380
2 locks held by kworker/1:25/30540:
 #0: ffff888012470d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x77e/0x10e0 kernel/workqueue.c:2378
 #1: ffffc90006397d20 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x7c8/0x10e0 kernel/workqueue.c:2380
2 locks held by kworker/1:26/30541:
 #0: ffff888012470d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x77e/0x10e0 kernel/workqueue.c:2378
 #1: ffffc900063a7d20 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x7c8/0x10e0 kernel/workqueue.c:2380
2 locks held by kworker/1:29/30544:
 #0: ffff888012470d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x77e/0x10e0 kernel/workqueue.c:2378
 #1: ffffc900063d7d20 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x7c8/0x10e0 kernel/workqueue.c:2380
2 locks held by kworker/u4:11/1566:
 #0: ffff8880b993c1d8 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x2a/0x140 kernel/sched/core.c:558
 #1: ffffc900107a7d20 ((work_completion)(&(&bat_priv->nc.work)->work)){+.+.}-{0:0}, at: process_one_work+0x7c8/0x10e0 kernel/workqueue.c:2380
1 lock held by syz-executor.4/1693:
 #0: ffffffff8cf2cbf8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:293 [inline]
 #0: ffffffff8cf2cbf8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x3a3/0x890 kernel/rcu/tree_exp.h:992
2 locks held by kworker/1:1/5465:
 #0: ffff888012470d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x77e/0x10e0 kernel/workqueue.c:2378
 #1: ffffc900054cfd20 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x7c8/0x10e0 kernel/workqueue.c:2380
2 locks held by kworker/1:4/5466:
 #0: ffff888012470d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x77e/0x10e0 kernel/workqueue.c:2378
 #1: ffffc900057b7d20 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x7c8/0x10e0 kernel/workqueue.c:2380
2 locks held by kworker/1:7/5468:
 #0: ffff888012470d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x77e/0x10e0 kernel/workqueue.c:2378
 #1: ffffc90005837d20 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x7c8/0x10e0 kernel/workqueue.c:2380
2 locks held by kworker/1:8/5469:
 #0: ffff888012470d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x77e/0x10e0 kernel/workqueue.c:2378
 #1: ffffc90005807d20 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x7c8/0x10e0 kernel/workqueue.c:2380
2 locks held by kworker/1:9/5470:
 #0: ffff888012470d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x77e/0x10e0 kernel/workqueue.c:2378
 #1: ffffc90005867d20 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x7c8/0x10e0 kernel/workqueue.c:2380
2 locks held by kworker/1:13/5472:
 #0: ffff888012472538 ((wq_completion)rcu_gp){+.+.}-{0:0}, at: process_one_work+0x77e/0x10e0 kernel/workqueue.c:2378
 #1: ffffc90005887d20 ((work_completion)(&rew->rew_work)){+.+.}-{0:0}, at: process_one_work+0x7c8/0x10e0 kernel/workqueue.c:2380
3 locks held by kworker/1:15/5474:
 #0: ffff888012470d38 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x77e/0x10e0 kernel/workqueue.c:2378
 #1: ffffc900058a7d20 ((work_completion)(&pwq->unbound_release_work)){+.+.}-{0:0}, at: process_one_work+0x7c8/0x10e0 kernel/workqueue.c:2380
 #2: ffffffff8cf2cbf8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: exp_funnel_lock kernel/rcu/tree_exp.h:325 [inline]
 #2: ffffffff8cf2cbf8 (rcu_state.exp_mutex){+.+.}-{3:3}, at: synchronize_rcu_expedited+0x46c/0x890 kernel/rcu/tree_exp.h:992
2 locks held by syz-executor.1/12120:
1 lock held by syz-executor.1/12122:
 #0: ffff88807c558cb0 (&type->i_mutex_dir_key#17){++++}-{3:3}, at: inode_lock_shared include/linux/fs.h:785 [inline]
 #0: ffff88807c558cb0 (&type->i_mutex_dir_key#17){++++}-{3:3}, at: open_last_lookups fs/namei.c:3559 [inline]
 #0: ffff88807c558cb0 (&type->i_mutex_dir_key#17){++++}-{3:3}, at: path_openat+0x7ab/0x3170 fs/namei.c:3788
1 lock held by syz-executor.1/12125:
 #0: ffff88807c558cb0 (&type->i_mutex_dir_key#17/1){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:810 [inline]
 #0: ffff88807c558cb0 (&type->i_mutex_dir_key#17/1){+.+.}-{3:3}, at: filename_create+0x260/0x530 fs/namei.c:3884
1 lock held by syz-executor.1/12130:
 #0: ffff88807c558cb0 (&type->i_mutex_dir_key#17){++++}-{3:3}, at: inode_lock_shared include/linux/fs.h:785 [inline]
 #0: ffff88807c558cb0 (&type->i_mutex_dir_key#17){++++}-{3:3}, at: open_last_lookups fs/namei.c:3559 [inline]
 #0: ffff88807c558cb0 (&type->i_mutex_dir_key#17){++++}-{3:3}, at: path_openat+0x7ab/0x3170 fs/namei.c:3788
2 locks held by syz-executor.3/12318:
1 lock held by syz-executor.3/12361:
 #0: ffff88807c6be770 (&type->i_mutex_dir_key#17){++++}-{3:3}, at: inode_lock_shared include/linux/fs.h:785 [inline]
 #0: ffff88807c6be770 (&type->i_mutex_dir_key#17){++++}-{3:3}, at: open_last_lookups fs/namei.c:3559 [inline]
 #0: ffff88807c6be770 (&type->i_mutex_dir_key#17){++++}-{3:3}, at: path_openat+0x7ab/0x3170 fs/namei.c:3788
1 lock held by syz-executor.3/12365:
 #0: ffff88807c6be770 (&type->i_mutex_dir_key#17/1){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:810 [inline]
 #0: ffff88807c6be770 (&type->i_mutex_dir_key#17/1){+.+.}-{3:3}, at: filename_create+0x260/0x530 fs/namei.c:3884
1 lock held by syz-executor.3/12366:
 #0: ffff88807c6be770 (&type->i_mutex_dir_key#17){++++}-{3:3}, at: inode_lock_shared include/linux/fs.h:785 [inline]
 #0: ffff88807c6be770 (&type->i_mutex_dir_key#17){++++}-{3:3}, at: open_last_lookups fs/namei.c:3559 [inline]
 #0: ffff88807c6be770 (&type->i_mutex_dir_key#17){++++}-{3:3}, at: path_openat+0x7ab/0x3170 fs/namei.c:3788
1 lock held by syz-executor.0/13337:
 #0: ffff88803559e0e0 (&type->s_umount_key#58/1){+.+.}-{3:3}, at: alloc_super+0x217/0x930 fs/super.c:228
1 lock held by syz-executor.3/13368:
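What identifies the contended lock in this dump is the same rwsem address recurring under several tasks: ffff88807c558cb0 appears for pids 12122, 12125 and 12130, and ffff88807c6be770 for the syz-executor.3 group. A rough parser for that cross-referencing (the regexes are my own, written against the line format above, not a syzkaller tool):

```python
import re
from collections import defaultdict

# A few "locks held" lines copied verbatim from the report above.
log = """\
1 lock held by syz-executor.1/12122:
 #0: ffff88807c558cb0 (&type->i_mutex_dir_key#17){++++}-{3:3}, at: path_openat+0x7ab/0x3170 fs/namei.c:3788
1 lock held by syz-executor.1/12125:
 #0: ffff88807c558cb0 (&type->i_mutex_dir_key#17/1){+.+.}-{3:3}, at: filename_create+0x260/0x530 fs/namei.c:3884
1 lock held by syz-executor.3/12361:
 #0: ffff88807c6be770 (&type->i_mutex_dir_key#17){++++}-{3:3}, at: path_openat+0x7ab/0x3170 fs/namei.c:3788
"""

# Group task names by the lock address they appear under.
by_addr = defaultdict(set)
task = None
for line in log.splitlines():
    m = re.match(r"\d+ locks? held by (\S+):", line)
    if m:
        task = m.group(1)
        continue
    m = re.match(r"\s*#\d+: ([0-9a-f]+) ", line)
    if m and task:
        by_addr[m.group(1)].add(task)

# An address shared by more than one task is a contended lock.
contended = sorted(a for a, tasks in by_addr.items() if len(tasks) > 1)
print(contended)  # ['ffff88807c558cb0']
```

Note that lockdep prints a lock at its acquisition site even for tasks still blocked acquiring it, so "held by" here really means "held or being waited on".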

=============================================

NMI backtrace for cpu 1
CPU: 1 PID: 28 Comm: khungtaskd Not tainted 6.4.0-rc4-syzkaller-00204-gc43a6ff9f93f #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/25/2023
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x1e7/0x2d0 lib/dump_stack.c:106
 nmi_cpu_backtrace+0x498/0x4d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x187/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:148 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:222 [inline]
 watchdog+0xec2/0xf00 kernel/hung_task.c:379
 kthread+0x2b8/0x350 kernel/kthread.c:379
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:308
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 PID: 13339 Comm: syz-executor.2 Not tainted 6.4.0-rc4-syzkaller-00204-gc43a6ff9f93f #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/25/2023
RIP: 0010:trace_hardirqs_off+0x19/0x40 kernel/trace/trace_preemptirq.c:89
Code: ff ff ff 0f 0b e9 79 ff ff ff 0f 1f 80 00 00 00 00 f3 0f 1e fa 53 48 8b 5c 24 08 48 89 df e8 be 2b 1d 09 65 8b 05 ef 12 71 7e <85> c0 74 02 5b c3 65 c7 05 de 12 71 7e 01 00 00 00 48 89 df 5b e9
RSP: 0018:ffffc9001519f750 EFLAGS: 00000082
RAX: 0000000000000000 RBX: ffffffff8ab93ebd RCX: 0000000000000001
RDX: dffffc0000000000 RSI: ffffffff8aea8e20 RDI: ffffffff8b384540
RBP: ffffc9001519f7f8 R08: dffffc0000000000 R09: 0000000000000003
R10: ffffffffffffffff R11: dffffc0000000001 R12: ffff888078befac0
R13: 1ffff92002a33eec R14: ffffc9001519f780 R15: dffffc0000000000
FS:  00007f29d79f4700(0000) GS:ffff8880b9800000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000055a832d7a07c CR3: 0000000036977000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <NMI>
 </NMI>
 <TASK>
 __raw_spin_lock_irq include/linux/spinlock_api_smp.h:117 [inline]
 _raw_spin_lock_irq+0xad/0x120 kernel/locking/spinlock.c:170
 spin_lock_irq include/linux/spinlock.h:375 [inline]
 filemap_remove_folio+0xff/0x2e0 mm/filemap.c:256
 truncate_inode_folio+0x5d/0x70 mm/truncate.c:196
 shmem_undo_range+0x43d/0x1ba0 mm/shmem.c:950
 shmem_truncate_range mm/shmem.c:1049 [inline]
 shmem_evict_inode+0x258/0x9f0 mm/shmem.c:1164
 evict+0x2a4/0x620 fs/inode.c:665
 __dentry_kill+0x436/0x650 fs/dcache.c:607
 dentry_kill+0xbb/0x290
 dput+0x1f3/0x420 fs/dcache.c:913
 __fput+0x5e4/0x890 fs/file_table.c:329
 task_work_run+0x24a/0x300 kernel/task_work.c:179
 resume_user_mode_work include/linux/resume_user_mode.h:49 [inline]
 exit_to_user_mode_loop+0xd9/0x100 kernel/entry/common.c:171
 exit_to_user_mode_prepare+0xb1/0x140 kernel/entry/common.c:204
 __syscall_exit_to_user_mode_work kernel/entry/common.c:286 [inline]
 syscall_exit_to_user_mode+0x64/0x280 kernel/entry/common.c:297
 do_syscall_64+0x4d/0xc0 arch/x86/entry/common.c:86
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
RIP: 0033:0x7f29d6c8bf57
Code: 3c 1c 48 f7 d8 49 39 c4 72 b8 e8 c4 57 02 00 85 c0 78 bd 48 83 c4 08 4c 89 e0 5b 41 5c c3 0f 1f 44 00 00 b8 10 00 00 00 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f29d79f3f88 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: 0000000000000000 RBX: 00000000000051ab RCX: 00007f29d6c8bf57
RDX: 0000000000000000 RSI: 0000000000004c01 RDI: 0000000000000004
RBP: 00007f29d79f46b8 R08: 00007f29d79f4020 R09: 0000000001000008
R10: 0000000001000008 R11: 0000000000000246 R12: ffffffffffffffff
R13: 0000000000000011 R14: 00007f29d79f3fe0 R15: 0000000020000280
 </TASK>
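The register dumps also say which syscall each task was in: on x86-64, ORIG_RAX carries the syscall number, and the values seen above (0x101, 0x102, 0x10) correspond to openat, mkdirat and ioctl in arch/x86/entry/syscalls/syscall_64.tbl. A trivial decoder (the table subset is hand-picked for this report, not exhaustive):

```python
# x86-64 syscall numbers from arch/x86/entry/syscalls/syscall_64.tbl;
# only the three values that occur in the register dumps above.
SYSCALLS_X86_64 = {0x10: "ioctl", 0x101: "openat", 0x102: "mkdirat"}

def decode_orig_rax(hex_value: str) -> str:
    """Map an ORIG_RAX value from a register dump to a syscall name."""
    return SYSCALLS_X86_64.get(int(hex_value, 16), "unknown")

# The hung tasks: pids 12122/12130 in openat(), pid 12125 in mkdirat().
print(decode_orig_rax("0000000000000101"))  # openat
print(decode_orig_rax("0000000000000102"))  # mkdirat
```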

Crashes (205):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2023/06/02 22:33 upstream c43a6ff9f93f a4ae4f42 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in path_openat
2023/05/31 13:50 upstream afead42fdfca 09898419 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in path_openat
2023/05/22 19:27 upstream 44c026a73be8 4bce1a3e .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in path_openat
2023/05/18 22:54 upstream 4d6d4c7f541d 3bb7af1d .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in path_openat
2023/05/14 09:38 upstream d4d58949a6ea 2b9ba477 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in path_openat
2023/05/08 08:11 upstream 17784de648be 90c93c40 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in path_openat
2023/05/07 22:56 upstream 17784de648be 90c93c40 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in path_openat
2023/05/05 11:38 upstream 3c4aa4434377 518a39a6 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in path_openat
2023/05/01 18:03 upstream 58390c8ce1bd 62df2017 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in path_openat
2023/04/30 13:48 upstream 825a0714d2b3 62df2017 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in path_openat
2023/04/26 08:55 upstream 0cfd8703e7da 65320f8e .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in path_openat
2023/04/25 10:06 upstream 1a0beef98b58 65320f8e .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in path_openat
2023/04/24 11:37 upstream 457391b03803 2b32bd34 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in path_openat
2023/04/22 23:23 upstream 2caeeb9d4a1b 2b32bd34 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in path_openat
2023/04/22 13:47 upstream 8e41e0a57566 2b32bd34 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in path_openat
2023/04/21 22:49 upstream 2af3e53a4dc0 2b32bd34 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in path_openat
2023/04/21 05:53 upstream 6a66fdd29ea1 2b32bd34 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in path_openat
2023/04/20 15:03 upstream cb0856346a60 a219f34e .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in path_openat
2023/04/17 18:57 upstream 6a8f57ae2eb0 c6ec7083 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in path_openat
2023/04/15 14:03 upstream 7a934f4bd7d6 ec410564 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in path_openat
2023/04/14 12:16 upstream 44149752e998 3cfcaa1b .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in path_openat
2023/04/13 07:53 upstream 0bcc40255504 82d5e53e .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in path_openat
2023/04/12 06:42 upstream e62252bc55b6 49faf98d .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in path_openat
2023/04/08 19:10 upstream aa318c48808c 71147e29 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in path_openat
2023/04/08 14:40 upstream aa318c48808c 71147e29 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in path_openat
2023/04/06 17:25 upstream 99ddf2254feb 08707520 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in path_openat
2023/04/06 16:16 upstream 99ddf2254feb 08707520 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in path_openat
2023/04/06 15:08 upstream 99ddf2254feb 08707520 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in path_openat
2023/04/04 18:36 upstream 148341f0a2f5 928dd177 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in path_openat
2023/04/02 09:54 upstream 00c7b5f4ddc5 f325deb0 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in path_openat
2023/04/01 00:19 upstream 5a57b48fdfcb f325deb0 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in path_openat
2023/03/29 14:12 upstream fcd476ea6a88 f325deb0 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in path_openat
2023/03/27 04:50 upstream 18940c888c85 fbf0499a .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in path_openat
2023/03/25 19:40 upstream 65aca32efdcb fbf0499a .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in path_openat
2023/03/24 07:54 upstream 9fd6ba5420ba f94b4a29 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in path_openat
2023/03/21 11:30 upstream 17214b70a159 7939252e .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in path_openat
2023/03/21 06:41 upstream 7d31677bb7b1 7939252e .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in path_openat
2023/03/18 05:03 upstream 38e04b3e4240 7939252e .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in path_openat
2023/03/16 16:38 upstream 9c1bec9c0b08 18b58603 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in path_openat
2023/03/15 12:42 upstream 6015b1aca1a2 18b58603 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in path_openat
2023/03/14 04:43 upstream fc89d7fb499b 026e2200 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in path_openat
2023/03/13 07:38 upstream eeac8ede1755 5205ef30 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in path_openat
2023/03/05 13:47 upstream b01fe98d34f3 f8902b57 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in path_openat
2023/03/05 03:40 upstream b01fe98d34f3 f8902b57 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in path_openat
2023/03/02 17:07 upstream ee3f96b16468 f8902b57 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in path_openat
2023/01/19 19:28 upstream 7287904c8771 1b826a2f .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in path_openat
2022/10/06 05:03 upstream 2bca25eaeba6 2c6543ad .config console log report info [disk image] [vmlinux] ci2-upstream-fs INFO: task hung in path_openat
2022/10/01 15:49 upstream ffb4d94b4314 feb56351 .config console log report info [disk image] [vmlinux] ci2-upstream-fs INFO: task hung in path_openat
2022/02/14 18:00 upstream 754e0b0e3560 8b9ca619 .config console log report info ci-upstream-kasan-gce-root INFO: task hung in path_openat
2023/04/22 05:55 linux-next d3e1ee0e67e7 2b32bd34 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in path_openat
2023/04/08 22:02 linux-next e134c93f788f 71147e29 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in path_openat
2023/04/24 04:18 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci 14f8db1c0f9a 2b32bd34 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-gce-arm64 INFO: task hung in path_openat
2023/04/05 03:35 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci 59caa87f9dfb 831373d3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-gce-arm64 INFO: task hung in path_openat
2023/03/10 22:06 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci fe15c26ee26e 5205ef30 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-gce-arm64 INFO: task hung in path_openat