syzbot


INFO: task hung in __bread_gfp (5)

Status: auto-obsoleted due to no activity on 2024/07/30 08:01
Subsystems: ntfs3
First crash: 215d, last: 215d
Similar bugs (6)
Kernel     | Title                                         | Repro | Cause bisect | Fix bisect | Count | Last  | Reported | Patched | Status
upstream   | INFO: task hung in __bread_gfp (3) [fs]       |       |              |            | 1     | 1122d | 1122d    | 0/28    | auto-closed as invalid on 2022/02/04 04:37
upstream   | INFO: task hung in __bread_gfp [exfat]        |       |              |            | 4     | 2339d | 2435d    | 0/28    | auto-closed as invalid on 2019/02/22 10:29
linux-5.15 | INFO: task hung in __bread_gfp                |       |              |            | 3     | 227d  | 241d     | 0/3     | auto-obsoleted due to no activity on 2024/07/28 07:35
android-49 | INFO: task hung in __bread_gfp                |       |              |            | 2     | 2421d | 2434d    | 0/3     | auto-closed as invalid on 2019/02/22 14:34
upstream   | INFO: task hung in __bread_gfp (2) [jfs]      |       |              |            | 5     | 1656d | 1663d    | 0/28    | auto-closed as invalid on 2020/07/20 01:01
upstream   | INFO: task hung in __bread_gfp (4) [reiserfs] | C     | error        |            | 3     | 647d  | 708d     | 0/28    | auto-obsoleted due to no activity on 2023/07/25 23:59

Sample crash report:
INFO: task syz-executor.1:5925 blocked for more than 143 seconds.
      Not tainted 6.9.0-rc6-syzkaller-00046-g18daea77cca6 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.1  state:D stack:23064 pid:5925  tgid:5923  ppid:5087   flags:0x00004006
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5409 [inline]
 __schedule+0x1796/0x4a00 kernel/sched/core.c:6746
 __schedule_loop kernel/sched/core.c:6823 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6838
 io_schedule+0x8d/0x110 kernel/sched/core.c:9044
 bit_wait_io+0x12/0xd0 kernel/sched/wait_bit.c:209
 __wait_on_bit+0xb0/0x2f0 kernel/sched/wait_bit.c:49
 out_of_line_wait_on_bit+0x1d5/0x260 kernel/sched/wait_bit.c:64
 wait_on_bit_io include/linux/wait_bit.h:101 [inline]
 __wait_on_buffer fs/buffer.c:123 [inline]
 wait_on_buffer include/linux/buffer_head.h:389 [inline]
 __bread_slow fs/buffer.c:1268 [inline]
 __bread_gfp+0x329/0x430 fs/buffer.c:1477
 sb_bread_unmovable include/linux/buffer_head.h:327 [inline]
 ntfs_bread+0xc2/0x1e0 fs/ntfs3/fsntfs.c:1025
 wnd_map+0x296/0x470 fs/ntfs3/bitmap.c:699
 wnd_set_used+0x1ae/0x5d0 fs/ntfs3/bitmap.c:778
 ntfs_look_free_mft+0x8cc/0x10c0 fs/ntfs3/fsntfs.c:722
 ntfs_create_inode+0x5ec/0x3ce0 fs/ntfs3/inode.c:1335
 ntfs_mkdir+0x42/0x60 fs/ntfs3/namei.c:222
 vfs_mkdir+0x2f9/0x4b0 fs/namei.c:4123
 do_mkdirat+0x264/0x3a0 fs/namei.c:4146
 __do_sys_mkdirat fs/namei.c:4161 [inline]
 __se_sys_mkdirat fs/namei.c:4159 [inline]
 __x64_sys_mkdirat+0x89/0xa0 fs/namei.c:4159
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf5/0x240 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7ff15fa7c9a7
RSP: 002b:00007ff16083aef8 EFLAGS: 00000246 ORIG_RAX: 0000000000000102
RAX: ffffffffffffffda RBX: 00007ff16083af80 RCX: 00007ff15fa7c9a7
RDX: 00000000000001ff RSI: 0000000020000100 RDI: 00000000ffffff9c
RBP: 0000000020000180 R08: 0000000020000000 R09: 0000000000000000
R10: 0000000020000180 R11: 0000000000000246 R12: 0000000020000100
R13: 00007ff16083af40 R14: 0000000000000000 R15: 0000000000000000
 </TASK>
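
The reported task is parked in uninterruptible sleep inside __bread_slow(), waiting on a metadata block read that never completes. A minimal sketch of that pattern, using a hypothetical helper name (the real path here is ntfs_bread() -> sb_bread_unmovable() -> __bread_gfp()):

#include <linux/buffer_head.h>

/*
 * Hypothetical helper illustrating the blocking read in the trace above:
 * sb_bread() falls back to __bread_slow(), which submits the read and then
 * sleeps in D state (wait_on_buffer()) until the I/O completes. If the
 * completion never arrives, the task hangs exactly as reported.
 */
static int read_meta_block(struct super_block *sb, sector_t block)
{
	struct buffer_head *bh;

	bh = sb_bread(sb, block);	/* may block in __bread_gfp() */
	if (!bh)
		return -EIO;		/* read was submitted but failed */

	/* ... consume bh->b_data ... */
	brelse(bh);
	return 0;
}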

Showing all locks held in the system:
1 lock held by init/1:
2 locks held by kworker/u8:0/10:
1 lock held by khungtaskd/29:
 #0: ffffffff8e334d60 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:329 [inline]
 #0: ffffffff8e334d60 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:781 [inline]
 #0: ffffffff8e334d60 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x55/0x2a0 kernel/locking/lockdep.c:6614
3 locks held by kworker/u8:2/34:
4 locks held by kworker/u9:1/4469:
 #0: ffff888204233148 ((wq_completion)hci3){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3242 [inline]
 #0: ffff888204233148 ((wq_completion)hci3){+.+.}-{0:0}, at: process_scheduled_works+0x8e0/0x17c0 kernel/workqueue.c:3348
 #1: ffffc9000da07d00 ((work_completion)(&hdev->cmd_sync_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3243 [inline]
 #1: ffffc9000da07d00 ((work_completion)(&hdev->cmd_sync_work)){+.+.}-{0:0}, at: process_scheduled_works+0x91b/0x17c0 kernel/workqueue.c:3348
 #2: ffff888204aa5060 (&hdev->req_lock){+.+.}-{3:3}, at: hci_cmd_sync_work+0x1ec/0x400 net/bluetooth/hci_sync.c:309
 #3: ffff888204aa4078 (&hdev->lock){+.+.}-{3:3}, at: hci_abort_conn_sync+0x1ea/0xde0 net/bluetooth/hci_sync.c:5548
1 lock held by jbd2/sda1-8/4488:
2 locks held by udevd/4525:
2 locks held by getty/4820:
 #0: ffff88802a7980a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc90002efe2f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0x6b5/0x1e10 drivers/tty/n_tty.c:2201
1 lock held by syz-fuzzer/5061:
3 locks held by syz-fuzzer/5067:
2 locks held by syz-fuzzer/5079:
4 locks held by kworker/u9:2/5083:
4 locks held by kworker/u9:3/5085:
 #0: ffff88805c154148 ((wq_completion)hci0){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3242 [inline]
 #0: ffff88805c154148 ((wq_completion)hci0){+.+.}-{0:0}, at: process_scheduled_works+0x8e0/0x17c0 kernel/workqueue.c:3348
 #1: ffffc90003987d00 ((work_completion)(&hdev->cmd_sync_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3243 [inline]
 #1: ffffc90003987d00 ((work_completion)(&hdev->cmd_sync_work)){+.+.}-{0:0}, at: process_scheduled_works+0x91b/0x17c0 kernel/workqueue.c:3348
 #2: ffff888196099060 (&hdev->req_lock){+.+.}-{3:3}, at: hci_cmd_sync_work+0x1ec/0x400 net/bluetooth/hci_sync.c:309
 #3: ffff888196098078 (&hdev->lock){+.+.}-{3:3}, at: hci_abort_conn_sync+0x1ea/0xde0 net/bluetooth/hci_sync.c:5548
4 locks held by kworker/u9:5/5093:
4 locks held by syz-executor.1/5925:
 #0: ffff88806935e420 (sb_writers#23){.+.+}-{0:0}, at: mnt_want_write+0x3f/0x90 fs/namespace.c:409
 #1: ffff88805a8019c0 (&type->i_mutex_dir_key#15/1){+.+.}-{3:3}, at: inode_lock_nested include/linux/fs.h:830 [inline]
 #1: ffff88805a8019c0 (&type->i_mutex_dir_key#15/1){+.+.}-{3:3}, at: filename_create+0x260/0x540 fs/namei.c:3892
 #2: ffff88805a801720 (&ni->ni_lock#2/5){+.+.}-{3:3}, at: ni_lock_dir fs/ntfs3/ntfs_fs.h:1128 [inline]
 #2: ffff88805a801720 (&ni->ni_lock#2/5){+.+.}-{3:3}, at: ntfs_create_inode+0x1f8/0x3ce0 fs/ntfs3/inode.c:1249
 #3: ffff88806935c128 (&wnd->rw_lock/1){+.+.}-{3:3}, at: ntfs_look_free_mft+0x1e5/0x10c0 fs/ntfs3/fsntfs.c:571
2 locks held by udevd/6085:
2 locks held by syz-executor.4/6764:
3 locks held by syz-executor.3/6799:
1 lock held by syz-executor.1/6842:
3 locks held by sed/6993:

=============================================
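
The entry that matters is the hung task itself: syz-executor.1/5925 holds four nested locks (#0-#3), the innermost being the MFT bitmap's rw_lock taken in ntfs_look_free_mft(), so every later MFT allocation on this mount queues behind the stuck read. A condensed sketch of that nesting, using a hypothetical function name rather than the verbatim fs/ntfs3 code:

/*
 * look_free_mft_sketch() is a hypothetical stand-in mirroring lockdep
 * entries #2/#3 above; the real allocation logic and error handling
 * are omitted.
 */
static int look_free_mft_sketch(struct wnd_bitmap *wnd, size_t bit)
{
	int err;

	/* #3: matches "&wnd->rw_lock/1" (subclass 1) in the dump above */
	down_write_nested(&wnd->rw_lock, 1);

	/*
	 * wnd_set_used() -> wnd_map() -> ntfs_bread() -> __bread_gfp():
	 * this can sleep in D state indefinitely, with the rwsem (and the
	 * three outer locks) still held.
	 */
	err = wnd_set_used(wnd, bit, 1);

	up_write(&wnd->rw_lock);
	return err;
}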

NMI backtrace for cpu 0
CPU: 0 PID: 29 Comm: khungtaskd Not tainted 6.9.0-rc6-syzkaller-00046-g18daea77cca6 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/27/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:114
 nmi_cpu_backtrace+0x49c/0x4d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x198/0x320 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:223 [inline]
 watchdog+0xfde/0x1020 kernel/hung_task.c:380
 kthread+0x2f0/0x390 kernel/kthread.c:388
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 6842 Comm: syz-executor.1 Not tainted 6.9.0-rc6-syzkaller-00046-g18daea77cca6 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/27/2024
RIP: 0010:zs_shrinker_count+0x58/0x200 mm/zsmalloc.c:2034
Code: 48 c1 e8 03 42 80 3c 30 00 74 08 48 89 df e8 7f 65 f4 ff 48 8b 03 48 89 44 24 08 4c 8d b8 f8 07 00 00 bd fe 00 00 00 45 31 ed <4c> 89 f8 48 c1 e8 03 42 80 3c 30 00 74 08 4c 89 ff e8 52 65 f4 ff
RSP: 0000:ffffc900033b6450 EFLAGS: 00000287
RAX: ffffffff8202fce1 RBX: 00000000000000be RCX: ffff888023161e00
RDX: 0000000000000000 RSI: 00000000000000be RDI: 00000000000000bd
RBP: 00000000000000bc R08: ffffffff8202fc01 R09: 1ffffffff1f4f99d
R10: dffffc0000000000 R11: ffffffff8202fb60 R12: ffff88802faf9400
R13: 0000000000000000 R14: dffffc0000000000 R15: ffff88802f6ee5e8
FS:  0000555583eab480(0000) GS:ffff8880b9500000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f330b0a4c66 CR3: 00000001ff742000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <NMI>
 </NMI>
 <TASK>
 do_shrink_slab+0x86/0x1160 mm/shrinker.c:382
 shrink_slab+0x1092/0x14d0 mm/shrinker.c:662
 shrink_node_memcgs mm/vmscan.c:5875 [inline]
 shrink_node+0x11f5/0x2d60 mm/vmscan.c:5908
 shrink_zones mm/vmscan.c:6152 [inline]
 do_try_to_free_pages+0x695/0x1af0 mm/vmscan.c:6214
 try_to_free_pages+0x760/0x1100 mm/vmscan.c:6449
 __perform_reclaim mm/page_alloc.c:3774 [inline]
 __alloc_pages_direct_reclaim mm/page_alloc.c:3796 [inline]
 __alloc_pages_slowpath+0xdc3/0x23d0 mm/page_alloc.c:4202
 __alloc_pages+0x43e/0x6c0 mm/page_alloc.c:4588
 alloc_pages_mpol+0x3e8/0x680 mm/mempolicy.c:2264
 __read_swap_cache_async+0x23f/0x8b0 mm/swap_state.c:470
 swap_cluster_readahead+0x676/0x800 mm/swap_state.c:697
 swapin_readahead+0x1e0/0x1080 mm/swap_state.c:904
 do_swap_page+0x79b/0x4280 mm/memory.c:4048
 handle_pte_fault mm/memory.c:5303 [inline]
 __handle_mm_fault+0x1583/0x7250 mm/memory.c:5441
 handle_mm_fault+0x27f/0x770 mm/memory.c:5606
 do_user_addr_fault arch/x86/mm/fault.c:1413 [inline]
 handle_page_fault arch/x86/mm/fault.c:1505 [inline]
 exc_page_fault+0x2a8/0x8e0 arch/x86/mm/fault.c:1563
 asm_exc_page_fault+0x26/0x30 arch/x86/include/asm/idtentry.h:623
RIP: 0010:__get_user_8+0x11/0x20 arch/x86/lib/getuser.S:88
Code: ca c3 cc cc cc cc 0f 1f 00 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 f3 0f 1e fa 48 89 c2 48 c1 fa 3f 48 09 d0 0f 01 cb <48> 8b 10 31 c0 0f 01 ca c3 cc cc cc cc 66 90 90 90 90 90 90 90 90
RSP: 0000:ffffc900033b7d78 EFLAGS: 00050202
RAX: 0000555583eabda8 RBX: ffff888023163350 RCX: ffffc900033b7c03
RDX: 0000000000000000 RSI: ffffffff8bcaca20 RDI: ffffffff8c1eaaa0
RBP: ffffc900033b7ec0 R08: ffffffff8fa7ccef R09: 1ffffffff1f4f99d
R10: dffffc0000000000 R11: fffffbfff1f4f99e R12: ffffc900033b7d80
R13: ffffc900033b7fd8 R14: dffffc0000000000 R15: ffff888023161e00
 rseq_get_rseq_cs kernel/rseq.c:161 [inline]
 rseq_ip_fixup kernel/rseq.c:281 [inline]
 __rseq_handle_notify_resume+0x159/0x14e0 kernel/rseq.c:329
 rseq_handle_notify_resume include/linux/rseq.h:38 [inline]
 resume_user_mode_work include/linux/resume_user_mode.h:62 [inline]
 exit_to_user_mode_loop kernel/entry/common.c:114 [inline]
 exit_to_user_mode_prepare include/linux/entry-common.h:328 [inline]
 irqentry_exit_to_user_mode+0xbc/0x280 kernel/entry/common.c:231
 exc_page_fault+0x585/0x8e0 arch/x86/mm/fault.c:1566
 asm_exc_page_fault+0x26/0x30 arch/x86/include/asm/idtentry.h:623
RIP: 0033:0x7f330b0a4c90
Code: Unable to access opcode bytes at 0x7f330b0a4c66.
RSP: 002b:00007ffef22d96e8 EFLAGS: 00010206
RAX: 000000000000002c RBX: 00007f330bcd4620 RCX: 0000000000000000
RDX: 0000000000001000 RSI: 00007f330bcd4670 RDI: 0000000000000003
RBP: 0000000000000001 R08: 00007ffef22d9734 R09: 000000000000000c
R10: 0000000000000000 R11: 0000000000000293 R12: 0000000000000003
R13: 0000000000000000 R14: 00007f330bcd4670 R15: 0000000000000000
 </TASK>
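
Both backtraces come from the hung-task watchdog: khungtaskd (the cpu 0 trace) found a D-state task that had not scheduled within the timeout, printed the report, and sent NMIs to the remaining CPUs; the cpu 1 trace merely shows what that CPU happened to be doing (direct reclaim) and is not itself part of the hang. A simplified sketch of the per-task check, condensed from kernel/hung_task.c with most detail omitted:

/*
 * Abbreviated from kernel/hung_task.c: a TASK_UNINTERRUPTIBLE task whose
 * context-switch count has not changed since the last scan is considered
 * hung once the timeout elapses; locking and panic handling are omitted.
 */
static void check_hung_task_sketch(struct task_struct *t, unsigned long timeout)
{
	unsigned long switch_count = t->nvcsw + t->nivcsw;

	if (switch_count != t->last_switch_count) {
		t->last_switch_count = switch_count;	/* still scheduling */
		return;
	}
	pr_err("INFO: task %s:%d blocked for more than %lu seconds.\n",
	       t->comm, t->pid, timeout);
	sched_show_task(t);	/* produces the "Call Trace:" above */
}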

Crashes (1):
Time             | Kernel   | Commit       | Syzkaller | Config  | Log         | Report | Syz repro | C repro | VM info | Assets                            | Manager         | Title
2024/05/01 07:56 | upstream | 18daea77cca6 | 3ba885bc  | .config | console log | report |           |         | info    | disk image, vmlinux, kernel image | ci2-upstream-fs | INFO: task hung in __bread_gfp