syzbot


INFO: task hung in __sync_dirty_buffer (2)

Status: auto-obsoleted due to no activity on 2024/10/09 06:51
Subsystems: ntfs3
First crash: 201d, last: 120d
Similar bugs (5)
Kernel | Title | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
linux-4.14 | INFO: task hung in __sync_dirty_buffer [ext4 nilfs2] | C | error | - | 9 | 625d | 1501d | 0/1 | upstream: reported C repro on 2020/09/29 05:34
linux-4.19 | INFO: task hung in __sync_dirty_buffer [ext4 nilfs2] | C | error | - | 25 | 614d | 1489d | 0/1 | upstream: reported C repro on 2020/10/11 09:03
linux-6.1 | INFO: task hung in __sync_dirty_buffer | - | - | - | 2 | 549d | 572d | 0/3 | auto-obsoleted due to no activity on 2023/08/23 09:07
linux-5.15 | INFO: task hung in __sync_dirty_buffer | - | - | - | 10 | 543d | 580d | 0/3 | auto-obsoleted due to no activity on 2023/08/22 15:19
upstream | INFO: task hung in __sync_dirty_buffer [ext4] | C | inconclusive | error | 832 | 515d | 1164d | 22/28 | fixed on 2023/07/01 16:05

Sample crash report:
INFO: task syz-executor.1:10755 blocked for more than 143 seconds.
      Not tainted 6.9.0-syzkaller-10323-g8f6a15f095a6 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor.1  state:D stack:22048 pid:10755 tgid:10752 ppid:9516   flags:0x00004002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5408 [inline]
 __schedule+0xf15/0x5d00 kernel/sched/core.c:6745
 __schedule_loop kernel/sched/core.c:6822 [inline]
 schedule+0xe7/0x350 kernel/sched/core.c:6837
 io_schedule+0xbf/0x130 kernel/sched/core.c:9043
 bit_wait_io+0x15/0xe0 kernel/sched/wait_bit.c:209
 __wait_on_bit+0x62/0x180 kernel/sched/wait_bit.c:49
 out_of_line_wait_on_bit+0xda/0x110 kernel/sched/wait_bit.c:64
 wait_on_bit_io include/linux/wait_bit.h:101 [inline]
 __wait_on_buffer fs/buffer.c:123 [inline]
 wait_on_buffer include/linux/buffer_head.h:415 [inline]
 __sync_dirty_buffer+0x261/0x370 fs/buffer.c:2871
 ntfs_write_bh+0x61c/0x740 fs/ntfs3/fsntfs.c:1481
 mi_write+0xc4/0x1e0 fs/ntfs3/record.c:388
 ni_write_inode+0x10a3/0x2920 fs/ntfs3/frecord.c:3372
 ntfs_set_state+0x3fb/0x6a0 fs/ntfs3/fsntfs.c:991
 ntfs_sync_fs+0x387/0x4f0 fs/ntfs3/super.c:768
 sync_filesystem+0x10d/0x290 fs/sync.c:56
 generic_shutdown_super+0x7e/0x3d0 fs/super.c:621
 kill_block_super+0x3b/0x90 fs/super.c:1676
 ntfs3_kill_sb+0x3f/0xf0 fs/ntfs3/super.c:1798
 deactivate_locked_super+0xbe/0x1a0 fs/super.c:473
 deactivate_super+0xde/0x100 fs/super.c:506
 cleanup_mnt+0x222/0x450 fs/namespace.c:1267
 task_work_run+0x14e/0x250 kernel/task_work.c:180
 exit_task_work include/linux/task_work.h:38 [inline]
 do_exit+0xa7d/0x2c10 kernel/exit.c:877
 do_group_exit+0xd3/0x2a0 kernel/exit.c:1026
 get_signal+0x2616/0x2710 kernel/signal.c:2911
 arch_do_signal_or_restart+0x90/0x7e0 arch/x86/kernel/signal.c:310
 exit_to_user_mode_loop kernel/entry/common.c:111 [inline]
 exit_to_user_mode_prepare include/linux/entry-common.h:328 [inline]
 __syscall_exit_to_user_mode_work kernel/entry/common.c:207 [inline]
 syscall_exit_to_user_mode+0x14a/0x2a0 kernel/entry/common.c:218
 do_syscall_64+0xdc/0x260 arch/x86/entry/common.c:89
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f934e27cee9
RSP: 002b:00007f934f0470c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000101
RAX: fffffffffffffff4 RBX: 00007f934e3ac050 RCX: 00007f934e27cee9
RDX: 00000000000026e1 RSI: 0000000020000180 RDI: ffffffffffffff9c
RBP: 00007f934e2c949e R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000006e R14: 00007f934e3ac050 R15: 00007ffd91a41578
 </TASK>
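
The trace shows the final stage of an unmount: ntfs3's sync path writes back a dirty MFT record buffer and then sleeps, uninterruptibly, until the buffer's I/O completes. With the filesystem image on a loop device, that completion depends on the loop worker making progress. A minimal sketch of the wait, paraphrased from the fs/buffer.c code cited in the trace (simplified, not verbatim):

	int __sync_dirty_buffer(struct buffer_head *bh, blk_opf_t op_flags)
	{
		int ret = 0;

		lock_buffer(bh);
		if (test_clear_buffer_dirty(bh)) {
			get_bh(bh);
			bh->b_end_io = end_buffer_write_sync;
			submit_bh(REQ_OP_WRITE | op_flags, bh);
			wait_on_buffer(bh);	/* task 10755 is parked here */
			if (!buffer_uptodate(bh))
				ret = -EIO;
		} else {
			unlock_buffer(bh);
		}
		return ret;
	}

wait_on_buffer() resolves to the bit_wait_io/out_of_line_wait_on_bit chain at the top of the trace, so the sleep is not killable; only the hung-task watchdog notices it.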

Showing all locks held in the system:
4 locks held by kworker/u8:1/12:
 #0: ffff888075976948 ((wq_completion)loop1){+.+.}-{0:0}, at: process_one_work+0x12bf/0x1b60 kernel/workqueue.c:3206
 #1: ffffc90000117d80 ((work_completion)(&lo->rootcg_work)){+.+.}-{0:0}, at: process_one_work+0x957/0x1b60 kernel/workqueue.c:3207
 #2: ffff8880162f2420 (sb_writers#6){.+.+}-{0:0}, at: lo_write_bvec drivers/block/loop.c:246 [inline]
 #2: ffff8880162f2420 (sb_writers#6){.+.+}-{0:0}, at: lo_write_simple drivers/block/loop.c:267 [inline]
 #2: ffff8880162f2420 (sb_writers#6){.+.+}-{0:0}, at: do_req_filebacked drivers/block/loop.c:491 [inline]
 #2: ffff8880162f2420 (sb_writers#6){.+.+}-{0:0}, at: loop_handle_cmd drivers/block/loop.c:1907 [inline]
 #2: ffff8880162f2420 (sb_writers#6){.+.+}-{0:0}, at: loop_process_work+0x1577/0x20c0 drivers/block/loop.c:1942
 #3: ffff88802cc66540 (&sb->s_type->i_mutex_key#12){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:791 [inline]
 #3: ffff88802cc66540 (&sb->s_type->i_mutex_key#12){+.+.}-{3:3}, at: shmem_file_write_iter+0x8c/0x140 mm/shmem.c:2909
1 lock held by khungtaskd/30:
 #0: ffffffff8dbb1760 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:329 [inline]
 #0: ffffffff8dbb1760 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:781 [inline]
 #0: ffffffff8dbb1760 (rcu_read_lock){....}-{1:2}, at: debug_show_all_locks+0x75/0x340 kernel/locking/lockdep.c:6614
1 lock held by kswapd0/89:
4 locks held by kswapd1/90:
2 locks held by kworker/u8:7/2396:
4 locks held by kworker/u8:9/2869:
4 locks held by kworker/u8:10/2874:
 #0: ffff888075976948 ((wq_completion)loop1){+.+.}-{0:0}, at: process_one_work+0x12bf/0x1b60 kernel/workqueue.c:3206
 #1: ffffc9000aa17d80 ((work_completion)(&worker->work)){+.+.}-{0:0}, at: process_one_work+0x957/0x1b60 kernel/workqueue.c:3207
 #2: ffff8880162f2420 (sb_writers#6){.+.+}-{0:0}, at: lo_write_bvec drivers/block/loop.c:246 [inline]
 #2: ffff8880162f2420 (sb_writers#6){.+.+}-{0:0}, at: lo_write_simple drivers/block/loop.c:267 [inline]
 #2: ffff8880162f2420 (sb_writers#6){.+.+}-{0:0}, at: do_req_filebacked drivers/block/loop.c:491 [inline]
 #2: ffff8880162f2420 (sb_writers#6){.+.+}-{0:0}, at: loop_handle_cmd drivers/block/loop.c:1907 [inline]
 #2: ffff8880162f2420 (sb_writers#6){.+.+}-{0:0}, at: loop_process_work+0x1577/0x20c0 drivers/block/loop.c:1942
 #3: ffff88802cc66540 (&sb->s_type->i_mutex_key#12){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:791 [inline]
 #3: ffff88802cc66540 (&sb->s_type->i_mutex_key#12){+.+.}-{3:3}, at: shmem_file_write_iter+0x8c/0x140 mm/shmem.c:2909
4 locks held by kworker/u8:11/2890:
 #0: ffff888075fe5148 ((wq_completion)loop2){+.+.}-{0:0}, at: process_one_work+0x12bf/0x1b60 kernel/workqueue.c:3206
 #1: ffffc9000aa27d80 ((work_completion)(&worker->work)){+.+.}-{0:0}, at: process_one_work+0x957/0x1b60 kernel/workqueue.c:3207
 #2: ffff8880162f2420 (sb_writers#6){.+.+}-{0:0}, at: lo_write_bvec drivers/block/loop.c:246 [inline]
 #2: ffff8880162f2420 (sb_writers#6){.+.+}-{0:0}, at: lo_write_simple drivers/block/loop.c:267 [inline]
 #2: ffff8880162f2420 (sb_writers#6){.+.+}-{0:0}, at: do_req_filebacked drivers/block/loop.c:491 [inline]
 #2: ffff8880162f2420 (sb_writers#6){.+.+}-{0:0}, at: loop_handle_cmd drivers/block/loop.c:1907 [inline]
 #2: ffff8880162f2420 (sb_writers#6){.+.+}-{0:0}, at: loop_process_work+0x1577/0x20c0 drivers/block/loop.c:1942
 #3: ffff888023c123a0 (&sb->s_type->i_mutex_key#12){+.+.}-{3:3}, at: inode_lock include/linux/fs.h:791 [inline]
 #3: ffff888023c123a0 (&sb->s_type->i_mutex_key#12){+.+.}-{3:3}, at: shmem_file_write_iter+0x8c/0x140 mm/shmem.c:2909
1 lock held by jbd2/sda1-8/4498:
2 locks held by dhcpcd/4748:
 #0: ffff88802ce80b18 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_trylock include/linux/mmap_lock.h:163 [inline]
 #0: ffff88802ce80b18 (&mm->mmap_lock){++++}-{3:3}, at: get_mmap_lock_carefully mm/memory.c:5715 [inline]
 #0: ffff88802ce80b18 (&mm->mmap_lock){++++}-{3:3}, at: lock_mm_and_find_vma+0x35/0x6a0 mm/memory.c:5775
 #1: ffffffff8dd33860 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:3856 [inline]
 #1: ffffffff8dd33860 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:3881 [inline]
 #1: ffffffff8dd33860 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath mm/page_alloc.c:4287 [inline]
 #1: ffffffff8dd33860 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_noprof+0xaae/0x2460 mm/page_alloc.c:4673
2 locks held by getty/4842:
 #0: ffff88802ace90a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x24/0x80 drivers/tty/tty_ldisc.c:243
 #1: ffffc90002f062f0 (&ldata->atomic_read_lock){+.+.}-{3:3}, at: n_tty_read+0xfc8/0x1490 drivers/tty/n_tty.c:2201
2 locks held by syz-fuzzer/5081:
 #0: ffff88807e3769a0 (mapping.invalidate_lock){++++}-{3:3}, at: filemap_invalidate_lock_shared include/linux/fs.h:846 [inline]
 #0: ffff88807e3769a0 (mapping.invalidate_lock){++++}-{3:3}, at: filemap_fault+0x5c2/0x2610 mm/filemap.c:3320
 #1: ffffffff8dd33860 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:3856 [inline]
 #1: ffffffff8dd33860 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:3881 [inline]
 #1: ffffffff8dd33860 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath mm/page_alloc.c:4287 [inline]
 #1: ffffffff8dd33860 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_noprof+0xaae/0x2460 mm/page_alloc.c:4673
2 locks held by syz-fuzzer/5082:
 #0: ffff888076568b18 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_trylock include/linux/mmap_lock.h:163 [inline]
 #0: ffff888076568b18 (&mm->mmap_lock){++++}-{3:3}, at: get_mmap_lock_carefully mm/memory.c:5715 [inline]
 #0: ffff888076568b18 (&mm->mmap_lock){++++}-{3:3}, at: lock_mm_and_find_vma+0x35/0x6a0 mm/memory.c:5775
 #1: ffffffff8dd33860 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:3856 [inline]
 #1: ffffffff8dd33860 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:3881 [inline]
 #1: ffffffff8dd33860 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath mm/page_alloc.c:4287 [inline]
 #1: ffffffff8dd33860 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_noprof+0xaae/0x2460 mm/page_alloc.c:4673
2 locks held by syz-fuzzer/5095:
 #0: ffff88807e3769a0 (mapping.invalidate_lock){++++}-{3:3}, at: filemap_invalidate_lock_shared include/linux/fs.h:846 [inline]
 #0: ffff88807e3769a0 (mapping.invalidate_lock){++++}-{3:3}, at: filemap_fault+0x5c2/0x2610 mm/filemap.c:3320
 #1: ffffffff8dd33860 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:3856 [inline]
 #1: ffffffff8dd33860 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:3881 [inline]
 #1: ffffffff8dd33860 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath mm/page_alloc.c:4287 [inline]
 #1: ffffffff8dd33860 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_noprof+0xaae/0x2460 mm/page_alloc.c:4673
5 locks held by kworker/u9:3/5118:
4 locks held by kworker/u9:5/5124:
2 locks held by kworker/1:4/5162:
 #0: ffff888015480948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x12bf/0x1b60 kernel/workqueue.c:3206
 #1: ffffc9000480fd80 ((work_completion)(&(&vi->refill)->work)){+.+.}-{0:0}, at: process_one_work+0x957/0x1b60 kernel/workqueue.c:3207
2 locks held by kworker/1:6/5195:
2 locks held by syz-executor.1/10755:
 #0: ffff8880248020e0 (&type->s_umount_key#85){++++}-{3:3}, at: __super_lock fs/super.c:56 [inline]
 #0: ffff8880248020e0 (&type->s_umount_key#85){++++}-{3:3}, at: __super_lock_excl fs/super.c:71 [inline]
 #0: ffff8880248020e0 (&type->s_umount_key#85){++++}-{3:3}, at: deactivate_super+0xd6/0x100 fs/super.c:505
 #1: ffff88805c0bf700 (&ni->ni_lock#2){+.+.}-{3:3}, at: ni_trylock fs/ntfs3/ntfs_fs.h:1143 [inline]
 #1: ffff88805c0bf700 (&ni->ni_lock#2){+.+.}-{3:3}, at: ni_write_inode+0x24a/0x2920 fs/ntfs3/frecord.c:3265
3 locks held by syz-executor.1/10899:
 #0: ffff888069244ff0 (&tsk->futex_exit_mutex){+.+.}-{3:3}, at: futex_cleanup_begin kernel/futex/core.c:1091 [inline]
 #0: ffff888069244ff0 (&tsk->futex_exit_mutex){+.+.}-{3:3}, at: futex_exit_release+0x2a/0x220 kernel/futex/core.c:1143
 #1: ffff88801c297398 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_trylock include/linux/mmap_lock.h:163 [inline]
 #1: ffff88801c297398 (&mm->mmap_lock){++++}-{3:3}, at: get_mmap_lock_carefully mm/memory.c:5715 [inline]
 #1: ffff88801c297398 (&mm->mmap_lock){++++}-{3:3}, at: lock_mm_and_find_vma+0x35/0x6a0 mm/memory.c:5775
 #2: ffffffff8dd33860 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:3856 [inline]
 #2: ffffffff8dd33860 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:3881 [inline]
 #2: ffffffff8dd33860 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath mm/page_alloc.c:4287 [inline]
 #2: ffffffff8dd33860 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_noprof+0xaae/0x2460 mm/page_alloc.c:4673
2 locks held by syz-executor.2/10906:
 #0: ffff8881961940e0 (&type->s_umount_key#85){++++}-{3:3}, at: __super_lock fs/super.c:56 [inline]
 #0: ffff8881961940e0 (&type->s_umount_key#85){++++}-{3:3}, at: __super_lock_excl fs/super.c:71 [inline]
 #0: ffff8881961940e0 (&type->s_umount_key#85){++++}-{3:3}, at: deactivate_super+0xd6/0x100 fs/super.c:505
 #1: ffff88805c0b8fc0 (&ni->ni_lock#2){+.+.}-{3:3}, at: ni_trylock fs/ntfs3/ntfs_fs.h:1143 [inline]
 #1: ffff88805c0b8fc0 (&ni->ni_lock#2){+.+.}-{3:3}, at: ni_write_inode+0x24a/0x2920 fs/ntfs3/frecord.c:3265
2 locks held by syz-executor.2/11065:
 #0: ffff8880251f1e18 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_trylock include/linux/mmap_lock.h:163 [inline]
 #0: ffff8880251f1e18 (&mm->mmap_lock){++++}-{3:3}, at: get_mmap_lock_carefully mm/memory.c:5715 [inline]
 #0: ffff8880251f1e18 (&mm->mmap_lock){++++}-{3:3}, at: lock_mm_and_find_vma+0x35/0x6a0 mm/memory.c:5775
 #1: ffffffff8dd33860 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:3856 [inline]
 #1: ffffffff8dd33860 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:3881 [inline]
 #1: ffffffff8dd33860 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath mm/page_alloc.c:4287 [inline]
 #1: ffffffff8dd33860 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_noprof+0xaae/0x2460 mm/page_alloc.c:4673
2 locks held by syz-executor.1/11107:
 #0: ffff888077312798 (&mm->mmap_lock){++++}-{3:3}, at: mmap_read_trylock include/linux/mmap_lock.h:163 [inline]
 #0: ffff888077312798 (&mm->mmap_lock){++++}-{3:3}, at: get_mmap_lock_carefully mm/memory.c:5715 [inline]
 #0: ffff888077312798 (&mm->mmap_lock){++++}-{3:3}, at: lock_mm_and_find_vma+0x35/0x6a0 mm/memory.c:5775
 #1: ffffffff8dd33860 (fs_reclaim){+.+.}-{0:0}, at: __perform_reclaim mm/page_alloc.c:3856 [inline]
 #1: ffffffff8dd33860 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_direct_reclaim mm/page_alloc.c:3881 [inline]
 #1: ffffffff8dd33860 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_slowpath mm/page_alloc.c:4287 [inline]
 #1: ffffffff8dd33860 (fs_reclaim){+.+.}-{0:0}, at: __alloc_pages_noprof+0xaae/0x2460 mm/page_alloc.c:4673
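
The lock dump suggests why the buffer write never completes: the loop1 workers that would service it (kworker/u8:1, u8:9, u8:10) are themselves inside the backing file's write path, holding sb_writers#6 and the shmem inode lock, while the many fs_reclaim holders show heavy direct-reclaim pressure across the system. A sketch of the loop write path that acquires those locks, paraphrased from the drivers/block/loop.c lines cited above (simplified, not verbatim):

	static int lo_write_bvec(struct file *file, struct bio_vec *bvec, loff_t *ppos)
	{
		struct iov_iter i;
		ssize_t bw;

		iov_iter_bvec(&i, ITER_SOURCE, bvec, 1, bvec->bv_len);

		file_start_write(file);			/* lockdep: sb_writers#6 */
		bw = vfs_iter_write(file, &i, ppos, 0);	/* shmem takes i_mutex_key#12 */
		file_end_write(file);

		if (likely(bw == bvec->bv_len))
			return 0;
		return bw < 0 ? (int)bw : -EIO;
	}

The unmounting task thus waits on loop-device I/O that cannot finish until these writers make progress.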

=============================================

NMI backtrace for cpu 0
CPU: 0 PID: 30 Comm: khungtaskd Not tainted 6.9.0-syzkaller-10323-g8f6a15f095a6 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/02/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:114
 nmi_cpu_backtrace+0x27b/0x390 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x29c/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:162 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:223 [inline]
 watchdog+0xf86/0x1240 kernel/hung_task.c:379
 kthread+0x2c1/0x3a0 kernel/kthread.c:389
 ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 5162 Comm: kworker/1:4 Not tainted 6.9.0-syzkaller-10323-g8f6a15f095a6 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/02/2024
Workqueue: events refill_work
RIP: 0010:preempt_count_add+0x1f/0x150 kernel/sched/core.c:5873
Code: 90 90 90 90 90 90 90 90 90 90 90 f3 0f 1e fa 48 c7 c0 a0 fb b4 94 55 48 ba 00 00 00 00 00 fc ff df 48 89 c1 53 83 e0 07 89 fb <48> c1 e9 03 83 c0 03 65 01 3d db f8 a5 7e 0f b6 14 11 38 d0 7c 08
RSP: 0000:ffffc9000480f3b8 EFLAGS: 00000046
RAX: 0000000000000000 RBX: 0000000000000001 RCX: ffffffff94b4fba0
RDX: dffffc0000000000 RSI: ffffffff8b8fee40 RDI: 0000000000000001
RBP: 0000000000000246 R08: 0000000000000000 R09: fffffbfff1fc88ba
R10: ffffffff8fe445d7 R11: 0000000000000004 R12: 0000000000000000
R13: ffff888011ba8000 R14: 0000000000000004 R15: ffff88823fff7000
FS:  0000000000000000(0000) GS:ffff8880b9300000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f934f0479a0 CR3: 000000000d97a000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <NMI>
 </NMI>
 <TASK>
 __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:109 [inline]
 _raw_spin_lock_irqsave+0x1e/0x60 kernel/locking/spinlock.c:162
 lru_gen_rotate_memcg+0x65/0xc20 mm/vmscan.c:4127
 shrink_many mm/vmscan.c:4830 [inline]
 lru_gen_shrink_node mm/vmscan.c:4951 [inline]
 shrink_node+0x245b/0x39b0 mm/vmscan.c:5910
 shrink_zones mm/vmscan.c:6168 [inline]
 do_try_to_free_pages+0x35f/0x1940 mm/vmscan.c:6230
 try_to_free_pages+0x2b6/0x720 mm/vmscan.c:6465
 __perform_reclaim mm/page_alloc.c:3859 [inline]
 __alloc_pages_direct_reclaim mm/page_alloc.c:3881 [inline]
 __alloc_pages_slowpath mm/page_alloc.c:4287 [inline]
 __alloc_pages_noprof+0xb38/0x2460 mm/page_alloc.c:4673
 alloc_pages_mpol_noprof+0x275/0x610 mm/mempolicy.c:2265
 skb_page_frag_refill+0x25a/0x350 net/core/sock.c:2929
 virtnet_rq_alloc+0x33/0x8e0 drivers/net/virtio_net.c:882
 add_recvbuf_mergeable drivers/net/virtio_net.c:2110 [inline]
 try_fill_recv+0x783/0x1840 drivers/net/virtio_net.c:2155
 refill_work+0x135/0x200 drivers/net/virtio_net.c:2233
 process_one_work+0x9fb/0x1b60 kernel/workqueue.c:3231
 process_scheduled_works kernel/workqueue.c:3312 [inline]
 worker_thread+0x6c8/0xf70 kernel/workqueue.c:3393
 kthread+0x2c1/0x3a0 kernel/kthread.c:389
 ret_from_fork+0x45/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
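
The NMI backtraces above were requested by khungtaskd, which flagged task 10755 after its context-switch count stayed frozen in uninterruptible sleep for more than 143 seconds. A sketch of the detection logic behind the watchdog frame in the khungtaskd trace, paraphrased from kernel/hung_task.c (simplified, not verbatim):

	static void check_hung_task(struct task_struct *t, unsigned long timeout)
	{
		unsigned long switch_count = t->nvcsw + t->nivcsw;

		/* The caller only passes tasks in TASK_UNINTERRUPTIBLE (D state). */
		if (switch_count != t->last_switch_count) {
			t->last_switch_count = switch_count;	/* it ran; not hung */
			return;
		}

		/* Same switch count across a full timeout interval: report the
		 * task, dump its stack, and optionally trigger the all-CPU
		 * backtraces shown above. */
		pr_err("INFO: task %s:%d blocked for more than %ld seconds.\n",
		       t->comm, t->pid, timeout);
	}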

Crashes (6):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2024/05/21 15:06 | upstream | 8f6a15f095a6 | c0f1611a | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci-upstream-kasan-gce-selinux-root | INFO: task hung in __sync_dirty_buffer
2024/05/19 09:17 | upstream | 0450d2083be6 | c0f1611a | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci-upstream-kasan-gce-root | INFO: task hung in __sync_dirty_buffer
2024/04/29 05:52 | upstream | e67572cd2204 | 07b455f9 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci-upstream-kasan-gce-selinux-root | INFO: task hung in __sync_dirty_buffer
2024/04/24 20:40 | upstream | 9d1ddab261f3 | 8bdc0f22 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci-upstream-kasan-gce-root | INFO: task hung in __sync_dirty_buffer
2024/04/21 21:37 | upstream | 3b68086599f8 | af24b050 | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | INFO: task hung in __sync_dirty_buffer
2024/07/11 06:41 | git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci | ef445d1539dd | c699c2eb | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci-upstream-gce-arm64 | INFO: task hung in __sync_dirty_buffer