ci2 starts bisection 2023-04-03 08:10:12.393859993 +0000 UTC m=+9024.487017793
bisecting fixing commit since f9ff5644bcc04221bae56f922122f2b7f5d24d62
building syzkaller on 05494336991504e3c6137b89eeddd492e17af6b6
ensuring issue is reproducible on original commit f9ff5644bcc04221bae56f922122f2b7f5d24d62
testing commit f9ff5644bcc04221bae56f922122f2b7f5d24d62 gcc compiler: Debian clang version 15.0.7, GNU ld (GNU Binutils for Debian) 2.35.2
kernel signature: c6255aa8b1e9fad06c497bdf6b491d8b45b9c83faa7d3be616c03d787ac2ff18
all runs: crashed: general protection fault in __d_instantiate
testing current HEAD 7e364e56293bb98cae1b55fd835f5991c4e96e7d
testing commit 7e364e56293bb98cae1b55fd835f5991c4e96e7d gcc compiler: Debian clang version 15.0.7, GNU ld (GNU Binutils for Debian) 2.35.2
kernel signature: 07613e58931e4961296b88cb5b074894ad2fc7668953e2e8757b44550e9d24f3
run #0: crashed: KASAN: out-of-bounds Write in end_buffer_read_sync
run #1: crashed: KASAN: out-of-bounds Write in end_buffer_read_sync
run #2: crashed: KASAN: out-of-bounds Write in end_buffer_read_sync
run #3: crashed: KASAN: stack-out-of-bounds Write in end_buffer_read_sync
run #4: OK
run #5: OK
run #6: OK
run #7: OK
run #8: OK
run #9: OK
reproducer seems to be flaky
Reproducer flagged being flaky
revisions tested: 2, total time: 45m29.98945048s (build: 31m46.018305475s, test: 12m9.932041447s)
the crash still happens on HEAD
commit msg: Linux 6.3-rc5
crash: KASAN: stack-out-of-bounds Write in end_buffer_read_sync

==================================================================
BUG: KASAN: stack-out-of-bounds in end_buffer_read_sync+0x89/0x90
Write of size 4 at addr ffffc90004f1f820 by task ksoftirqd/0/15

CPU: 0 PID: 15 Comm: ksoftirqd/0 Not tainted 6.3.0-rc5-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/02/2023
Call Trace:
 dump_stack_lvl+0x12e/0x1d0
 print_report+0x163/0x510
 kasan_report+0x108/0x140
 kasan_check_range+0x283/0x290
 end_buffer_read_sync+0x89/0x90
 end_bio_bh_io_sync+0x8d/0xe0
 blk_update_request+0x3a3/0x1000
 blk_mq_end_request+0x39/0x60
 blk_done_softirq+0x83/0x120
 __do_softirq+0x2ab/0x8ea
 run_ksoftirqd+0xa6/0x100
 smpboot_thread_fn+0x534/0x890
 kthread+0x232/0x2b0
 ret_from_fork+0x1f/0x30

The buggy address belongs to the virtual mapping at
 [ffffc90004f18000, ffffc90004f21000) created by:
 copy_process+0x3d5/0x3b60

The buggy address belongs to the physical page:
page:ffffea0000a498c0 refcount:1 mapcount:0 mapping:0000000000000000 index:0x0 pfn:0x29263
memcg:ffff88807140df82
flags: 0xfff00000000000(node=0|zone=1|lastcpupid=0x7ff)
raw: 00fff00000000000 0000000000000000 dead000000000122 0000000000000000
raw: 0000000000000000 0000000000000000 00000001ffffffff ffff88807140df82
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 0, migratetype Unmovable, gfp_mask 0x102dc2(GFP_HIGHUSER|__GFP_NOWARN|__GFP_ZERO), pid 5516, tgid 5516 (syz-executor.0), ts 324223790434, free_ts 323698919431
 get_page_from_freelist+0x31e9/0x3360
 __alloc_pages+0x255/0x670
 __vmalloc_node_range+0x7d1/0x1070
 dup_task_struct+0x575/0x690
 copy_process+0x3d5/0x3b60
 kernel_clone+0x17d/0x5d0
 __x64_sys_clone+0x228/0x290
 do_syscall_64+0x41/0xc0
 entry_SYSCALL_64_after_hwframe+0x63/0xcd
page last free stack trace:
 free_unref_page_prepare+0xe2f/0xe70
 free_unref_page_list+0x596/0x830
 release_pages+0x1a07/0x1bc0
 tlb_flush_mmu+0xe9/0x1e0
 tlb_finish_mmu+0xb6/0x1c0
 exit_mmap+0x267/0x670
 __mmput+0xcb/0x300
 exit_mm+0x1c4/0x280
 do_exit+0x4d0/0x1cf0
 do_group_exit+0x1b9/0x280
 get_signal+0x11d1/0x1280
 arch_do_signal_or_restart+0x7f/0x660
 exit_to_user_mode_loop+0x6a/0xf0
 exit_to_user_mode_prepare+0xb1/0x140
 syscall_exit_to_user_mode+0x54/0x270
 do_syscall_64+0x4d/0xc0

Memory state around the buggy address:
 ffffc90004f1f700: 00 00 00 00 00 00 00 00 00 00 00 00 00 f2 f2 f2
 ffffc90004f1f780: f2 f2 f2 f2 f2 f2 00 00 00 00 00 00 00 00 00 00
>ffffc90004f1f800: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
                               ^
 ffffc90004f1f880: 00 00 00 f2 f2 f2 f2 f2 f2 f2 f2 f2 00 00 00 00
 ffffc90004f1f900: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
==================================================================
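
Note: the report above only shows where the bad write lands (a vmalloc'ed task stack created by copy_process) and which completion path performs it (blk_done_softirq -> end_bio_bh_io_sync -> end_buffer_read_sync); it does not identify the code that set the I/O up. As background, a minimal sketch of the usual way this signature arises is a synchronous read through an on-stack struct buffer_head whose owning stack frame is gone by the time the softirq completion runs. The helper name read_block_sync() below is hypothetical; the buffer-head calls themselves (submit_bh, end_buffer_read_sync, wait_on_buffer) are the standard kernel API. This is an assumption about the general pattern, not the confirmed root cause of this crash.

#include <linux/blkdev.h>
#include <linux/buffer_head.h>
#include <linux/mm.h>

/* Hypothetical helper: synchronously read one device block into @page. */
static int read_block_sync(struct block_device *bdev, sector_t block,
			   struct page *page)
{
	struct buffer_head bh = { };		/* lives on this stack frame */

	bh.b_bdev = bdev;
	bh.b_blocknr = block;
	bh.b_size = bdev_logical_block_size(bdev);
	bh.b_page = page;
	bh.b_data = page_address(page);
	bh.b_end_io = end_buffer_read_sync;	/* runs in softirq context */
	set_buffer_mapped(&bh);

	lock_buffer(&bh);
	get_bh(&bh);
	submit_bh(REQ_OP_READ, &bh);

	/*
	 * The wait below is what keeps this safe.  If the function returned
	 * before the I/O finished (for example via a mishandled error path),
	 * the stack frame holding bh would be reused, yet end_buffer_read_sync()
	 * would still update bh.b_state and drop bh.b_count (a 4-byte write)
	 * from blk_done_softirq context -- exactly the stack-out-of-bounds
	 * write KASAN reports above.
	 */
	wait_on_buffer(&bh);
	return buffer_uptodate(&bh) ? 0 : -EIO;
}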