syzbot


WARNING in rmqueue

Status: fixed on 2023/06/08 14:41
Subsystems: erofs
Reported-by: syzbot+aafb3f37cfeb6534c4ac@syzkaller.appspotmail.com
Fix commit: cc4efd3dd2ac erofs: stop parsing non-compact HEAD index if clusterofs is invalid (see the sketch below)
First crash: 476d, last: 352d
Cause bisection: failed (error log, bisect log)
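
The fix title describes the check conceptually: a non-compact (full) HEAD lcluster index carries an on-disk clusterofs that must fall inside one logical cluster (1 << lclusterbits bytes); anything larger means the image is corrupted and parsing should stop. The following is a minimal standalone sketch of that bound check, not the actual cc4efd3dd2ac patch; the function name is made up for illustration, and EFSCORRUPTED is shown with its Linux value (EUCLEAN, 117).

    /*
     * Hedged, standalone sketch of the kind of check the fix title
     * describes: reject a HEAD lcluster whose clusterofs does not fit
     * inside one logical cluster of size (1 << lclusterbits) bytes.
     * Not the actual kernel patch; names here are illustrative.
     */
    #include <stdint.h>
    #include <stdio.h>

    #define EFSCORRUPTED 117    /* Linux EFSCORRUPTED == EUCLEAN */

    /* hypothetical stand-in for the on-disk clusterofs validation */
    static int check_head_clusterofs(uint16_t clusterofs, unsigned int lclusterbits)
    {
        if (clusterofs >= (1u << lclusterbits))
            return -EFSCORRUPTED;   /* corrupted/crafted image: stop parsing */
        return 0;
    }

    int main(void)
    {
        /* 4KiB logical clusters: offsets 0..4095 are valid */
        printf("clusterofs=100   -> %d\n", check_head_clusterofs(100, 12));
        printf("clusterofs=42000 -> %d\n", check_head_clusterofs(42000, 12));
        return 0;
    }

Built as an ordinary user-space program, this prints 0 for the in-range offset and -117 for the out-of-range one.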
  
Discussions (2)
Title | Replies (including bot) | Last reply
[PATCH] erofs: stop parsing non-compact HEAD index if clusterofs is invalid | 2 (2) | 2023/04/16 14:33
[syzbot] WARNING in rmqueue | 3 (8) | 2023/04/11 08:13
Last patch testing requests (4)
Created | Duration | User | Patch | Repo | Result
2023/04/11 07:45 | 26m | hsiangkao@linux.alibaba.com | - | https://git.kernel.org/pub/scm/linux/kernel/git/xiang/erofs.git/ dev-test | OK log
2023/04/11 07:43 | 0m | hsiangkao@linux.alibaba.com | - | https://git.kernel.org/pub/scm/linux/kernel/git/xiang/erofs.git/ dev-next | error OK
2023/04/10 09:03 | 22m | hsiangkao@linux.alibaba.com | - | https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/ v6.3-rc6 | report log
2022/11/29 09:41 | 19m | hdanton@sina.com | patch | https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git b7b275e60bcd | report log
Fix bisection attempts (3)
Created | Duration | User | Patch | Repo | Result
2023/04/01 02:14 | 44m | bisect fix | - | upstream | job log (0) log
2023/03/02 00:56 | 47m | bisect fix | - | upstream | job log (0) log
2023/01/29 18:00 | 1h14m | bisect fix | - | upstream | job log (0) log

Sample crash report:
------------[ cut here ]------------
WARNING: CPU: 0 PID: 48 at mm/page_alloc.c:3837 __count_numa_events include/linux/vmstat.h:249 [inline]
WARNING: CPU: 0 PID: 48 at mm/page_alloc.c:3837 zone_statistics mm/page_alloc.c:3692 [inline]
WARNING: CPU: 0 PID: 48 at mm/page_alloc.c:3837 rmqueue_buddy mm/page_alloc.c:3728 [inline]
WARNING: CPU: 0 PID: 48 at mm/page_alloc.c:3837 rmqueue+0x1d6b/0x1ed0 mm/page_alloc.c:3853
Modules linked in:
CPU: 0 PID: 48 Comm: kworker/u5:0 Not tainted 6.1.0-rc7-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/26/2022
Workqueue: erofs_unzipd z_erofs_decompressqueue_work
RIP: 0010:rmqueue+0x1d6b/0x1ed0 mm/page_alloc.c:3837
Code: 48 8b 02 65 48 ff 40 20 49 83 f6 05 42 80 3c 2b 00 74 08 4c 89 e7 e8 a4 44 0b 00 49 8b 04 24 65 4a ff 44 f0 10 e9 2a fe ff ff <0f> 0b e9 29 e3 ff ff 48 89 df be 08 00 00 00 e8 31 46 0b 00 f0 41
RSP: 0018:ffffc90000b97260 EFLAGS: 00010202
RAX: f301f204f1f1f1f1 RBX: ffff88813fffae00 RCX: 000000000000adc2
RDX: 1ffff92000172e70 RSI: 1ffff92000172e70 RDI: ffff88813fffae00
RBP: ffffc90000b97420 R08: 0000000000000901 R09: 0000000000000009
R10: ffffed1027fff5b3 R11: 1ffff11027fff5b2 R12: ffff88813fffc310
R13: dffffc0000000000 R14: 0000000000000000 R15: ffff88813fffa700
FS:  0000000000000000(0000) GS:ffff8880b9800000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f7bec722f10 CR3: 000000004a430000 CR4: 00000000003506f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <TASK>
 get_page_from_freelist+0x4b6/0x7c0 mm/page_alloc.c:4288
 __alloc_pages+0x259/0x560 mm/page_alloc.c:5558
 vm_area_alloc_pages mm/vmalloc.c:2975 [inline]
 __vmalloc_area_node mm/vmalloc.c:3043 [inline]
 __vmalloc_node_range+0x8f4/0x1290 mm/vmalloc.c:3213
 kvmalloc_node+0x13e/0x180 mm/util.c:606
 kvmalloc include/linux/slab.h:706 [inline]
 kvmalloc_array include/linux/slab.h:724 [inline]
 kvcalloc include/linux/slab.h:729 [inline]
 z_erofs_decompress_pcluster fs/erofs/zdata.c:1049 [inline]
 z_erofs_decompress_queue+0x693/0x2c30 fs/erofs/zdata.c:1155
 z_erofs_decompressqueue_work+0x95/0xe0 fs/erofs/zdata.c:1167
 process_one_work+0x877/0xdb0 kernel/workqueue.c:2289
 worker_thread+0xb14/0x1330 kernel/workqueue.c:2436
 kthread+0x266/0x300 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:306
 </TASK>
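
The warning itself is raised by the page allocator, not by erofs. In kernels of this vintage, rmqueue() begins with WARN_ON_ONCE((gfp_flags & __GFP_NOFAIL) && (order > 1)), and mm/page_alloc.c:3837 appears to be that check: an allocation may not both promise that it cannot fail and request more than an order-1 block. The trace shows how erofs reaches it: z_erofs_decompress_pcluster() sizes an array from image-controlled pcluster metadata and allocates it via kvcalloc() with __GFP_NOFAIL; a crafted image inflates that count, kvmalloc falls back to vmalloc, and the huge-vmalloc path can then request high-order pages while the NOFAIL flag is still set. A standalone sketch of the warned-about condition follows; the flag value and function name are illustrative, not the kernel's.

    /*
     * Hedged reconstruction of the v6.1 sanity check near the top of
     * rmqueue(): an allocation must not combine __GFP_NOFAIL ("may not
     * fail") with order > 1.  Flag value below is for this sketch only;
     * see the kernel's gfp definitions for the real __GFP_NOFAIL.
     */
    #include <stdbool.h>
    #include <stdio.h>

    #define SKETCH_GFP_NOFAIL 0x8000u   /* illustrative flag bit */

    static bool rmqueue_would_warn(unsigned int gfp_flags, unsigned int order)
    {
        return (gfp_flags & SKETCH_GFP_NOFAIL) && order > 1;
    }

    int main(void)
    {
        /* a sane pcluster needs only small, order-0 backing pages: no warning */
        printf("order-0 + NOFAIL -> warn=%d\n",
               (int)rmqueue_would_warn(SKETCH_GFP_NOFAIL, 0));
        /*
         * a bogus, image-controlled element count can push kvcalloc() into
         * the huge-vmalloc path, which may ask for order-9 pages while
         * __GFP_NOFAIL is still set: this is the warned-about case
         */
        printf("order-9 + NOFAIL -> warn=%d\n",
               (int)rmqueue_would_warn(SKETCH_GFP_NOFAIL, 9));
        return 0;
    }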

Crashes (2):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2022/11/28 18:30 | upstream | b7b275e60bcd | 247de55b | .config | strace log | report | syz | C | - | [disk image] [vmlinux] [kernel image] [mounted in repro] | ci2-upstream-fs | WARNING in rmqueue
2022/11/28 18:13 | upstream | b7b275e60bcd | 247de55b | .config | console log | report | - | - | info | [disk image] [vmlinux] [kernel image] | ci2-upstream-fs | WARNING in rmqueue