syzbot


INFO: rcu detected stall in kjournald2 (2)

Status: upstream: reported C repro on 2024/10/03 07:16
Subsystems: mm
Reported-by: syzbot+14c6ac6811273526cfa5@syzkaller.appspotmail.com
First crash: 227d, last: 9h45m
Cause bisection: failed (error log, bisect log)
  
Discussions (1)
Title | Replies (including bot) | Last reply
[syzbot] [mm?] INFO: rcu detected stall in kjournald2 (2) | 0 (1) | 2024/10/03 07:16
Similar bugs (7)
Kernel | Title | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
upstream | INFO: rcu detected stall in kjournald2 mm | | | | 1 | 934d | 934d | 0/28 | auto-closed as invalid on 2022/07/30 15:32
linux-6.1 | INFO: rcu detected stall in kjournald2 | | | | 1 | 358d | 358d | 0/3 | auto-obsoleted due to no activity on 2024/03/07 10:54
upstream | BUG: soft lockup in kjournald2 (2) mm | | | | 6 | 1074d | 1160d | 0/28 | closed as dup on 2021/09/17 07:37
android-5-15 | BUG: soft lockup in kjournald2 | | | | 1 | 159d | 159d | 0/2 | auto-obsoleted due to no activity on 2024/09/12 19:40
upstream | BUG: soft lockup in kjournald2 mm | | | | 28 | 1183d | 1334d | 0/28 | closed as dup on 2021/03/27 07:12
android-6-1 | BUG: soft lockup in kjournald2 (2) | | | | 1 | 150d | 150d | 0/2 | auto-obsoleted due to no activity on 2024/09/21 17:37
android-6-1 | BUG: soft lockup in kjournald2 | | | | 1 | 293d | 293d | 0/2 | auto-obsoleted due to no activity on 2024/05/02 01:34

Sample crash report:
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
rcu: 	Tasks blocked on level-0 rcu_node (CPUs 0-1): P5244/1:b..l P4648/1:b..l
rcu: 	(detected by 0, t=10503 jiffies, g=5405, q=180 ncpus=2)
task:jbd2/sda1-8     state:R  running task     stack:24912 pid:4648  tgid:4648  ppid:2      flags:0x00004000
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5315 [inline]
 __schedule+0x1895/0x4b30 kernel/sched/core.c:6675
 preempt_schedule_irq+0xfb/0x1c0 kernel/sched/core.c:6997
 irqentry_exit+0x5e/0x90 kernel/entry/common.c:354
 asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:702
RIP: 0010:lock_is_held_type+0x13b/0x190
Code: 75 44 48 c7 04 24 00 00 00 00 9c 8f 04 24 f7 04 24 00 02 00 00 75 4c 41 f7 c4 00 02 00 00 74 01 fb 65 48 8b 04 25 28 00 00 00 <48> 3b 44 24 08 75 42 89 d8 48 83 c4 10 5b 41 5c 41 5d 41 5e 41 5f
RSP: 0018:ffffc9000e7ef418 EFLAGS: 00000206
RAX: bf2c67f414204300 RBX: 0000000000000001 RCX: 0000000080000000
RDX: 0000000000000000 RSI: ffffffff8c0adc40 RDI: ffffffff8c60fba0
RBP: 0000000000000001 R08: ffffffff820b564e R09: 1ffffffff2858b00
R10: dffffc0000000000 R11: fffffbfff2858b01 R12: 0000000000000246
R13: ffff88803138bc00 R14: 00000000ffffffff R15: ffffffff8e937de0
 lookup_page_ext mm/page_ext.c:254 [inline]
 page_ext_get+0x192/0x2a0 mm/page_ext.c:526
 __page_table_check_zero+0xb1/0x350 mm/page_table_check.c:148
 page_table_check_free include/linux/page_table_check.h:41 [inline]
 free_pages_prepare mm/page_alloc.c:1109 [inline]
 free_unref_page+0xd0f/0xf20 mm/page_alloc.c:2638
 discard_slab mm/slub.c:2677 [inline]
 __put_partials+0xeb/0x130 mm/slub.c:3145
 put_cpu_partial+0x17c/0x250 mm/slub.c:3220
 __slab_free+0x2ea/0x3d0 mm/slub.c:4449
 qlink_free mm/kasan/quarantine.c:163 [inline]
 qlist_free_all+0x9a/0x140 mm/kasan/quarantine.c:179
 kasan_quarantine_reduce+0x14f/0x170 mm/kasan/quarantine.c:286
 __kasan_slab_alloc+0x23/0x80 mm/kasan/common.c:329
 kasan_slab_alloc include/linux/kasan.h:247 [inline]
 slab_post_alloc_hook mm/slub.c:4085 [inline]
 slab_alloc_node mm/slub.c:4134 [inline]
 kmem_cache_alloc_noprof+0x135/0x2a0 mm/slub.c:4141
 alloc_buffer_head+0x2a/0x290 fs/buffer.c:3020
 jbd2_journal_write_metadata_buffer+0xc2/0xa60 fs/jbd2/journal.c:349
 jbd2_journal_commit_transaction+0x1b36/0x67e0 fs/jbd2/commit.c:663
 kjournald2+0x41c/0x7b0 fs/jbd2/journal.c:201
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
task:syz-executor155 state:R  running task     stack:19888 pid:5244  tgid:5244  ppid:5242   flags:0x00000002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5315 [inline]
 __schedule+0x1895/0x4b30 kernel/sched/core.c:6675
 preempt_schedule_irq+0xfb/0x1c0 kernel/sched/core.c:6997
 irqentry_exit+0x5e/0x90 kernel/entry/common.c:354
 asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:702
RIP: 0010:lock_acquire+0x2a9/0x550 kernel/locking/lockdep.c:5829
Code: 66 43 c7 44 25 15 00 00 43 c6 44 25 17 00 65 48 8b 04 25 28 00 00 00 48 3b 84 24 00 01 00 00 0f 85 95 02 00 00 48 8d 65 d8 5b <41> 5c 41 5d 41 5e 41 5f 5d c3 cc cc cc cc 65 8b 05 a2 42 92 7e 85
RSP: 0018:ffffc90003ed7638 EFLAGS: 00000246
RAX: d39743979daa1500 RBX: ffffea0001af2840 RCX: d39743979daa1500
RDX: dffffc0000000000 RSI: ffffffff8c0adc40 RDI: ffffffff8c60fba0
RBP: ffffc90003ed7658 R08: ffffffff942c5807 R09: 1ffffffff2858b00
R10: dffffc0000000000 R11: fffffbfff2858b01 R12: 1ffff920007daea8
R13: dffffc0000000000 R14: ffffc90003ed7560 R15: 0000000000000246
 rcu_lock_acquire include/linux/rcupdate.h:337 [inline]
 rcu_read_lock include/linux/rcupdate.h:849 [inline]
 page_ext_get+0x3d/0x2a0 mm/page_ext.c:525
 __reset_page_owner+0x30/0x430 mm/page_owner.c:290
 reset_page_owner include/linux/page_owner.h:25 [inline]
 free_pages_prepare mm/page_alloc.c:1108 [inline]
 free_unref_page+0xcfb/0xf20 mm/page_alloc.c:2638
 discard_slab mm/slub.c:2677 [inline]
 __put_partials+0xeb/0x130 mm/slub.c:3145
 put_cpu_partial+0x17c/0x250 mm/slub.c:3220
 __slab_free+0x2ea/0x3d0 mm/slub.c:4449
 qlink_free mm/kasan/quarantine.c:163 [inline]
 qlist_free_all+0x9a/0x140 mm/kasan/quarantine.c:179
 kasan_quarantine_reduce+0x14f/0x170 mm/kasan/quarantine.c:286
 __kasan_kmalloc+0x23/0xb0 mm/kasan/common.c:385
 kasan_kmalloc include/linux/kasan.h:257 [inline]
 __do_kmalloc_node mm/slub.c:4264 [inline]
 __kmalloc_noprof+0x1fc/0x400 mm/slub.c:4276
 kmalloc_noprof include/linux/slab.h:882 [inline]
 tomoyo_realpath_from_path+0xcf/0x5e0 security/tomoyo/realpath.c:251
 tomoyo_get_realpath security/tomoyo/file.c:151 [inline]
 tomoyo_path_perm+0x2b7/0x740 security/tomoyo/file.c:822
 security_inode_getattr+0x130/0x330 security/security.c:2371
 vfs_getattr+0x45/0x430 fs/stat.c:204
 vfs_fstat fs/stat.c:229 [inline]
 vfs_fstatat+0xe4/0x190 fs/stat.c:338
 __do_sys_newfstatat fs/stat.c:505 [inline]
 __se_sys_newfstatat fs/stat.c:499 [inline]
 __x64_sys_newfstatat+0x11d/0x1a0 fs/stat.c:499
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f6cf87c2b2a
RSP: 002b:00007ffd51ccbb38 EFLAGS: 00000206 ORIG_RAX: 0000000000000106
RAX: ffffffffffffffda RBX: 0000000000000003 RCX: 00007f6cf87c2b2a
RDX: 00007ffd51ccbb40 RSI: 00007f6cf882c32a RDI: 0000000000000003
RBP: 00007ffd51ccbb40 R08: 0000000000000000 R09: 7fffffffffffffff
R10: 0000000000001000 R11: 0000000000000206 R12: 00007ffd51cccd20
R13: 00007ffd51cccd20 R14: 00007ffd51cccd60 R15: 0000000000000005
 </TASK>
rcu: rcu_preempt kthread starved for 9024 jiffies! g5405 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=1
rcu: 	Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
rcu: RCU grace-period kthread stack dump:
task:rcu_preempt     state:R  running task     stack:25520 pid:17    tgid:17    ppid:2      flags:0x00004000
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5315 [inline]
 __schedule+0x1895/0x4b30 kernel/sched/core.c:6675
 __schedule_loop kernel/sched/core.c:6752 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6767
 schedule_timeout+0x1be/0x310 kernel/time/timer.c:2615
 rcu_gp_fqs_loop+0x2df/0x1330 kernel/rcu/tree.c:2045
 rcu_gp_kthread+0xa7/0x3b0 kernel/rcu/tree.c:2247
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
rcu: Stack dump where RCU GP kthread last ran:
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 24 Comm: ksoftirqd/1 Not tainted 6.12.0-rc1-syzkaller-00349-g8f602276d390 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
RIP: 0010:native_safe_halt arch/x86/include/asm/irqflags.h:48 [inline]
RIP: 0010:arch_safe_halt arch/x86/include/asm/irqflags.h:106 [inline]
RIP: 0010:kvm_wait+0x250/0x2c0 arch/x86/kernel/kvm.c:1060
Code: 3b 45 0f b6 f6 44 89 ff 44 89 f6 e8 da 2f 54 00 e8 b5 f3 5b 00 45 38 f7 75 15 66 90 e8 49 2f 54 00 0f 00 2d 52 8d c7 0a fb f4 <e9> 50 fe ff ff e8 36 2f 54 00 fb e9 45 fe ff ff 89 d9 80 e1 07 38
RSP: 0018:ffffc900001e64e0 EFLAGS: 00000246
RAX: ffffffff8140a727 RBX: ffff8880277ed3e8 RCX: ffff88801d2f0000
RDX: 0000000000000100 RSI: ffffffff8c0acac0 RDI: ffffffff8c60fba0
RBP: ffffc900001e65b0 R08: ffffffff942c58df R09: 1ffffffff2858b1b
R10: dffffc0000000000 R11: fffffbfff2858b1c R12: 1ffff9200003cca0
R13: dffffc0000000000 R14: 0000000000000003 R15: 0000000000000003
FS:  0000000000000000(0000) GS:ffff8880b8700000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007ffd51ccbc0c CR3: 0000000030758000 CR4: 00000000003526f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <NMI>
 </NMI>
 <TASK>
 pv_wait arch/x86/include/asm/paravirt.h:596 [inline]
 pv_wait_head_or_lock kernel/locking/qspinlock_paravirt.h:466 [inline]
 __pv_queued_spin_lock_slowpath+0x8d0/0xdb0 kernel/locking/qspinlock.c:508
 pv_queued_spin_lock_slowpath arch/x86/include/asm/paravirt.h:584 [inline]
 queued_spin_lock_slowpath+0x42/0x50 arch/x86/include/asm/qspinlock.h:51
 queued_spin_lock include/asm-generic/qspinlock.h:114 [inline]
 do_raw_spin_lock+0x272/0x370 kernel/locking/spinlock_debug.c:116
 spin_lock include/linux/spinlock.h:351 [inline]
 br_multicast_add_group net/bridge/br_multicast.c:1568 [inline]
 br_ip6_multicast_add_group net/bridge/br_multicast.c:1621 [inline]
 br_ip6_multicast_mld2_report net/bridge/br_multicast.c:2976 [inline]
 br_multicast_ipv6_rcv net/bridge/br_multicast.c:3914 [inline]
 br_multicast_rcv+0x3f4c/0x8180 net/bridge/br_multicast.c:3972
 br_handle_frame_finish+0x9b6/0x1fe0 net/bridge/br_input.c:153
 br_nf_hook_thresh+0x472/0x590
 br_nf_pre_routing_finish_ipv6+0xaa0/0xdd0
 NF_HOOK include/linux/netfilter.h:314 [inline]
 br_nf_pre_routing_ipv6+0x379/0x770 net/bridge/br_netfilter_ipv6.c:184
 nf_hook_entry_hookfn include/linux/netfilter.h:154 [inline]
 nf_hook_bridge_pre net/bridge/br_input.c:277 [inline]
 br_handle_frame+0x9fd/0x1530 net/bridge/br_input.c:424
 __netif_receive_skb_core+0x13e8/0x4570 net/core/dev.c:5560
 __netif_receive_skb_one_core net/core/dev.c:5664 [inline]
 __netif_receive_skb+0x12f/0x650 net/core/dev.c:5779
 process_backlog+0x662/0x15b0 net/core/dev.c:6111
 __napi_poll+0xcb/0x490 net/core/dev.c:6775
 napi_poll net/core/dev.c:6844 [inline]
 net_rx_action+0x89b/0x1240 net/core/dev.c:6966
 handle_softirqs+0x2c5/0x980 kernel/softirq.c:554
 run_ksoftirqd+0xca/0x130 kernel/softirq.c:927
 smpboot_thread_fn+0x544/0xa30 kernel/smpboot.c:164
 kthread+0x2f0/0x390 kernel/kthread.c:389
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:147
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
INFO: NMI handler (nmi_cpu_backtrace_handler) took too long to run: 2.324 msecs
net_ratelimit: 18753 callbacks suppressed
bridge0: received packet on veth0_to_bridge with own address as source address (addr:5a:0e:83:c6:5a:9f, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:5a:0e:83:c6:5a:9f, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
net_ratelimit: 27566 callbacks suppressed
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:5a:0e:83:c6:5a:9f, vlan:0)

Crashes (32):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2024/10/06 20:36 upstream 8f602276d390 d7906eff .config console log report syz / log C [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: rcu detected stall in kjournald2
2024/09/29 07:03 net d505d3593b52 ba29ff75 .config console log report syz / log C [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce INFO: rcu detected stall in kjournald2
2024/11/20 17:39 upstream bf9aa14fc523 4fca1650 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: rcu detected stall in kjournald2
2024/11/19 14:42 upstream c6d64479d609 571351cb .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: rcu detected stall in kjournald2
2024/11/06 21:14 upstream 7758b206117d df3dc63b .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: rcu detected stall in kjournald2
2024/11/06 06:49 upstream 2e1b3cc9d7f7 3a465482 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: rcu detected stall in kjournald2
2024/11/03 21:27 upstream a33ab3f94f51 f00eed24 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: rcu detected stall in kjournald2
2024/11/02 03:58 upstream 6c52d4da1c74 f00eed24 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: rcu detected stall in kjournald2
2024/11/01 15:30 upstream 6c52d4da1c74 f00eed24 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: rcu detected stall in kjournald2
2024/10/26 22:40 upstream 850925a8133c 65e8686b .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: rcu detected stall in kjournald2
2024/10/20 08:42 upstream 715ca9dd687f cd6fc0a3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: rcu detected stall in kjournald2
2024/10/11 04:04 upstream 1d227fcc7222 8fbfc0c8 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: rcu detected stall in kjournald2
2024/10/02 13:39 upstream e32cde8d2bd7 ea2b66a6 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce INFO: rcu detected stall in kjournald2
2024/07/05 19:33 upstream 661e504db04c 2a40360c .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: rcu detected stall in kjournald2
2024/06/30 06:11 upstream 8282d5af7be8 757f06b1 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: rcu detected stall in kjournald2
2024/06/28 16:36 upstream 5bbd9b249880 b62c7d46 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: rcu detected stall in kjournald2
2024/06/17 20:23 upstream 2ccbdf43d5e7 1f11cfd7 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: rcu detected stall in kjournald2
2024/05/26 23:55 upstream 6fbf71854e2d a10a183e .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: rcu detected stall in kjournald2
2024/04/07 21:59 upstream fe46a7dd189e ca620dd8 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: rcu detected stall in kjournald2
2024/09/26 13:02 upstream aa486552a110 0d19f247 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-386 INFO: rcu detected stall in kjournald2
2024/11/12 15:27 net 073d89808c06 75bb1b32 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce INFO: rcu detected stall in kjournald2
2024/11/09 12:41 net 55d42a0c3f9c 6b856513 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce INFO: rcu detected stall in kjournald2
2024/10/02 17:23 net c4a14f6d9d17 a4c7fd36 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce INFO: rcu detected stall in kjournald2
2024/11/20 21:12 net-next dd7207838d38 4fca1650 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: rcu detected stall in kjournald2
2024/11/17 19:59 net-next 38f83a57aa8e cfe3a04a .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: rcu detected stall in kjournald2
2024/11/08 11:15 net-next 2696e451dfb0 179b040e .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: rcu detected stall in kjournald2
2024/11/06 01:47 net-next ccb35037c48a 3a465482 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: rcu detected stall in kjournald2
2024/11/02 10:12 net-next dbb9a7ef3478 f00eed24 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: rcu detected stall in kjournald2
2024/10/28 19:16 net-next 6d858708d465 9efb3cc7 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: rcu detected stall in kjournald2
2024/10/09 05:59 https://git.kernel.org/pub/scm/linux/kernel/git/gregkh/usb.git usb-testing 4a9fe2a8ac53 402f1df0 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-usb INFO: rcu detected stall in kjournald2
2024/06/22 08:17 linux-next f76698bd9a8c edc5149a .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: rcu detected stall in kjournald2
2024/05/04 02:21 linux-next 9221b2819b8a 610f2a54 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: rcu detected stall in kjournald2