warning: checkpointing journal with EXT4_IOC_CHECKPOINT_FLAG_ZEROOUT can be slow

=====================================================
WARNING: HARDIRQ-safe -> HARDIRQ-unsafe lock order detected
5.15.153-syzkaller #0 Not tainted
-----------------------------------------------------
syz-executor.2/5276 [HC0[0]:SC0[2]:HE0:SE0] is trying to acquire:
ffff888019b95820 (&htab->buckets[i].lock){+...}-{2:2}, at: sock_hash_delete_elem+0xac/0x2f0 net/core/sock_map.c:937

and this task is already holding:
ffff8880b9b39f18 (&pool->lock){-.-.}-{2:2}, at: __queue_work+0x56d/0xd00

which would create a new lock dependency:
 (&pool->lock){-.-.}-{2:2} -> (&htab->buckets[i].lock){+...}-{2:2}

but this new dependency connects a HARDIRQ-irq-safe lock:
 (&pool->lock){-.-.}-{2:2}

... which became HARDIRQ-irq-safe at:
  lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
  __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
  _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:154
  __queue_work+0x56d/0xd00
  queue_work_on+0x14b/0x250 kernel/workqueue.c:1559
  hrtimer_switch_to_hres kernel/time/hrtimer.c:747 [inline]
  hrtimer_run_queues+0x14b/0x450 kernel/time/hrtimer.c:1912
  run_local_timers kernel/time/timer.c:1762 [inline]
  update_process_times+0xca/0x200 kernel/time/timer.c:1787
  tick_periodic+0x197/0x210 kernel/time/tick-common.c:100
  tick_handle_periodic+0x46/0x150 kernel/time/tick-common.c:112
  local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1085 [inline]
  __sysvec_apic_timer_interrupt+0x139/0x470 arch/x86/kernel/apic/apic.c:1102
  sysvec_apic_timer_interrupt+0x8c/0xb0 arch/x86/kernel/apic/apic.c:1096
  asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:638
  lock_acquire+0x252/0x4f0 kernel/locking/lockdep.c:5627
  down_write+0x97/0x170 kernel/locking/rwsem.c:1541
  inode_lock include/linux/fs.h:789 [inline]
  start_creating+0xb9/0x200 fs/tracefs/inode.c:422
  tracefs_create_file+0x9c/0x5d0 fs/tracefs/inode.c:493
  trace_create_file+0x2e/0x60 kernel/trace/trace.c:8991
  event_create_dir+0xa7b/0xdf0 kernel/trace/trace_events.c:2454
  __trace_early_add_event_dirs+0x6e/0x1c0 kernel/trace/trace_events.c:3491
  early_event_add_tracer+0x52/0x70 kernel/trace/trace_events.c:3664
  event_trace_init+0x100/0x180 kernel/trace/trace_events.c:3824
  tracer_init_tracefs+0x153/0x2a2 kernel/trace/trace.c:9897
  do_one_initcall+0x22b/0x7a0 init/main.c:1300
  do_initcall_level+0x157/0x207 init/main.c:1373
  do_initcalls+0x49/0x86 init/main.c:1389
  kernel_init_freeable+0x425/0x5b5 init/main.c:1613
  kernel_init+0x19/0x290 init/main.c:1504
  ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298

to a HARDIRQ-irq-unsafe lock:
 (&htab->buckets[i].lock){+...}-{2:2}

... which became HARDIRQ-irq-unsafe at:
...
  lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
  __raw_spin_lock_bh include/linux/spinlock_api_smp.h:135 [inline]
  _raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
  sock_hash_free+0x14c/0x780 net/core/sock_map.c:1154
  process_one_work+0x8a1/0x10c0 kernel/workqueue.c:2310
  worker_thread+0xaca/0x1280 kernel/workqueue.c:2457
  kthread+0x3f6/0x4f0 kernel/kthread.c:319
  ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298

other info that might help us debug this:

 Possible interrupt unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&htab->buckets[i].lock);
                               local_irq_disable();
                               lock(&pool->lock);
                               lock(&htab->buckets[i].lock);
  <Interrupt>
    lock(&pool->lock);

 *** DEADLOCK ***

5 locks held by syz-executor.2/5276:
 #0: ffff88814b864170 (&journal->j_barrier){+.+.}-{3:3}, at: jbd2_journal_lock_updates+0x2aa/0x370 fs/jbd2/transaction.c:905
 #1: ffff88814b8643f8 (&journal->j_checkpoint_mutex){+.+.}-{3:3}, at: jbd2_journal_flush+0x31c/0xc90 fs/jbd2/journal.c:2478
 #2: ffffffff8c91f720 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x5/0x30 include/linux/rcupdate.h:311
 #3: ffff8880b9b39f18 (&pool->lock){-.-.}-{2:2}, at: __queue_work+0x56d/0xd00
 #4: ffffffff8c91f720 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire+0x5/0x30 include/linux/rcupdate.h:311

the dependencies between HARDIRQ-irq-safe lock and the holding lock:
-> (&pool->lock){-.-.}-{2:2} {
   IN-HARDIRQ-W at:
     lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
     __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
     _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:154
     __queue_work+0x56d/0xd00
     queue_work_on+0x14b/0x250 kernel/workqueue.c:1559
     hrtimer_switch_to_hres kernel/time/hrtimer.c:747 [inline]
     hrtimer_run_queues+0x14b/0x450 kernel/time/hrtimer.c:1912
     run_local_timers kernel/time/timer.c:1762 [inline]
     update_process_times+0xca/0x200 kernel/time/timer.c:1787
     tick_periodic+0x197/0x210 kernel/time/tick-common.c:100
     tick_handle_periodic+0x46/0x150 kernel/time/tick-common.c:112
     local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1085 [inline]
     __sysvec_apic_timer_interrupt+0x139/0x470 arch/x86/kernel/apic/apic.c:1102
     sysvec_apic_timer_interrupt+0x8c/0xb0 arch/x86/kernel/apic/apic.c:1096
     asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:638
     lock_acquire+0x252/0x4f0 kernel/locking/lockdep.c:5627
     down_write+0x97/0x170 kernel/locking/rwsem.c:1541
     inode_lock include/linux/fs.h:789 [inline]
     start_creating+0xb9/0x200 fs/tracefs/inode.c:422
     tracefs_create_file+0x9c/0x5d0 fs/tracefs/inode.c:493
     trace_create_file+0x2e/0x60 kernel/trace/trace.c:8991
     event_create_dir+0xa7b/0xdf0 kernel/trace/trace_events.c:2454
     __trace_early_add_event_dirs+0x6e/0x1c0 kernel/trace/trace_events.c:3491
     early_event_add_tracer+0x52/0x70 kernel/trace/trace_events.c:3664
     event_trace_init+0x100/0x180 kernel/trace/trace_events.c:3824
     tracer_init_tracefs+0x153/0x2a2 kernel/trace/trace.c:9897
     do_one_initcall+0x22b/0x7a0 init/main.c:1300
     do_initcall_level+0x157/0x207 init/main.c:1373
     do_initcalls+0x49/0x86 init/main.c:1389
     kernel_init_freeable+0x425/0x5b5 init/main.c:1613
     kernel_init+0x19/0x290 init/main.c:1504
     ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298
   IN-SOFTIRQ-W at:
     lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
     __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
     _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:154
     __queue_work+0x56d/0xd00
     call_timer_fn+0x16d/0x560 kernel/time/timer.c:1421
     expire_timers kernel/time/timer.c:1461 [inline]
     __run_timers+0x6a8/0x890 kernel/time/timer.c:1737
     __do_softirq+0x3b3/0x93a kernel/softirq.c:558
     invoke_softirq kernel/softirq.c:432 [inline]
     __irq_exit_rcu+0x155/0x240 kernel/softirq.c:637
     irq_exit_rcu+0x5/0x20 kernel/softirq.c:649
     sysvec_apic_timer_interrupt+0x91/0xb0 arch/x86/kernel/apic/apic.c:1096
     asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:638
     native_safe_halt arch/x86/include/asm/irqflags.h:51 [inline]
     arch_safe_halt arch/x86/include/asm/irqflags.h:89 [inline]
     default_idle+0xb/0x10 arch/x86/kernel/process.c:717
     default_idle_call+0x81/0xc0 kernel/sched/idle.c:112
     cpuidle_idle_call kernel/sched/idle.c:194 [inline]
     do_idle+0x271/0x670 kernel/sched/idle.c:306
     cpu_startup_entry+0x14/0x20 kernel/sched/idle.c:403
     start_kernel+0x48c/0x535 init/main.c:1138
     secondary_startup_64_no_verify+0xb1/0xbb
   INITIAL USE at:
     lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
     __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
     _raw_spin_lock_irqsave+0xd1/0x120 kernel/locking/spinlock.c:162
     pwq_adjust_max_active+0x14e/0x550 kernel/workqueue.c:3783
     link_pwq kernel/workqueue.c:3849 [inline]
     alloc_and_link_pwqs kernel/workqueue.c:4243 [inline]
     alloc_workqueue+0xbb4/0x13f0 kernel/workqueue.c:4365
     workqueue_init_early+0x7b2/0x96c kernel/workqueue.c:6099
     start_kernel+0x1fa/0x535 init/main.c:1025
     secondary_startup_64_no_verify+0xb1/0xbb
 }
 ... key at: [] init_worker_pool.__key+0x0/0x20

the dependencies between the lock to be acquired and HARDIRQ-irq-unsafe lock:
-> (&htab->buckets[i].lock){+...}-{2:2} {
   HARDIRQ-ON-W at:
     lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
     __raw_spin_lock_bh include/linux/spinlock_api_smp.h:135 [inline]
     _raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
     sock_hash_free+0x14c/0x780 net/core/sock_map.c:1154
     process_one_work+0x8a1/0x10c0 kernel/workqueue.c:2310
     worker_thread+0xaca/0x1280 kernel/workqueue.c:2457
     kthread+0x3f6/0x4f0 kernel/kthread.c:319
     ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298
   INITIAL USE at:
     lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
     __raw_spin_lock_bh include/linux/spinlock_api_smp.h:135 [inline]
     _raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
     sock_hash_free+0x14c/0x780 net/core/sock_map.c:1154
     process_one_work+0x8a1/0x10c0 kernel/workqueue.c:2310
     worker_thread+0xaca/0x1280 kernel/workqueue.c:2457
     kthread+0x3f6/0x4f0 kernel/kthread.c:319
     ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:298
 }
 ... key at: [] sock_hash_alloc.__key+0x0/0x20
 ... acquired at:
   lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
   __raw_spin_lock_bh include/linux/spinlock_api_smp.h:135 [inline]
   _raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
   sock_hash_delete_elem+0xac/0x2f0 net/core/sock_map.c:937
   bpf_prog_2c29ac5cdc6b1842+0x3a/0x1bc
   bpf_dispatcher_nop_func include/linux/bpf.h:785 [inline]
   __bpf_prog_run include/linux/filter.h:628 [inline]
   bpf_prog_run include/linux/filter.h:635 [inline]
   __bpf_trace_run kernel/trace/bpf_trace.c:1880 [inline]
   bpf_trace_run4+0x1ea/0x390 kernel/trace/bpf_trace.c:1919
   __bpf_trace_mm_page_alloc+0xba/0xe0 include/trace/events/kmem.h:201
   trace_mm_page_alloc include/trace/events/kmem.h:201 [inline]
   __alloc_pages+0x6e0/0x700 mm/page_alloc.c:5443
   stack_depot_save+0x319/0x440 lib/stackdepot.c:302
   save_stack+0x104/0x1e0 mm/page_owner.c:120
   __set_page_owner+0x37/0x300 mm/page_owner.c:181
   prep_new_page mm/page_alloc.c:2426 [inline]
   get_page_from_freelist+0x322a/0x33c0 mm/page_alloc.c:4159
   __alloc_pages+0x272/0x700 mm/page_alloc.c:5421
   stack_depot_save+0x319/0x440 lib/stackdepot.c:302
   kasan_save_stack+0x4d/0x60 mm/kasan/common.c:40
   kasan_record_aux_stack+0xba/0x100 mm/kasan/generic.c:348
   insert_work+0x54/0x3e0 kernel/workqueue.c:1366
   __queue_work+0x963/0xd00 kernel/workqueue.c:1532
   mod_delayed_work_on+0x101/0x250 kernel/workqueue.c:1753
   kblockd_mod_delayed_work_on+0x25/0x30 block/blk-core.c:1636
   blk_mq_run_hw_queue+0x1d5/0x3c0 block/blk-mq.c:1641
   blk_mq_sched_insert_requests+0x2cd/0x570 block/blk-mq-sched.c:519
   blk_mq_flush_plug_list+0x5de/0x6b0 block/blk-mq.c:1965
   blk_flush_plug_list+0x44b/0x490 block/blk-core.c:1734
   blk_schedule_flush_plug include/linux/blkdev.h:1235 [inline]
   io_schedule_prepare kernel/sched/core.c:8452 [inline]
   io_schedule_timeout+0x8a/0x120 kernel/sched/core.c:8471
   do_wait_for_common+0x2d9/0x480 kernel/sched/completion.c:85
   __wait_for_common kernel/sched/completion.c:106 [inline]
   wait_for_common_io kernel/sched/completion.c:123 [inline]
   wait_for_completion_io_timeout+0x46/0x60 kernel/sched/completion.c:191
   submit_bio_wait+0x145/0x200 block/bio.c:1234
   blkdev_issue_zeroout+0x305/0x470 block/blk-lib.c:420
   __jbd2_journal_erase fs/jbd2/journal.c:1837 [inline]
   jbd2_journal_flush+0xa05/0xc90 fs/jbd2/journal.c:2496
   ext4_ioctl_checkpoint fs/ext4/ioctl.c:849 [inline]
   __ext4_ioctl fs/ext4/ioctl.c:1267 [inline]
   ext4_ioctl+0x3249/0x5b80 fs/ext4/ioctl.c:1276
   vfs_ioctl fs/ioctl.c:51 [inline]
   __do_sys_ioctl fs/ioctl.c:874 [inline]
   __se_sys_ioctl+0xf1/0x160 fs/ioctl.c:860
   do_syscall_x64 arch/x86/entry/common.c:50 [inline]
   do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
   entry_SYSCALL_64_after_hwframe+0x61/0xcb

stack backtrace:
CPU: 1 PID: 5276 Comm: syz-executor.2 Not tainted 5.15.153-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/27/2024
Call Trace:
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x1e3/0x2cb lib/dump_stack.c:106
 print_bad_irq_dependency kernel/locking/lockdep.c:2567 [inline]
 check_irq_usage kernel/locking/lockdep.c:2806 [inline]
 check_prev_add kernel/locking/lockdep.c:3057 [inline]
 check_prevs_add kernel/locking/lockdep.c:3172 [inline]
 validate_chain+0x4d01/0x5930 kernel/locking/lockdep.c:3788
 __lock_acquire+0x1295/0x1ff0 kernel/locking/lockdep.c:5012
 lock_acquire+0x1db/0x4f0 kernel/locking/lockdep.c:5623
 __raw_spin_lock_bh include/linux/spinlock_api_smp.h:135 [inline]
 _raw_spin_lock_bh+0x31/0x40 kernel/locking/spinlock.c:178
 sock_hash_delete_elem+0xac/0x2f0 net/core/sock_map.c:937
 bpf_prog_2c29ac5cdc6b1842+0x3a/0x1bc
 bpf_dispatcher_nop_func include/linux/bpf.h:785 [inline]
 __bpf_prog_run include/linux/filter.h:628 [inline]
 bpf_prog_run include/linux/filter.h:635 [inline]
 __bpf_trace_run kernel/trace/bpf_trace.c:1880 [inline]
 bpf_trace_run4+0x1ea/0x390 kernel/trace/bpf_trace.c:1919
 __bpf_trace_mm_page_alloc+0xba/0xe0 include/trace/events/kmem.h:201
 trace_mm_page_alloc include/trace/events/kmem.h:201 [inline]
 __alloc_pages+0x6e0/0x700 mm/page_alloc.c:5443
 stack_depot_save+0x319/0x440 lib/stackdepot.c:302
 save_stack+0x104/0x1e0 mm/page_owner.c:120
 __set_page_owner+0x37/0x300 mm/page_owner.c:181
 prep_new_page mm/page_alloc.c:2426 [inline]
 get_page_from_freelist+0x322a/0x33c0 mm/page_alloc.c:4159
 __alloc_pages+0x272/0x700 mm/page_alloc.c:5421
 stack_depot_save+0x319/0x440 lib/stackdepot.c:302
 kasan_save_stack+0x4d/0x60 mm/kasan/common.c:40
 kasan_record_aux_stack+0xba/0x100 mm/kasan/generic.c:348
 insert_work+0x54/0x3e0 kernel/workqueue.c:1366
 __queue_work+0x963/0xd00 kernel/workqueue.c:1532
 mod_delayed_work_on+0x101/0x250 kernel/workqueue.c:1753
 kblockd_mod_delayed_work_on+0x25/0x30 block/blk-core.c:1636
 blk_mq_run_hw_queue+0x1d5/0x3c0 block/blk-mq.c:1641
 blk_mq_sched_insert_requests+0x2cd/0x570 block/blk-mq-sched.c:519
 blk_mq_flush_plug_list+0x5de/0x6b0 block/blk-mq.c:1965
 blk_flush_plug_list+0x44b/0x490 block/blk-core.c:1734
 blk_schedule_flush_plug include/linux/blkdev.h:1235 [inline]
 io_schedule_prepare kernel/sched/core.c:8452 [inline]
 io_schedule_timeout+0x8a/0x120 kernel/sched/core.c:8471
 do_wait_for_common+0x2d9/0x480 kernel/sched/completion.c:85
 __wait_for_common kernel/sched/completion.c:106 [inline]
 wait_for_common_io kernel/sched/completion.c:123 [inline]
 wait_for_completion_io_timeout+0x46/0x60 kernel/sched/completion.c:191
 submit_bio_wait+0x145/0x200 block/bio.c:1234
 blkdev_issue_zeroout+0x305/0x470 block/blk-lib.c:420
 __jbd2_journal_erase fs/jbd2/journal.c:1837 [inline]
 jbd2_journal_flush+0xa05/0xc90 fs/jbd2/journal.c:2496
 ext4_ioctl_checkpoint fs/ext4/ioctl.c:849 [inline]
 __ext4_ioctl fs/ext4/ioctl.c:1267 [inline]
 ext4_ioctl+0x3249/0x5b80 fs/ext4/ioctl.c:1276
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:874 [inline]
 __se_sys_ioctl+0xf1/0x160 fs/ioctl.c:860
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x61/0xcb
RIP: 0033:0x7f765a3bce69
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 e1 20 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f765892f0c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007f765a4eaf80 RCX: 00007f765a3bce69
RDX: 0000000020000280 RSI: 000000004004662b RDI: 0000000000000003
RBP: 00007f765a40947a R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000000b R14: 00007f765a4eaf80 R15: 00007fff709a8e48
------------[ cut here ]------------
raw_local_irq_restore() called with IRQs enabled
WARNING: CPU: 1 PID: 5276 at kernel/locking/irqflag-debug.c:10 warn_bogus_irq_restore+0x1d/0x20 kernel/locking/irqflag-debug.c:10
Modules linked in:
CPU: 1 PID: 5276 Comm: syz-executor.2 Not tainted 5.15.153-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/27/2024
RIP: 0010:warn_bogus_irq_restore+0x1d/0x20 kernel/locking/irqflag-debug.c:10
Code: 24 48 c7 c7 a0 d1 89 8a e8 6c d1 fe ff 80 3d fc 56 b4 03 00 74 01 c3 c6 05 f2 56 b4 03 01 48 c7 c7 80 0c 8b 8a e8 13 ec 2f f7 <0f> 0b c3 41 56 53 48 83 ec 10 65 48 8b 04 25 28 00 00 00 48 89 44
RSP: 0018:ffffc90002ef7098 EFLAGS: 00010246
RAX: eb7e72cc5db57100 RBX: 0000000000000200 RCX: 0000000000040000
RDX: ffffc9000d653000 RSI: 000000000003ffff RDI: 0000000000040000
RBP: ffffc90002ef7170 R08: ffffffff8166661c R09: fffff520005ded55
R10: 0000000000000000 R11: dffffc0000000001 R12: 0000000000000200
R13: 0000000000000246 R14: 0000000000000000 R15: ffffc90002ef7100
FS: 00007f765892f6c0(0000) GS:ffff8880b9b00000(0000) knlGS:0000000000000000
CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000001b3202a000 CR3: 0000000061283000 CR4: 00000000003506e0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 mod_delayed_work_on+0x23d/0x250 kernel/workqueue.c:1754
 kblockd_mod_delayed_work_on+0x25/0x30 block/blk-core.c:1636
 blk_mq_run_hw_queue+0x1d5/0x3c0 block/blk-mq.c:1641
 blk_mq_sched_insert_requests+0x2cd/0x570 block/blk-mq-sched.c:519
 blk_mq_flush_plug_list+0x5de/0x6b0 block/blk-mq.c:1965
 blk_flush_plug_list+0x44b/0x490 block/blk-core.c:1734
 blk_schedule_flush_plug include/linux/blkdev.h:1235 [inline]
 io_schedule_prepare kernel/sched/core.c:8452 [inline]
 io_schedule_timeout+0x8a/0x120 kernel/sched/core.c:8471
 do_wait_for_common+0x2d9/0x480 kernel/sched/completion.c:85
 __wait_for_common kernel/sched/completion.c:106 [inline]
 wait_for_common_io kernel/sched/completion.c:123 [inline]
 wait_for_completion_io_timeout+0x46/0x60 kernel/sched/completion.c:191
 submit_bio_wait+0x145/0x200 block/bio.c:1234
 blkdev_issue_zeroout+0x305/0x470 block/blk-lib.c:420
 __jbd2_journal_erase fs/jbd2/journal.c:1837 [inline]
 jbd2_journal_flush+0xa05/0xc90 fs/jbd2/journal.c:2496
 ext4_ioctl_checkpoint fs/ext4/ioctl.c:849 [inline]
 __ext4_ioctl fs/ext4/ioctl.c:1267 [inline]
 ext4_ioctl+0x3249/0x5b80 fs/ext4/ioctl.c:1276
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:874 [inline]
 __se_sys_ioctl+0xf1/0x160 fs/ioctl.c:860
 do_syscall_x64 arch/x86/entry/common.c:50 [inline]
 do_syscall_64+0x3d/0xb0 arch/x86/entry/common.c:80
 entry_SYSCALL_64_after_hwframe+0x61/0xcb
RIP: 0033:0x7f765a3bce69
Code: 28 00 00 00 75 05 48 83 c4 28 c3 e8 e1 20 00 00 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 b0 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f765892f0c8 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007f765a4eaf80 RCX: 00007f765a3bce69
RDX: 0000000020000280 RSI: 000000004004662b RDI: 0000000000000003
RBP: 00007f765a40947a R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 000000000000000b R14: 00007f765a4eaf80 R15: 00007fff709a8e48
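Reading the report as a whole: the workqueue pool lock (&pool->lock) is HARDIRQ-safe because it is taken from the timer interrupt via __queue_work(), while the sockhash bucket lock (&htab->buckets[i].lock) is HARDIRQ-unsafe because it is only ever taken with spin_lock_bh(), which leaves hard IRQs enabled. Here a BPF program attached to the mm_page_alloc tracepoint called sock_hash_delete_elem() while __queue_work() already held pool->lock, adding the dependency pool->lock -> bucket lock and closing the cycle lockdep draws in the CPU0/CPU1 scenario. The following is a minimal sketch of that scenario in kernel-style C; the names bucket_lock, pool_lock, cpu0_path() and cpu1_path() are hypothetical stand-ins for the real symbols, and the code only restates the inversion, it does not reproduce the bug.

/*
 * Illustrative sketch only (hypothetical symbols): bucket_lock stands in
 * for &htab->buckets[i].lock, pool_lock for the workqueue's &pool->lock.
 */
#include <linux/spinlock.h>

static DEFINE_SPINLOCK(bucket_lock);	/* HARDIRQ-unsafe: only _bh users */
static DEFINE_SPINLOCK(pool_lock);	/* HARDIRQ-safe: also taken in hardirq */

/*
 * CPU1 in the scenario: pool_lock is held with IRQs disabled (as in
 * __queue_work()), then a tracepoint-attached program deletes a sockhash
 * element and needs bucket_lock, creating pool_lock -> bucket_lock.
 * Taking a _bh lock in an IRQs-off region is itself dubious; the sketch
 * mirrors the report rather than recommending the pattern.
 */
static void cpu1_path(void)
{
	unsigned long flags;

	spin_lock_irqsave(&pool_lock, flags);
	spin_lock_bh(&bucket_lock);		/* as in sock_hash_delete_elem() */
	spin_unlock_bh(&bucket_lock);
	spin_unlock_irqrestore(&pool_lock, flags);
}

/*
 * CPU0 in the scenario: bucket_lock is held with hard IRQs still enabled
 * (spin_lock_bh() only masks softirqs), so a timer interrupt can arrive
 * on this CPU and try to take pool_lock via queue_work_on().  If CPU1 is
 * already spinning on bucket_lock while holding pool_lock, neither side
 * can make progress: the AB/BA deadlock lockdep warns about.
 */
static void cpu0_path(void)
{
	spin_lock_bh(&bucket_lock);		/* as in sock_hash_free() */
	/* <Interrupt> -> spin_lock(&pool_lock) can nest right here */
	spin_unlock_bh(&bucket_lock);
}

Under these assumptions, the usual ways to break such a cycle are either to make the bucket lock IRQ-safe (take it with spin_lock_irqsave() everywhere) or to prevent the BPF map operation from running in a context that already holds an IRQ-safe lock; which remedy the sockmap code actually adopted is not something this log shows.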