syzbot


possible deadlock in __jbd2_log_wait_for_space

Status: fixed on 2023/06/08 14:41
Subsystems: ext4
Reported-by: syzbot+fa1bbda326271b8808c9@syzkaller.appspotmail.com
Fix commit: 62913ae96de7 ext4, jbd2: add an optimized bmap for the journal inode
First crash: 624d, last: 375d
Cause bisection: failed (error log, bisect log)
Discussions (2)
Title Replies (including bot) Last reply
[syzbot] possible deadlock in __jbd2_log_wait_for_space 1 (3) 2023/05/01 14:46
[syzbot] [ext4] Monthly Report 0 (1) 2023/03/24 15:59
Similar bugs (2)
Kernel Title Repro Cause bisect Fix bisect Count Last Reported Patched Status
linux-5.15 possible deadlock in __jbd2_log_wait_for_space origin:lts-only C inconclusive 1160 6d02h 413d 0/3 upstream: reported C repro on 2023/03/07 17:44
linux-6.1 possible deadlock in __jbd2_log_wait_for_space origin:lts-only C done 1021 1d05h 412d 0/3 upstream: reported C repro on 2023/03/07 18:21
Last patch testing requests (2)
Created Duration User Patch Repo Result
2023/03/25 02:04 21m hdanton@sina.com patch https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git master OK log
2022/11/19 09:44 22m hdanton@sina.com patch https://git.kernel.org/pub/scm/linux/kernel/git/next/linux-next.git 15f3bff12cf6 report log

Sample crash report:
======================================================
WARNING: possible circular locking dependency detected
6.2.0-rc1-syzkaller-00084-gc8451c141e07 #0 Not tainted
------------------------------------------------------
syz-executor945/5157 is trying to acquire lock:
ffff8880282f23f8 (&journal->j_checkpoint_mutex){+.+.}-{3:3}, at: __jbd2_log_wait_for_space+0x22d/0x790 fs/jbd2/checkpoint.c:110

but task is already holding lock:
ffff8880739bca38 (&sb->s_type->i_mutex_key#7){++++}-{3:3}, at: inode_lock include/linux/fs.h:756 [inline]
ffff8880739bca38 (&sb->s_type->i_mutex_key#7){++++}-{3:3}, at: ext4_buffered_write_iter+0xae/0x3a0 fs/ext4/file.c:279

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (&sb->s_type->i_mutex_key#7){++++}-{3:3}:
       lock_acquire+0x1a7/0x400 kernel/locking/lockdep.c:5668
       down_read+0x39/0x50 kernel/locking/rwsem.c:1509
       inode_lock_shared include/linux/fs.h:766 [inline]
       ext4_bmap+0x55/0x410 fs/ext4/inode.c:3243
       bmap+0xa1/0xd0 fs/inode.c:1798
       jbd2_journal_bmap fs/jbd2/journal.c:977 [inline]
       __jbd2_journal_erase fs/jbd2/journal.c:1789 [inline]
       jbd2_journal_flush+0x5d0/0xca0 fs/jbd2/journal.c:2492
       ext4_ioctl_checkpoint fs/ext4/ioctl.c:1082 [inline]
       __ext4_ioctl fs/ext4/ioctl.c:1590 [inline]
       ext4_ioctl+0x3288/0x5400 fs/ext4/ioctl.c:1610
       vfs_ioctl fs/ioctl.c:51 [inline]
       __do_sys_ioctl fs/ioctl.c:870 [inline]
       __se_sys_ioctl+0xfb/0x170 fs/ioctl.c:856
       do_syscall_x64 arch/x86/entry/common.c:50 [inline]
       do_syscall_64+0x2b/0x70 arch/x86/entry/common.c:80
       entry_SYSCALL_64_after_hwframe+0x63/0xcd

-> #0 (&journal->j_checkpoint_mutex){+.+.}-{3:3}:
       check_prev_add kernel/locking/lockdep.c:3097 [inline]
       check_prevs_add kernel/locking/lockdep.c:3216 [inline]
       validate_chain+0x184a/0x6470 kernel/locking/lockdep.c:3831
       __lock_acquire+0x1292/0x1f60 kernel/locking/lockdep.c:5055
       lock_acquire+0x1a7/0x400 kernel/locking/lockdep.c:5668
       __mutex_lock_common+0x1de/0x26c0 kernel/locking/mutex.c:603
       mutex_lock_io_nested+0x43/0x60 kernel/locking/mutex.c:833
       __jbd2_log_wait_for_space+0x22d/0x790 fs/jbd2/checkpoint.c:110
       add_transaction_credits+0x936/0xbf0 fs/jbd2/transaction.c:298
       start_this_handle+0x758/0x1660 fs/jbd2/transaction.c:422
       jbd2__journal_start+0x2ca/0x5b0 fs/jbd2/transaction.c:520
       __ext4_journal_start_sb+0x13b/0x1f0 fs/ext4/ext4_jbd2.c:111
       __ext4_journal_start fs/ext4/ext4_jbd2.h:326 [inline]
       ext4_dirty_inode+0x8d/0x100 fs/ext4/inode.c:6107
       __mark_inode_dirty+0x1e7/0x600 fs/fs-writeback.c:2419
       generic_update_time fs/inode.c:1859 [inline]
       inode_update_time fs/inode.c:1872 [inline]
       __file_update_time fs/inode.c:2057 [inline]
       file_modified_flags+0x69a/0x700 fs/inode.c:2130
       ext4_write_checks+0x249/0x2c0 fs/ext4/file.c:264
       ext4_buffered_write_iter+0xbc/0x3a0 fs/ext4/file.c:280
       ext4_file_write_iter+0x1d0/0x1900
       __kernel_write_iter+0x323/0x770 fs/read_write.c:517
       dump_emit_page+0xa79/0xca0 fs/coredump.c:864
       dump_user_range+0x5b/0xf0 fs/coredump.c:915
       elf_core_dump+0x3ca3/0x45d0 fs/binfmt_elf.c:2137
       do_coredump+0x180a/0x27d0 fs/coredump.c:762
       get_signal+0x1490/0x1820 kernel/signal.c:2845
       arch_do_signal_or_restart+0x8d/0x5f0 arch/x86/kernel/signal.c:306
       exit_to_user_mode_loop+0x74/0x160 kernel/entry/common.c:168
       exit_to_user_mode_prepare+0xad/0x110 kernel/entry/common.c:203
       irqentry_exit_to_user_mode+0x5/0x30 kernel/entry/common.c:309
       exc_page_fault+0xa2/0x120 arch/x86/mm/fault.c:1578
       asm_exc_page_fault+0x22/0x30 arch/x86/include/asm/idtentry.h:570

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&sb->s_type->i_mutex_key#7);
                               lock(&journal->j_checkpoint_mutex);
                               lock(&sb->s_type->i_mutex_key#7);
  lock(&journal->j_checkpoint_mutex);

 *** DEADLOCK ***

2 locks held by syz-executor945/5157:
 #0: ffff888028d4e460 (sb_writers#4){.+.+}-{0:0}, at: do_coredump+0x17e5/0x27d0 fs/coredump.c:761
 #1: ffff8880739bca38 (&sb->s_type->i_mutex_key#7){++++}-{3:3}, at: inode_lock include/linux/fs.h:756 [inline]
 #1: ffff8880739bca38 (&sb->s_type->i_mutex_key#7){++++}-{3:3}, at: ext4_buffered_write_iter+0xae/0x3a0 fs/ext4/file.c:279

stack backtrace:
CPU: 1 PID: 5157 Comm: syz-executor945 Not tainted 6.2.0-rc1-syzkaller-00084-gc8451c141e07 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/26/2022
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:88 [inline]
 dump_stack_lvl+0x1e3/0x2d0 lib/dump_stack.c:106
 check_noncircular+0x2f9/0x3b0 kernel/locking/lockdep.c:2177
 check_prev_add kernel/locking/lockdep.c:3097 [inline]
 check_prevs_add kernel/locking/lockdep.c:3216 [inline]
 validate_chain+0x184a/0x6470 kernel/locking/lockdep.c:3831
 __lock_acquire+0x1292/0x1f60 kernel/locking/lockdep.c:5055
 lock_acquire+0x1a7/0x400 kernel/locking/lockdep.c:5668
 __mutex_lock_common+0x1de/0x26c0 kernel/locking/mutex.c:603
 mutex_lock_io_nested+0x43/0x60 kernel/locking/mutex.c:833
 __jbd2_log_wait_for_space+0x22d/0x790 fs/jbd2/checkpoint.c:110
 add_transaction_credits+0x936/0xbf0 fs/jbd2/transaction.c:298
 start_this_handle+0x758/0x1660 fs/jbd2/transaction.c:422
 jbd2__journal_start+0x2ca/0x5b0 fs/jbd2/transaction.c:520
 __ext4_journal_start_sb+0x13b/0x1f0 fs/ext4/ext4_jbd2.c:111
 __ext4_journal_start fs/ext4/ext4_jbd2.h:326 [inline]
 ext4_dirty_inode+0x8d/0x100 fs/ext4/inode.c:6107
 __mark_inode_dirty+0x1e7/0x600 fs/fs-writeback.c:2419
 generic_update_time fs/inode.c:1859 [inline]
 inode_update_time fs/inode.c:1872 [inline]
 __file_update_time fs/inode.c:2057 [inline]
 file_modified_flags+0x69a/0x700 fs/inode.c:2130
 ext4_write_checks+0x249/0x2c0 fs/ext4/file.c:264
 ext4_buffered_write_iter+0xbc/0x3a0 fs/ext4/file.c:280
 ext4_file_write_iter+0x1d0/0x1900
 __kernel_write_iter+0x323/0x770 fs/read_write.c:517
 dump_emit_page+0xa79/0xca0 fs/coredump.c:864
 dump_user_range+0x5b/0xf0 fs/coredump.c:915
 elf_core_dump+0x3ca3/0x45d0 fs/binfmt_elf.c:2137
 do_coredump+0x180a/0x27d0 fs/coredump.c:762
 get_signal+0x1490/0x1820 kernel/signal.c:2845
 arch_do_signal_or_restart+0x8d/0x5f0 arch/x86/kernel/signal.c:306
 exit_to_user_mode_loop+0x74/0x160 kernel/entry/common.c:168
 exit_to_user_mode_prepare+0xad/0x110 kernel/entry/common.c:203
 irqentry_exit_to_user_mode+0x5/0x30 kernel/entry/common.c:309
 exc_page_fault+0xa2/0x120 arch/x86/mm/fault.c:1578
 asm_exc_page_fault+0x22/0x30 arch/x86/include/asm/idtentry.h:570
RIP: 0033:0x7fe326900784
Code: 01 8c 0b 00 01 5d c3 0f 1f 80 00 00 00 00 c3 0f 1f 80 00 00 00 00 e9 7b ff ff ff 66 2e 0f 1f 84 00 00 00 00 00 90 48 83 ec 08 <64> 8b 04 25 bc ff ff ff 85 c0 75 40 48 8b 46 10 64 8b 14 25 b8 ff
RSP: 002b:00007ffe337e0370 EFLAGS: 00010202
RAX: 0000000000000000 RBX: 0000000000000000 RCX: 00007fe32693f8c9
RDX: 00007ffe337e0380 RSI: 00007ffe337e04b0 RDI: 000000000000000b
RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000040b86000
R10: 00007ffe338de4d0 R11: 0000000000000246 R12: 0000000000087ece
R13: 00007ffe338de640 R14: 00007ffe338de630 R15: 00007ffe338de5fc
 </TASK>
syz-executor945 (5157) used greatest stack depth: 18360 bytes left

Crashes (34993):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2022/12/31 12:06 upstream c8451c141e07 ab32d508 .config console log report syz C [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root possible deadlock in __jbd2_log_wait_for_space
2022/12/31 09:06 upstream c8451c141e07 ab32d508 .config console log report syz C [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root possible deadlock in __jbd2_log_wait_for_space
2022/11/19 04:50 linux-next 15f3bff12cf6 5bb70014 .config strace log report syz C [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root possible deadlock in __jbd2_log_wait_for_space
2023/02/21 03:57 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci a9b06ec42c0f 4f5f5209 .config console log report syz C ci-upstream-gce-arm64 possible deadlock in __jbd2_log_wait_for_space
2023/01/24 16:52 linux-next 691781f561e9 9dfcf09c .config console log report syz [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root possible deadlock in __jbd2_log_wait_for_space
2023/02/08 10:26 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci eaed33698e35 15c3d445 .config console log report syz ci-upstream-gce-arm64 possible deadlock in __jbd2_log_wait_for_space
2022/12/04 15:00 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci e3cb714fb489 e080de16 .config console log report syz [disk image] [vmlinux] [kernel image] ci-upstream-gce-arm64 possible deadlock in __jbd2_log_wait_for_space
2023/03/06 01:59 upstream f915322fe014 f8902b57 .config console log report info ci-upstream-kasan-gce-selinux-root possible deadlock in __jbd2_log_wait_for_space
2023/03/05 21:16 upstream f915322fe014 f8902b57 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root possible deadlock in __jbd2_log_wait_for_space
2023/03/05 17:19 upstream b01fe98d34f3 f8902b57 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs possible deadlock in __jbd2_log_wait_for_space
2023/03/05 08:14 upstream b01fe98d34f3 f8902b57 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root possible deadlock in __jbd2_log_wait_for_space
2023/03/04 23:17 upstream c29214bc8916 f8902b57 .config console log report info ci-qemu-upstream possible deadlock in __jbd2_log_wait_for_space
2023/03/05 01:06 upstream c29214bc8916 f8902b57 .config console log report info ci-qemu-upstream-386 possible deadlock in __jbd2_log_wait_for_space
2023/03/10 03:24 net 67eeadf2f953 f08b59ac .config console log report info ci-upstream-net-this-kasan-gce possible deadlock in __jbd2_log_wait_for_space
2023/03/07 04:15 bpf b7abcd9c656b f8902b57 .config console log report info ci-upstream-bpf-kasan-gce possible deadlock in __jbd2_log_wait_for_space
2023/04/14 00:38 bpf-next d319f344561d 3cfcaa1b .config console log report info ci-upstream-bpf-next-kasan-gce possible deadlock in __jbd2_log_wait_for_space
2023/04/13 23:12 bpf-next d319f344561d 3cfcaa1b .config console log report info ci-upstream-bpf-next-kasan-gce possible deadlock in __jbd2_log_wait_for_space
2023/04/13 20:41 bpf-next d319f344561d 3cfcaa1b .config console log report info ci-upstream-bpf-next-kasan-gce possible deadlock in __jbd2_log_wait_for_space
2023/04/13 17:26 bpf-next d319f344561d 3cfcaa1b .config console log report info ci-upstream-bpf-next-kasan-gce possible deadlock in __jbd2_log_wait_for_space
2023/04/13 16:12 bpf-next d319f344561d 3cfcaa1b .config console log report info ci-upstream-bpf-next-kasan-gce possible deadlock in __jbd2_log_wait_for_space
2023/04/13 05:13 bpf-next 10fd5f70c397 82d5e53e .config console log report info ci-upstream-bpf-next-kasan-gce possible deadlock in __jbd2_log_wait_for_space
2023/04/13 02:56 bpf-next 10fd5f70c397 82d5e53e .config console log report info ci-upstream-bpf-next-kasan-gce possible deadlock in __jbd2_log_wait_for_space
2023/04/13 01:40 bpf-next 10fd5f70c397 82d5e53e .config console log report info ci-upstream-bpf-next-kasan-gce possible deadlock in __jbd2_log_wait_for_space
2023/04/13 00:02 bpf-next 10fd5f70c397 82d5e53e .config console log report info ci-upstream-bpf-next-kasan-gce possible deadlock in __jbd2_log_wait_for_space
2023/04/12 16:14 bpf-next 75dcef8d3609 1a1596b6 .config console log report info ci-upstream-bpf-next-kasan-gce possible deadlock in __jbd2_log_wait_for_space
2023/04/12 15:13 bpf-next 75dcef8d3609 1a1596b6 .config console log report info ci-upstream-bpf-next-kasan-gce possible deadlock in __jbd2_log_wait_for_space
2023/04/12 13:56 bpf-next 75dcef8d3609 1a1596b6 .config console log report info ci-upstream-bpf-next-kasan-gce possible deadlock in __jbd2_log_wait_for_space
2023/04/12 03:51 bpf-next eafa92152e2e 49faf98d .config console log report info ci-upstream-bpf-next-kasan-gce possible deadlock in __jbd2_log_wait_for_space
2023/04/11 22:54 bpf-next eafa92152e2e 49faf98d .config console log report info ci-upstream-bpf-next-kasan-gce possible deadlock in __jbd2_log_wait_for_space
2023/04/11 07:56 bpf-next c4d3b488a90b 71147e29 .config console log report info ci-upstream-bpf-next-kasan-gce possible deadlock in __jbd2_log_wait_for_space
2023/04/11 00:53 bpf-next c4d3b488a90b 71147e29 .config console log report info ci-upstream-bpf-next-kasan-gce possible deadlock in __jbd2_log_wait_for_space
2023/04/10 21:31 bpf-next c4d3b488a90b 71147e29 .config console log report info ci-upstream-bpf-next-kasan-gce possible deadlock in __jbd2_log_wait_for_space
2023/04/10 10:55 bpf-next 5855b0999de4 71147e29 .config console log report info ci-upstream-bpf-next-kasan-gce possible deadlock in __jbd2_log_wait_for_space
2023/04/09 12:39 bpf-next 5855b0999de4 71147e29 .config console log report info ci-upstream-bpf-next-kasan-gce possible deadlock in __jbd2_log_wait_for_space
2023/04/09 08:48 bpf-next 5855b0999de4 71147e29 .config console log report info ci-upstream-bpf-next-kasan-gce possible deadlock in __jbd2_log_wait_for_space
2023/04/09 07:19 bpf-next 5855b0999de4 71147e29 .config console log report info ci-upstream-bpf-next-kasan-gce possible deadlock in __jbd2_log_wait_for_space
2023/04/09 03:24 bpf-next 5855b0999de4 71147e29 .config console log report info ci-upstream-bpf-next-kasan-gce possible deadlock in __jbd2_log_wait_for_space
2023/04/09 02:02 bpf-next 5855b0999de4 71147e29 .config console log report info ci-upstream-bpf-next-kasan-gce possible deadlock in __jbd2_log_wait_for_space
2023/04/08 23:20 bpf-next 5855b0999de4 71147e29 .config console log report info ci-upstream-bpf-next-kasan-gce possible deadlock in __jbd2_log_wait_for_space
2023/04/08 20:35 bpf-next 5855b0999de4 71147e29 .config console log report info ci-upstream-bpf-next-kasan-gce possible deadlock in __jbd2_log_wait_for_space
2023/04/08 10:02 bpf-next 3ebf5212bf04 71147e29 .config console log report info ci-upstream-bpf-next-kasan-gce possible deadlock in __jbd2_log_wait_for_space
2023/04/08 06:19 bpf-next 3ebf5212bf04 71147e29 .config console log report info ci-upstream-bpf-next-kasan-gce possible deadlock in __jbd2_log_wait_for_space
2023/04/07 15:44 bpf-next f3f213497797 f7ba566d .config console log report info ci-upstream-bpf-next-kasan-gce possible deadlock in __jbd2_log_wait_for_space
2023/04/07 02:03 bpf-next a5f1da6601a0 00ce4c67 .config console log report info ci-upstream-bpf-next-kasan-gce possible deadlock in __jbd2_log_wait_for_space
2023/04/06 14:55 bpf-next 5af607a861d4 08707520 .config console log report info ci-upstream-bpf-next-kasan-gce possible deadlock in __jbd2_log_wait_for_space
2023/04/06 12:57 bpf-next 5af607a861d4 08707520 .config console log report info ci-upstream-bpf-next-kasan-gce possible deadlock in __jbd2_log_wait_for_space
2023/04/06 00:55 bpf-next d099f594ad56 8b834965 .config console log report info ci-upstream-bpf-next-kasan-gce possible deadlock in __jbd2_log_wait_for_space
2023/04/05 21:10 bpf-next d099f594ad56 8b834965 .config console log report info ci-upstream-bpf-next-kasan-gce possible deadlock in __jbd2_log_wait_for_space
2023/04/05 19:19 bpf-next d099f594ad56 8b834965 .config console log report info ci-upstream-bpf-next-kasan-gce possible deadlock in __jbd2_log_wait_for_space
2023/04/05 11:13 bpf-next 8fc59c26d212 831373d3 .config console log report info ci-upstream-bpf-next-kasan-gce possible deadlock in __jbd2_log_wait_for_space
2023/04/05 05:30 bpf-next 8fc59c26d212 831373d3 .config console log report info ci-upstream-bpf-next-kasan-gce possible deadlock in __jbd2_log_wait_for_space
2023/04/04 18:23 bpf-next f6a6a5a97628 928dd177 .config console log report info ci-upstream-bpf-next-kasan-gce possible deadlock in __jbd2_log_wait_for_space
2023/04/04 15:14 bpf-next f6a6a5a97628 928dd177 .config console log report info ci-upstream-bpf-next-kasan-gce possible deadlock in __jbd2_log_wait_for_space
2023/04/04 05:01 bpf-next 16b7c970cc81 7db618d0 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-bpf-next-kasan-gce possible deadlock in __jbd2_log_wait_for_space
2023/04/03 14:07 bpf-next 16b7c970cc81 41147e3e .config console log report info ci-upstream-bpf-next-kasan-gce possible deadlock in __jbd2_log_wait_for_space
2023/04/02 16:03 bpf-next 5b85575ad428 f325deb0 .config console log report info ci-upstream-bpf-next-kasan-gce possible deadlock in __jbd2_log_wait_for_space
2023/04/02 13:32 bpf-next 5b85575ad428 f325deb0 .config console log report info ci-upstream-bpf-next-kasan-gce possible deadlock in __jbd2_log_wait_for_space
2023/04/02 02:50 bpf-next a033907e7b34 f325deb0 .config console log report info ci-upstream-bpf-next-kasan-gce possible deadlock in __jbd2_log_wait_for_space
2023/04/01 14:15 bpf-next 9af0f555ae4a f325deb0 .config console log report info ci-upstream-bpf-next-kasan-gce possible deadlock in __jbd2_log_wait_for_space
2023/03/10 07:26 net-next db47fa2e4cbf 5205ef30 .config console log report info ci-upstream-net-kasan-gce possible deadlock in __jbd2_log_wait_for_space
2023/03/06 04:03 linux-next dc837c1a5137 f8902b57 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root possible deadlock in __jbd2_log_wait_for_space
2022/08/08 06:20 linux-next ca688bff68bc 88e3a122 .config console log report info ci-upstream-linux-next-kasan-gce-root possible deadlock in __jbd2_log_wait_for_space
2022/08/08 04:29 linux-next ca688bff68bc 88e3a122 .config console log report info ci-upstream-linux-next-kasan-gce-root possible deadlock in __jbd2_log_wait_for_space
2023/03/22 18:05 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci fe15c26ee26e d846e076 .config console log report info ci-upstream-gce-arm64 possible deadlock in __jbd2_log_wait_for_space
* Struck through repros no longer work on HEAD.