syzbot

possible deadlock in diFree (3)

Status: upstream: reported on 2025/12/10 21:46
Subsystems: jfs
Reported-by: syzbot+1bcae2d9e9040bb283cc@syzkaller.appspotmail.com
First crash: 102d, last: 36d
Discussions (1)
Title:                   [syzbot] [jfs?] possible deadlock in diFree (3)
Replies (including bot): 0 (1)
Last reply:              2025/12/10 21:46
Similar bugs (3)
Kernel    | Title                                 | Rank | Repro | Count | Last | Reported | Patched | Status
upstream  | possible deadlock in diFree (2) [jfs] | 4    | C     | 40    | 357d | 482d     | 28/29   | fixed on 2025/06/10 16:19
upstream  | possible deadlock in diFree [jfs]     | 4    |       | 91    | 589d | 695d     | 0/29    | auto-obsoleted due to no activity on 2024/10/15 15:16
linux-6.1 | possible deadlock in diFree           | 4    |       | 1     | 358d | 358d     | 0/3     | auto-obsoleted due to no activity on 2025/07/03 19:22

Sample crash report:
======================================================
WARNING: possible circular locking dependency detected
syzkaller #0 Not tainted
------------------------------------------------------
kswapd0/77 is trying to acquire lock:
ffff888042840920 (&(imap->im_aglock[index])){+.+.}-{4:4}, at: diFree+0x2e9/0x2ca0 fs/jfs/jfs_imap.c:889

but task is already holding lock:
ffffffff8e87c3a0 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat mm/vmscan.c:6975 [inline]
ffffffff8e87c3a0 (fs_reclaim){+.+.}-{0:0}, at: kswapd+0x90d/0x2800 mm/vmscan.c:7354

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #3 (fs_reclaim){+.+.}-{0:0}:
       __fs_reclaim_acquire mm/page_alloc.c:4331 [inline]
       fs_reclaim_acquire+0x71/0x100 mm/page_alloc.c:4345
       might_alloc include/linux/sched/mm.h:317 [inline]
       slab_pre_alloc_hook mm/slub.c:4904 [inline]
       slab_alloc_node mm/slub.c:5239 [inline]
       __do_kmalloc_node mm/slub.c:5656 [inline]
       __kmalloc_noprof+0x9c/0x7e0 mm/slub.c:5669
       kmalloc_noprof include/linux/slab.h:961 [inline]
       posix_acl_to_xattr+0x67/0x420 fs/posix_acl.c:842
       __jfs_set_acl fs/jfs/acl.c:79 [inline]
       jfs_set_acl+0x293/0x460 fs/jfs/acl.c:110
       set_posix_acl fs/posix_acl.c:955 [inline]
       vfs_set_acl+0x8ff/0xc00 fs/posix_acl.c:1134
       do_set_acl+0xf5/0x190 fs/posix_acl.c:1279
       do_setxattr fs/xattr.c:633 [inline]
       filename_setxattr+0x305/0x630 fs/xattr.c:664
       path_setxattrat+0x3eb/0x440 fs/xattr.c:708
       __do_sys_setxattr fs/xattr.c:742 [inline]
       __se_sys_setxattr fs/xattr.c:738 [inline]
       __x64_sys_setxattr+0xbc/0xe0 fs/xattr.c:738
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0xe2/0xf80 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #2 (&jfs_ip->commit_mutex){+.+.}-{4:4}:
       __mutex_lock_common kernel/locking/mutex.c:614 [inline]
       __mutex_lock+0x19f/0x1300 kernel/locking/mutex.c:776
       diNewIAG fs/jfs/jfs_imap.c:2522 [inline]
       diAllocExt fs/jfs/jfs_imap.c:1905 [inline]
       diAllocAG+0x145b/0x1db0 fs/jfs/jfs_imap.c:1669
       diAlloc+0x1d5/0x1680 fs/jfs/jfs_imap.c:1590
       ialloc+0x8c/0x8f0 fs/jfs/jfs_inode.c:56
       jfs_mkdir+0x1e1/0xb00 fs/jfs/namei.c:226
       vfs_mkdir+0x413/0x630 fs/namei.c:5233
       filename_mkdirat+0x285/0x510 fs/namei.c:5266
       __do_sys_mkdirat fs/namei.c:5287 [inline]
       __se_sys_mkdirat+0x35/0x150 fs/namei.c:5284
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0xe2/0xf80 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #1 (&jfs_ip->rdwrlock/1){++++}-{4:4}:
       down_read_nested+0x49/0x2e0 kernel/locking/rwsem.c:1662
       diAlloc+0x795/0x1680 fs/jfs/jfs_imap.c:1388
       ialloc+0x8c/0x8f0 fs/jfs/jfs_inode.c:56
       jfs_create+0x1da/0xb10 fs/jfs/namei.c:93
       lookup_open fs/namei.c:4483 [inline]
       open_last_lookups fs/namei.c:4583 [inline]
       path_openat+0x1395/0x3860 fs/namei.c:4827
       do_file_open+0x23e/0x4a0 fs/namei.c:4859
       do_sys_openat2+0x113/0x200 fs/open.c:1366
       do_sys_open fs/open.c:1372 [inline]
       __do_sys_openat fs/open.c:1388 [inline]
       __se_sys_openat fs/open.c:1383 [inline]
       __x64_sys_openat+0x138/0x170 fs/open.c:1383
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0xe2/0xf80 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #0 (&(imap->im_aglock[index])){+.+.}-{4:4}:
       check_prev_add kernel/locking/lockdep.c:3165 [inline]
       check_prevs_add kernel/locking/lockdep.c:3284 [inline]
       validate_chain kernel/locking/lockdep.c:3908 [inline]
       __lock_acquire+0x15a5/0x2cf0 kernel/locking/lockdep.c:5237
       lock_acquire+0x106/0x330 kernel/locking/lockdep.c:5868
       __mutex_lock_common kernel/locking/mutex.c:614 [inline]
       __mutex_lock+0x19f/0x1300 kernel/locking/mutex.c:776
       diFree+0x2e9/0x2ca0 fs/jfs/jfs_imap.c:889
       jfs_evict_inode+0x331/0x440 fs/jfs/inode.c:162
       evict+0x61e/0xb10 fs/inode.c:837
       __dentry_kill+0x1a2/0x5e0 fs/dcache.c:670
       shrink_kill+0xa9/0x2c0 fs/dcache.c:1147
       shrink_dentry_list+0x2e0/0x5e0 fs/dcache.c:1174
       prune_dcache_sb+0x119/0x180 fs/dcache.c:1256
       super_cache_scan+0x369/0x4b0 fs/super.c:223
       do_shrink_slab+0x6df/0x10d0 mm/shrinker.c:437
       shrink_slab_memcg mm/shrinker.c:550 [inline]
       shrink_slab+0x830/0x1150 mm/shrinker.c:628
       shrink_one+0x2d9/0x710 mm/vmscan.c:4921
       shrink_many mm/vmscan.c:4982 [inline]
       lru_gen_shrink_node mm/vmscan.c:5060 [inline]
       shrink_node+0x2f8b/0x35f0 mm/vmscan.c:6047
       kswapd_shrink_node mm/vmscan.c:6901 [inline]
       balance_pgdat mm/vmscan.c:7084 [inline]
       kswapd+0x144c/0x2800 mm/vmscan.c:7354
       kthread+0x388/0x470 kernel/kthread.c:467
       ret_from_fork+0x51b/0xa40 arch/x86/kernel/process.c:158
       ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246

other info that might help us debug this:

Chain exists of:
  &(imap->im_aglock[index]) --> &jfs_ip->commit_mutex --> fs_reclaim

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(fs_reclaim);
                               lock(&jfs_ip->commit_mutex);
                               lock(fs_reclaim);
  lock(&(imap->im_aglock[index]));

 *** DEADLOCK ***

2 locks held by kswapd0/77:
 #0: ffffffff8e87c3a0 (fs_reclaim){+.+.}-{0:0}, at: balance_pgdat mm/vmscan.c:6975 [inline]
 #0: ffffffff8e87c3a0 (fs_reclaim){+.+.}-{0:0}, at: kswapd+0x90d/0x2800 mm/vmscan.c:7354
 #1: ffff88804451e0e0 (&type->s_umount_key#53){.+.+}-{4:4}, at: super_trylock_shared fs/super.c:565 [inline]
 #1: ffff88804451e0e0 (&type->s_umount_key#53){.+.+}-{4:4}, at: super_cache_scan+0x91/0x4b0 fs/super.c:198

stack backtrace:
CPU: 0 UID: 0 PID: 77 Comm: kswapd0 Not tainted syzkaller #0 PREEMPT(full) 
Hardware name: QEMU Standard PC (Q35 + ICH9, 2009), BIOS 1.16.3-debian-1.16.3-2 04/01/2014
Call Trace:
 <TASK>
 dump_stack_lvl+0xe8/0x150 lib/dump_stack.c:120
 print_circular_bug+0x2e1/0x300 kernel/locking/lockdep.c:2043
 check_noncircular+0x12e/0x150 kernel/locking/lockdep.c:2175
 check_prev_add kernel/locking/lockdep.c:3165 [inline]
 check_prevs_add kernel/locking/lockdep.c:3284 [inline]
 validate_chain kernel/locking/lockdep.c:3908 [inline]
 __lock_acquire+0x15a5/0x2cf0 kernel/locking/lockdep.c:5237
 lock_acquire+0x106/0x330 kernel/locking/lockdep.c:5868
 __mutex_lock_common kernel/locking/mutex.c:614 [inline]
 __mutex_lock+0x19f/0x1300 kernel/locking/mutex.c:776
 diFree+0x2e9/0x2ca0 fs/jfs/jfs_imap.c:889
 jfs_evict_inode+0x331/0x440 fs/jfs/inode.c:162
 evict+0x61e/0xb10 fs/inode.c:837
 __dentry_kill+0x1a2/0x5e0 fs/dcache.c:670
 shrink_kill+0xa9/0x2c0 fs/dcache.c:1147
 shrink_dentry_list+0x2e0/0x5e0 fs/dcache.c:1174
 prune_dcache_sb+0x119/0x180 fs/dcache.c:1256
 super_cache_scan+0x369/0x4b0 fs/super.c:223
 do_shrink_slab+0x6df/0x10d0 mm/shrinker.c:437
 shrink_slab_memcg mm/shrinker.c:550 [inline]
 shrink_slab+0x830/0x1150 mm/shrinker.c:628
 shrink_one+0x2d9/0x710 mm/vmscan.c:4921
 shrink_many mm/vmscan.c:4982 [inline]
 lru_gen_shrink_node mm/vmscan.c:5060 [inline]
 shrink_node+0x2f8b/0x35f0 mm/vmscan.c:6047
 kswapd_shrink_node mm/vmscan.c:6901 [inline]
 balance_pgdat mm/vmscan.c:7084 [inline]
 kswapd+0x144c/0x2800 mm/vmscan.c:7354
 kthread+0x388/0x470 kernel/kthread.c:467
 ret_from_fork+0x51b/0xa40 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
 </TASK>

Crashes (3):
Time             | Kernel   | Commit       | Syzkaller | Manager                   | Title
2026/02/10 18:08 | upstream | 72c395024dac | a076df6f  | ci-snapshot-upstream-root | possible deadlock in diFree
2026/02/09 01:08 | upstream | e98f34af6116 | 4c131dc4  | ci-snapshot-upstream-root | possible deadlock in diFree
2025/12/06 21:41 | upstream | 416f99c3b16f | d1b870e1  | ci-snapshot-upstream-root | possible deadlock in diFree
Each crash has .config, console log, report, disk image (non-bootable), vmlinux, and kernel image assets; no syz or C repro is available for this bug.