syzbot |
Title | Replies (including bot) | Last reply |
---|---|---|
possible deadlock in proc_pid_attr_write (2) | 0 (1) | 2020/02/19 17:37 |
Kernel | Title | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status |
---|---|---|---|---|---|---|---|---|---|
android-414 | possible deadlock in proc_pid_attr_write | C | | | 203 | 2249d | 2235d | 0/1 | public: reported C repro on 2019/04/11 00:00 |
linux-4.14 | possible deadlock in proc_pid_attr_write | | | | 3 | 1946d | 2035d | 0/1 | auto-closed as invalid on 2020/05/23 15:46 |
linux-4.19 | possible deadlock in proc_pid_attr_write | | | | 1 | 1709d | 1709d | 0/1 | auto-closed as invalid on 2021/01/16 05:53 |
upstream | possible deadlock in proc_pid_attr_write fs | C | | | 281 | 2248d | 2723d | 0/28 | closed as dup on 2017/12/12 22:00 |
linux-4.14 | possible deadlock in proc_pid_attr_write (2) | | | | 1 | 1492d | 1492d | 0/1 | auto-closed as invalid on 2021/08/21 09:25 |
linux-4.19 | possible deadlock in proc_pid_attr_write (2) | | | | 1 | 1336d | 1336d | 0/1 | auto-closed as invalid on 2022/01/23 20:06 |
======================================================
WARNING: possible circular locking dependency detected
5.6.0-rc1-next-20200214-syzkaller #0 Not tainted
------------------------------------------------------
syz-executor.2/3299 is trying to acquire lock:
ffff88804c4e5150 (&sig->cred_guard_mutex){+.+.}, at: proc_pid_attr_write+0x268/0x6b0 fs/proc/base.c:2683

but task is already holding lock:
ffff888099519c60 (&pipe->mutex/1){+.+.}, at: pipe_lock_nested fs/pipe.c:66 [inline]
ffff888099519c60 (&pipe->mutex/1){+.+.}, at: pipe_lock+0x65/0x80 fs/pipe.c:74

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #3 (&pipe->mutex/1){+.+.}:
       __mutex_lock_common kernel/locking/mutex.c:956 [inline]
       __mutex_lock+0x156/0x13c0 kernel/locking/mutex.c:1103
       mutex_lock_nested+0x16/0x20 kernel/locking/mutex.c:1118
       pipe_lock_nested fs/pipe.c:66 [inline]
       pipe_lock+0x65/0x80 fs/pipe.c:74
       iter_file_splice_write+0x18b/0xc10 fs/splice.c:709
       do_splice_from fs/splice.c:863 [inline]
       do_splice+0xbae/0x1690 fs/splice.c:1170
       __do_sys_splice fs/splice.c:1447 [inline]
       __se_sys_splice fs/splice.c:1427 [inline]
       __x64_sys_splice+0x2c6/0x330 fs/splice.c:1427
       do_syscall_64+0xfa/0x790 arch/x86/entry/common.c:294
       entry_SYSCALL_64_after_hwframe+0x49/0xbe

-> #2 (sb_writers#3){.+.+}:
       percpu_down_read include/linux/percpu-rwsem.h:51 [inline]
       __sb_start_write+0x255/0x4a0 fs/super.c:1674
       sb_start_write include/linux/fs.h:1649 [inline]
       mnt_want_write+0x3f/0xc0 fs/namespace.c:354
       ovl_want_write+0x76/0xa0 fs/overlayfs/util.c:21
       ovl_setattr+0xdd/0x930 fs/overlayfs/inode.c:27
       notify_change+0xb6d/0x1060 fs/attr.c:336
       chmod_common+0x217/0x460 fs/open.c:561
       do_fchmodat+0xbe/0x150 fs/open.c:599
       __do_sys_chmod fs/open.c:617 [inline]
       __se_sys_chmod fs/open.c:615 [inline]
       __x64_sys_chmod+0x5c/0x80 fs/open.c:615
       do_syscall_64+0xfa/0x790 arch/x86/entry/common.c:294
       entry_SYSCALL_64_after_hwframe+0x49/0xbe

-> #1 (&ovl_i_mutex_dir_key[depth]){++++}:
       down_read+0x95/0x440 kernel/locking/rwsem.c:1492
       inode_lock_shared include/linux/fs.h:801 [inline]
       do_last fs/namei.c:3400 [inline]
       path_openat+0x1c4f/0x33f0 fs/namei.c:3607
       do_filp_open+0x192/0x260 fs/namei.c:3637
       do_open_execat+0x13b/0x6d0 fs/exec.c:860
       __do_execve_file.isra.0+0x16d5/0x2270 fs/exec.c:1791
       do_execveat_common fs/exec.c:1897 [inline]
       do_execve fs/exec.c:1914 [inline]
       __do_sys_execve fs/exec.c:1990 [inline]
       __se_sys_execve fs/exec.c:1985 [inline]
       __x64_sys_execve+0x8f/0xc0 fs/exec.c:1985
       do_syscall_64+0xfa/0x790 arch/x86/entry/common.c:294
       entry_SYSCALL_64_after_hwframe+0x49/0xbe

-> #0 (&sig->cred_guard_mutex){+.+.}:
       check_prev_add kernel/locking/lockdep.c:2481 [inline]
       check_prevs_add kernel/locking/lockdep.c:2586 [inline]
       validate_chain kernel/locking/lockdep.c:3203 [inline]
       __lock_acquire+0x29cd/0x6320 kernel/locking/lockdep.c:4190
       lock_acquire+0x190/0x410 kernel/locking/lockdep.c:4720
       __mutex_lock_common kernel/locking/mutex.c:956 [inline]
       __mutex_lock+0x156/0x13c0 kernel/locking/mutex.c:1103
       mutex_lock_interruptible_nested+0x16/0x20 kernel/locking/mutex.c:1140
       proc_pid_attr_write+0x268/0x6b0 fs/proc/base.c:2683
       __vfs_write+0x8a/0x110 fs/read_write.c:494
       __kernel_write+0x11b/0x3b0 fs/read_write.c:515
       write_pipe_buf+0x15d/0x1f0 fs/splice.c:809
       splice_from_pipe_feed fs/splice.c:512 [inline]
       __splice_from_pipe+0x3ee/0x7c0 fs/splice.c:636
       splice_from_pipe+0x108/0x170 fs/splice.c:671
       default_file_splice_write+0x3c/0x90 fs/splice.c:821
       do_splice_from fs/splice.c:863 [inline]
       do_splice+0xbae/0x1690 fs/splice.c:1170
       __do_sys_splice fs/splice.c:1447 [inline]
       __se_sys_splice fs/splice.c:1427 [inline]
       __x64_sys_splice+0x2c6/0x330 fs/splice.c:1427
       do_syscall_64+0xfa/0x790 arch/x86/entry/common.c:294
       entry_SYSCALL_64_after_hwframe+0x49/0xbe

other info that might help us debug this:

Chain exists of:
  &sig->cred_guard_mutex --> sb_writers#3 --> &pipe->mutex/1

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&pipe->mutex/1);
                               lock(sb_writers#3);
                               lock(&pipe->mutex/1);
  lock(&sig->cred_guard_mutex);

 *** DEADLOCK ***

2 locks held by syz-executor.2/3299:
 #0: ffff8880a9996418 (sb_writers#6){.+.+}, at: file_start_write include/linux/fs.h:2903 [inline]
 #0: ffff8880a9996418 (sb_writers#6){.+.+}, at: do_splice+0xf52/0x1690 fs/splice.c:1169
 #1: ffff888099519c60 (&pipe->mutex/1){+.+.}, at: pipe_lock_nested fs/pipe.c:66 [inline]
 #1: ffff888099519c60 (&pipe->mutex/1){+.+.}, at: pipe_lock+0x65/0x80 fs/pipe.c:74

stack backtrace:
CPU: 1 PID: 3299 Comm: syz-executor.2 Not tainted 5.6.0-rc1-next-20200214-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0x197/0x210 lib/dump_stack.c:118
 print_circular_bug.isra.0.cold+0x163/0x172 kernel/locking/lockdep.c:1688
 check_noncircular+0x32e/0x3e0 kernel/locking/lockdep.c:1812
 check_prev_add kernel/locking/lockdep.c:2481 [inline]
 check_prevs_add kernel/locking/lockdep.c:2586 [inline]
 validate_chain kernel/locking/lockdep.c:3203 [inline]
 __lock_acquire+0x29cd/0x6320 kernel/locking/lockdep.c:4190
 lock_acquire+0x190/0x410 kernel/locking/lockdep.c:4720
 __mutex_lock_common kernel/locking/mutex.c:956 [inline]
 __mutex_lock+0x156/0x13c0 kernel/locking/mutex.c:1103
 mutex_lock_interruptible_nested+0x16/0x20 kernel/locking/mutex.c:1140
 proc_pid_attr_write+0x268/0x6b0 fs/proc/base.c:2683
 __vfs_write+0x8a/0x110 fs/read_write.c:494
 __kernel_write+0x11b/0x3b0 fs/read_write.c:515
 write_pipe_buf+0x15d/0x1f0 fs/splice.c:809
 splice_from_pipe_feed fs/splice.c:512 [inline]
 __splice_from_pipe+0x3ee/0x7c0 fs/splice.c:636
 splice_from_pipe+0x108/0x170 fs/splice.c:671
 default_file_splice_write+0x3c/0x90 fs/splice.c:821
 do_splice_from fs/splice.c:863 [inline]
 do_splice+0xbae/0x1690 fs/splice.c:1170
 __do_sys_splice fs/splice.c:1447 [inline]
 __se_sys_splice fs/splice.c:1427 [inline]
 __x64_sys_splice+0x2c6/0x330 fs/splice.c:1427
 do_syscall_64+0xfa/0x790 arch/x86/entry/common.c:294
 entry_SYSCALL_64_after_hwframe+0x49/0xbe
RIP: 0033:0x45c6c9
Code: ad b6 fb ff c3 66 2e 0f 1f 84 00 00 00 00 00 66 90 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 0f 83 7b b6 fb ff c3 66 2e 0f 1f 84 00 00 00 00
RSP: 002b:00007ffa3925ac78 EFLAGS: 00000246 ORIG_RAX: 0000000000000113
RAX: ffffffffffffffda RBX: 00007ffa3925b6d4 RCX: 000000000045c6c9
RDX: 0000000000000005 RSI: 0000000000000000 RDI: 0000000000000004
RBP: 000000000076bf20 R08: 0000000100000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00000000ffffffff
R13: 0000000000000b6e R14: 00000000004ce22b R15: 000000000076bf2c
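For orientation, below is a minimal user-space sketch of the splice path shown in the "-> #0" trace above: it splices pipe data into /proc/self/attr/current, so proc_pid_attr_write() takes cred_guard_mutex while do_splice() already holds sb_writers and pipe->mutex. This is not the syzbot C reproducer; the attribute file and the label string are illustrative assumptions, and on its own it only exercises one edge of the reported cycle, not the execve/chmod-on-overlayfs chains (#1/#2) needed to actually deadlock.

```c
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	int pipefd[2], attr;
	/* Assumed example value; any LSM attribute string would do here. */
	const char label[] = "u:r:untrusted_app:s0";

	if (pipe(pipefd)) {
		perror("pipe");
		return 1;
	}

	/* Writes to /proc/<pid>/attr/* land in proc_pid_attr_write(). */
	attr = open("/proc/self/attr/current", O_WRONLY);
	if (attr < 0) {
		perror("open");
		return 1;
	}

	/* Stage some bytes in the pipe so splice() has data to move. */
	if (write(pipefd[1], label, sizeof(label)) < 0) {
		perror("write");
		return 1;
	}

	/*
	 * The splice path takes pipe->mutex, then the default splice-write
	 * fallback calls proc_pid_attr_write(), which takes cred_guard_mutex:
	 * the pipe->mutex -> cred_guard_mutex ordering in the "-> #0" trace.
	 * The reverse ordering comes from the execve/chmod-on-overlayfs
	 * chains (#1/#2), which this sketch does not drive.
	 */
	if (splice(pipefd[0], NULL, attr, NULL, sizeof(label), 0) < 0)
		perror("splice");

	close(attr);
	close(pipefd[0]);
	close(pipefd[1]);
	return 0;
}
```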
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title |
---|---|---|---|---|---|---|---|---|---|---|---|---|
2020/02/15 17:31 | linux-next | 9f01828e9e16 | 5d7b90f1 | .config | console log | report | | | | | ci-upstream-linux-next-kasan-gce-root | |