bisecting cause commit starting from bef7b2a7be28638770972ab2709adf11d601c11a
building syzkaller on 5ed396e666c7826bed46f06c4db1409376691fed
testing commit bef7b2a7be28638770972ab2709adf11d601c11a with gcc (GCC) 8.1.0
kernel signature: 9a822e69550dc806a97eaf9af7653e4d7ddc82be1f5805bb2393ee2854ee2e2b
all runs: crashed: possible deadlock in send_sigio
testing release v5.6
testing commit 7111951b8d4973bda27ff663f2cf18b663d15b48 with gcc (GCC) 8.1.0
kernel signature: 0c3030948c1286680193ba06ba0efae840b760096038cd4389d9ccbf232863bd
all runs: OK
# git bisect start bef7b2a7be28638770972ab2709adf11d601c11a 7111951b8d4973bda27ff663f2cf18b663d15b48
Bisecting: 3821 revisions left to test after this (roughly 12 steps)
[29d9f30d4ce6c7a38745a54a8cddface10013490] Merge git://git.kernel.org/pub/scm/linux/kernel/git/netdev/net-next
testing commit 29d9f30d4ce6c7a38745a54a8cddface10013490 with gcc (GCC) 8.1.0
kernel signature: 5289f2c8c2d50617d6c427b07efea6facff1a6ee3c668b4c73f165a1fc77450b
all runs: OK
# git bisect good 29d9f30d4ce6c7a38745a54a8cddface10013490
Bisecting: 1948 revisions left to test after this (roughly 11 steps)
[50a5de895dbe5df947b3a695777db5b2c313e065] Merge tag 'for-linus-hmm' of git://git.kernel.org/pub/scm/linux/kernel/git/rdma/rdma
testing commit 50a5de895dbe5df947b3a695777db5b2c313e065 with gcc (GCC) 8.1.0
kernel signature: d308c5c900ec563631d30f5337eb1565a97559fee5cd2ee69ee206c4406eb47b
all runs: OK
# git bisect good 50a5de895dbe5df947b3a695777db5b2c313e065
Bisecting: 1022 revisions left to test after this (roughly 10 steps)
[bc3b3f4bfbded031a11c4284106adddbfacd05bb] Merge tag 'pinctrl-v5.7-1' of git://git.kernel.org/pub/scm/linux/kernel/git/linusw/linux-pinctrl
testing commit bc3b3f4bfbded031a11c4284106adddbfacd05bb with gcc (GCC) 8.1.0
kernel signature: e8aa0e156ca117813e6ccd9d99b3eb4986e45f160e9092bde31bc64adc48527c
all runs: crashed: possible deadlock in send_sigio
# git bisect bad bc3b3f4bfbded031a11c4284106adddbfacd05bb
Bisecting: 406 revisions left to test after this (roughly 9 steps)
[6cad420cc695867b4ca710bac21fde21a4102e4b] Merge branch 'akpm' (patches from Andrew)
testing commit 6cad420cc695867b4ca710bac21fde21a4102e4b with gcc (GCC) 8.1.0
kernel signature: f62de7a7cc89e2b945d2aa83b8bd8f3045d9f6c879a2dcf0f6a532738de0a156
all runs: crashed: possible deadlock in send_sigio
# git bisect bad 6cad420cc695867b4ca710bac21fde21a4102e4b
Bisecting: 264 revisions left to test after this (roughly 8 steps)
[7db83c070bd29e73c8bb42d4b48c976be76f1dbe] Merge tag 'vfs-5.7-merge-1' of git://git.kernel.org/pub/scm/fs/xfs/xfs-linux
testing commit 7db83c070bd29e73c8bb42d4b48c976be76f1dbe with gcc (GCC) 8.1.0
kernel signature: 0aca0416598ae2d8b7f02185a9e52ab7162349fcc7ea25c2ae565e5af38b79d3
all runs: crashed: possible deadlock in send_sigio
# git bisect bad 7db83c070bd29e73c8bb42d4b48c976be76f1dbe
Bisecting: 126 revisions left to test after this (roughly 7 steps)
[fd72926c332eaa28845b1f655b24006158ec5207] RDMA/hns: Adjust the qp status value sequence of the hardware
testing commit fd72926c332eaa28845b1f655b24006158ec5207 with gcc (GCC) 8.1.0
kernel signature: 7f45c4e571c60946724ca9e6e355b42ba74eeecff870160f066d3e1b0cbd0707
all runs: OK
# git bisect good fd72926c332eaa28845b1f655b24006158ec5207
Bisecting: 63 revisions left to test after this (roughly 6 steps)
[c5971b8c6354a95c9ee7eb722928af5000bac247] take post-lookup part of do_last() out of loop
testing commit c5971b8c6354a95c9ee7eb722928af5000bac247 with gcc (GCC) 8.1.0
kernel signature: 82bef3aa5889e3e2a544e58b00d056137aa7ef35b3f36d4369cd28633fa0c0f3
all runs: OK
# git bisect good c5971b8c6354a95c9ee7eb722928af5000bac247
Bisecting: 35 revisions left to test after this (roughly 5 steps)
[d1e7fd6462ca9fc76650fbe6ca800e35b24267da] signal: Extend exec_id to 64bits
testing commit d1e7fd6462ca9fc76650fbe6ca800e35b24267da with gcc (GCC) 8.1.0
kernel signature: bcb38508f74fd80e6ae7f079edab103c3dc078c32eb7bdb412c25f8a9f7bbdfb
run #0: crashed: possible deadlock in send_sigio
run #1: crashed: possible deadlock in send_sigio
run #2: crashed: possible deadlock in send_sigio
run #3: crashed: possible deadlock in send_sigio
run #4: crashed: possible deadlock in send_sigio
run #5: crashed: possible deadlock in send_sigio
run #6: crashed: possible deadlock in send_sigio
run #7: crashed: possible deadlock in send_sigio
run #8: OK
run #9: boot failed: can't ssh into the instance
# git bisect bad d1e7fd6462ca9fc76650fbe6ca800e35b24267da
Bisecting: 13 revisions left to test after this (roughly 4 steps)
[021691559245498dfa15454c9fc4351f367d0b7f] exec: Factor unshare_sighand out of de_thread and call it separately
testing commit 021691559245498dfa15454c9fc4351f367d0b7f with gcc (GCC) 8.1.0
kernel signature: f6acaadf985beb17bb3e7cf18d5abb79ed803e98d69dc726e11cfd96161f63e0
all runs: crashed: possible deadlock in send_sigio
# git bisect bad 021691559245498dfa15454c9fc4351f367d0b7f
Bisecting: 6 revisions left to test after this (roughly 3 steps)
[a13ae6971599dd01a5fa8da9ee1bd5bb3efa01b3] proc: Dentry flushing without proc_mnt
testing commit a13ae6971599dd01a5fa8da9ee1bd5bb3efa01b3 with gcc (GCC) 8.1.0
kernel signature: 8f6eccbd1936717147f093ef77b375e46f6b48b88b1c57cd626cab8f2d7df2aa
all runs: crashed: possible deadlock in send_sigio
# git bisect bad a13ae6971599dd01a5fa8da9ee1bd5bb3efa01b3
Bisecting: 3 revisions left to test after this (roughly 2 steps)
[080f6276fccfb3923691e57c1b44a627eabd1a25] proc: In proc_prune_siblings_dcache cache an aquired super block
testing commit 080f6276fccfb3923691e57c1b44a627eabd1a25 with gcc (GCC) 8.1.0
kernel signature: 55f532d8fab6156e5b829b279af6da82d809925849b317b5fbbbe4c8cdbb1d1d
all runs: OK
# git bisect good 080f6276fccfb3923691e57c1b44a627eabd1a25
Bisecting: 1 revision left to test after this (roughly 1 step)
[71448011ea2a1cd36d8f5cbdab0ed716c454d565] proc: Clear the pieces of proc_inode that proc_evict_inode cares about
testing commit 71448011ea2a1cd36d8f5cbdab0ed716c454d565 with gcc (GCC) 8.1.0
kernel signature: d99be6b8d7764cf7127acc45edbbecc23d464121d89559d57973abd6897a59da
all runs: OK
# git bisect good 71448011ea2a1cd36d8f5cbdab0ed716c454d565
Bisecting: 0 revisions left to test after this (roughly 0 steps)
[7bc3e6e55acf065500a24621f3b313e7e5998acf] proc: Use a list of inodes to flush from proc
testing commit 7bc3e6e55acf065500a24621f3b313e7e5998acf with gcc (GCC) 8.1.0
kernel signature: dc556607631cda35eebfb432a0cae1d5a3786df9abc8db738859d6595ce6e951
all runs: crashed: possible deadlock in send_sigio
# git bisect bad 7bc3e6e55acf065500a24621f3b313e7e5998acf
7bc3e6e55acf065500a24621f3b313e7e5998acf is the first bad commit
commit 7bc3e6e55acf065500a24621f3b313e7e5998acf
Author: Eric W. Biederman
Date:   Wed Feb 19 18:22:26 2020 -0600

    proc: Use a list of inodes to flush from proc

    Rework the flushing of proc to use a list of directory inodes that
    need to be flushed.

    The list is kept on struct pid not on struct task_struct, as there is
    a fixed connection between proc inodes and pids but at least for the
    case of de_thread the pid of a task_struct changes.

    This removes the dependency on proc_mnt which allows for different
    mounts of proc having different mount options even in the same pid
    namespace, and this allows for the removal of proc_mnt which will
    trivially allow the first mount of proc to honor its mount options.

    This flushing remains an optimization. The functions pid_delete_dentry
    and pid_revalidate ensure that ordinary dcache management will not
    attempt to use dentries past the point their respective task has died.
    When unused the shrinker will eventually be able to remove these
    dentries.

    There is a case in de_thread where proc_flush_pid can be called early
    for a given pid, which winds up being safe (if suboptimal) as this is
    just an optimization.

    Only pid directories are put on the list, as the other per pid files
    are children of those directories and d_invalidate on the directory
    will get them as well.

    So that the pid can be used during flushing, its reference count is
    taken in release_task and dropped in proc_flush_pid. Further, the call
    of proc_flush_pid is moved after the tasklist_lock is released in
    release_task so that it is certain that the pid has already been
    unhashed when the flushing takes place. This removes a small race
    where a dentry could be recreated.

    As struct pid is supposed to be small and I need a per pid lock I
    reuse the only lock that currently exists in struct pid, the
    wait_pidfd.lock.

    The net result is that this adds all of this functionality with just
    a little extra list management overhead and a single extra pointer in
    struct pid.

    v2: Initialize pid->inodes. I somehow failed to get that
    initialization into the initial version of the patch. A boot failure
    was reported by "kernel test robot", and failure to initialize that
    pid->inodes matches all of the reported symptoms.

    Signed-off-by: Eric W. Biederman

 fs/proc/base.c          | 111 ++++++++++++++++--------------------------------
 fs/proc/inode.c         |   2 +-
 fs/proc/internal.h      |   1 +
 include/linux/pid.h     |   1 +
 include/linux/proc_fs.h |   4 +-
 kernel/exit.c           |   4 +-
 kernel/pid.c            |   1 +
 7 files changed, 45 insertions(+), 79 deletions(-)

culprit signature: dc556607631cda35eebfb432a0cae1d5a3786df9abc8db738859d6595ce6e951
parent signature: d99be6b8d7764cf7127acc45edbbecc23d464121d89559d57973abd6897a59da
revisions tested: 15, total time: 3h57m50.120166307s (build: 1h33m54.577152375s, test: 2h22m52.159878481s)
first bad commit: 7bc3e6e55acf065500a24621f3b313e7e5998acf proc: Use a list of inodes to flush from proc
cc: ["adobriyan@gmail.com" "akpm@linux-foundation.org" "allison@lohutok.net" "areber@redhat.com" "aubrey.li@linux.intel.com" "avagin@gmail.com" "christian@brauner.io" "cyphar@cyphar.com" "ebiederm@xmission.com" "gregkh@linuxfoundation.org" "guro@fb.com" "joel@joelfernandes.org" "keescook@chromium.org" "linmiaohe@huawei.com" "linux-fsdevel@vger.kernel.org" "linux-kernel@vger.kernel.org" "mhocko@suse.com" "mingo@kernel.org" "oleg@redhat.com" "peterz@infradead.org" "sargun@sargun.me" "tglx@linutronix.de" "viro@zeniv.linux.org.uk"]

crash: possible deadlock in send_sigio

========================================================
WARNING: possible irq lock inversion dependency detected
5.6.0-rc2-syzkaller #0 Not tainted
--------------------------------------------------------
ksoftirqd/1/16 just changed the state of lock:
ffffffff88a090d8 (tasklist_lock){.+.?}, at: send_sigio+0x8a/0x270 fs/fcntl.c:798
but this lock took another, SOFTIRQ-unsafe lock in the past:
 (&pid->wait_pidfd){+.+.}

and interrupts could create inverse lock ordering between them.

other info that might help us debug this:

 Possible interrupt unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&pid->wait_pidfd);
                               local_irq_disable();
                               lock(tasklist_lock);
                               lock(&pid->wait_pidfd);
  <Interrupt>
    lock(tasklist_lock);

 *** DEADLOCK ***

8 locks held by ksoftirqd/1/16:
 #0: ffffffff88ba5600 (rcu_read_lock){....}, at: __write_once_size include/linux/compiler.h:226 [inline]
 #0: ffffffff88ba5600 (rcu_read_lock){....}, at: __skb_unlink include/linux/skbuff.h:2046 [inline]
 #0: ffffffff88ba5600 (rcu_read_lock){....}, at: __skb_dequeue include/linux/skbuff.h:2061 [inline]
 #0: ffffffff88ba5600 (rcu_read_lock){....}, at: process_backlog+0x1ab/0x710 net/core/dev.c:6142
 #1: ffffffff88ba5600 (rcu_read_lock){....}, at: __skb_pull include/linux/skbuff.h:2277 [inline]
 #1: ffffffff88ba5600 (rcu_read_lock){....}, at: ip_local_deliver_finish+0x11b/0x2f0 net/ipv4/ip_input.c:228
 #2: ffff8880a32dcee0 (slock-AF_INET/1){+.-.}, at: tcp_v4_rcv+0x25e3/0x34c0 net/ipv4/tcp_ipv4.c:1995
 #3: ffffffff88ba5600 (rcu_read_lock){....}, at: sock_def_error_report+0x0/0x350 include/linux/compiler.h:199
 #4: ffffffff88ba5600 (rcu_read_lock){....}, at: rcu_lock_release include/linux/rcupdate.h:213 [inline]
 #4: ffffffff88ba5600 (rcu_read_lock){....}, at: rcu_read_unlock include/linux/rcupdate.h:655 [inline]
 #4: ffffffff88ba5600 (rcu_read_lock){....}, at: sock_def_error_report+0x14a/0x350 net/core/sock.c:2786
 #5: ffffffff88ba5600 (rcu_read_lock){....}, at: kill_fasync+0x3b/0x380 fs/fcntl.c:1019
 #6: ffff8880a20e6e18 (&new->fa_lock){.+.?}, at: kill_fasync_rcu fs/fcntl.c:1000 [inline]
 #6: ffff8880a20e6e18 (&new->fa_lock){.+.?}, at: kill_fasync+0x121/0x380 fs/fcntl.c:1021
 #7: ffff8880a7463420 (&f->f_owner.lock){.+.?}, at: send_sigio+0x1f/0x270 fs/fcntl.c:784

the shortest dependencies between 2nd lock and 1st lock:
 -> (&pid->wait_pidfd){+.+.} {
    HARDIRQ-ON-W at:
      lock_acquire+0x19b/0x420 kernel/locking/lockdep.c:4484
      __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
      _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:151
      spin_lock include/linux/spinlock.h:338 [inline]
      proc_pid_make_inode+0x1c0/0x390 fs/proc/base.c:1880
      proc_pid_instantiate+0x40/0x140 fs/proc/base.c:3285
      proc_pid_lookup+0x134/0x240 fs/proc/base.c:3320
      proc_root_lookup+0x16/0x40 fs/proc/root.c:243
      __lookup_slow+0x204/0x3d0 fs/namei.c:1757
      lookup_slow fs/namei.c:1774 [inline]
      walk_component+0x684/0xef0 fs/namei.c:1915
      link_path_walk.part.40+0x3c4/0xdc0 fs/namei.c:2238
      link_path_walk fs/namei.c:2172 [inline]
      path_openat+0x194/0x2aa0 fs/namei.c:3606
      do_filp_open+0x171/0x240 fs/namei.c:3637
      do_sys_openat2+0x2b9/0x480 fs/open.c:1149
      do_sys_open+0x85/0xd0 fs/open.c:1165
      do_syscall_64+0xc6/0x5e0 arch/x86/entry/common.c:294
      entry_SYSCALL_64_after_hwframe+0x49/0xbe
    SOFTIRQ-ON-W at:
      lock_acquire+0x19b/0x420 kernel/locking/lockdep.c:4484
      __raw_spin_lock include/linux/spinlock_api_smp.h:142 [inline]
      _raw_spin_lock+0x2a/0x40 kernel/locking/spinlock.c:151
      spin_lock include/linux/spinlock.h:338 [inline]
      proc_pid_make_inode+0x1c0/0x390 fs/proc/base.c:1880
      proc_pid_instantiate+0x40/0x140 fs/proc/base.c:3285
      proc_pid_lookup+0x134/0x240 fs/proc/base.c:3320
      proc_root_lookup+0x16/0x40 fs/proc/root.c:243
      __lookup_slow+0x204/0x3d0 fs/namei.c:1757
      lookup_slow fs/namei.c:1774 [inline]
      walk_component+0x684/0xef0 fs/namei.c:1915
      link_path_walk.part.40+0x3c4/0xdc0 fs/namei.c:2238
      link_path_walk fs/namei.c:2172 [inline]
      path_openat+0x194/0x2aa0 fs/namei.c:3606
      do_filp_open+0x171/0x240 fs/namei.c:3637
      do_sys_openat2+0x2b9/0x480 fs/open.c:1149
      do_sys_open+0x85/0xd0 fs/open.c:1165
      do_syscall_64+0xc6/0x5e0 arch/x86/entry/common.c:294
      entry_SYSCALL_64_after_hwframe+0x49/0xbe
    INITIAL USE at:
      lock_acquire+0x19b/0x420 kernel/locking/lockdep.c:4484
      __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
      _raw_spin_lock_irqsave+0x95/0xc0 kernel/locking/spinlock.c:159
      __wake_up_common_lock+0xa8/0x120 kernel/sched/wait.c:122
      do_notify_pidfd kernel/signal.c:1895 [inline]
      do_notify_parent+0x14c/0xbb0 kernel/signal.c:1922
      exit_notify kernel/exit.c:668 [inline]
      do_exit+0x206f/0x2a10 kernel/exit.c:824
      call_usermodehelper_exec_async+0x47f/0x680 kernel/umh.c:125
      ret_from_fork+0x24/0x30 arch/x86/entry/entry_64.S:352
  }
  ... key at: [] __key.53243+0x0/0x40
  ... acquired at:
    __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
    _raw_spin_lock_irqsave+0x95/0xc0 kernel/locking/spinlock.c:159
    __wake_up_common_lock+0xa8/0x120 kernel/sched/wait.c:122
    do_notify_pidfd kernel/signal.c:1895 [inline]
    do_notify_parent+0x14c/0xbb0 kernel/signal.c:1922
    exit_notify kernel/exit.c:668 [inline]
    do_exit+0x206f/0x2a10 kernel/exit.c:824
    call_usermodehelper_exec_async+0x47f/0x680 kernel/umh.c:125
    ret_from_fork+0x24/0x30 arch/x86/entry/entry_64.S:352

 -> (tasklist_lock){.+.?} {
    HARDIRQ-ON-R at:
      lock_acquire+0x19b/0x420 kernel/locking/lockdep.c:4484
      __raw_read_lock include/linux/rwlock_api_smp.h:149 [inline]
      _raw_read_lock+0x2d/0x40 kernel/locking/spinlock.c:223
      do_wait+0x364/0x840 kernel/exit.c:1444
      kernel_wait4+0xdf/0x1b0 kernel/exit.c:1619
      call_usermodehelper_exec_sync kernel/umh.c:150 [inline]
      call_usermodehelper_exec_work+0x134/0x210 kernel/umh.c:187
      process_one_work+0x903/0x15c0 kernel/workqueue.c:2264
      worker_thread+0x82/0xb50 kernel/workqueue.c:2410
      kthread+0x31d/0x3e0 kernel/kthread.c:255
      ret_from_fork+0x24/0x30 arch/x86/entry/entry_64.S:352
    IN-SOFTIRQ-R at:
      lock_acquire+0x19b/0x420 kernel/locking/lockdep.c:4484
      __raw_read_lock include/linux/rwlock_api_smp.h:149 [inline]
      _raw_read_lock+0x2d/0x40 kernel/locking/spinlock.c:223
      send_sigio+0x8a/0x270 fs/fcntl.c:798
      kill_fasync_rcu fs/fcntl.c:1007 [inline]
      kill_fasync+0x1b6/0x380 fs/fcntl.c:1021
      sock_wake_async+0x85/0x110 net/socket.c:1337
      sk_wake_async include/net/sock.h:2200 [inline]
      sock_def_error_report+0x1df/0x350 net/core/sock.c:2785
      tcp_rcv_synsent_state_process net/ipv4/tcp_input.c:5937 [inline]
      tcp_rcv_state_process+0x2c46/0x4b20 net/ipv4/tcp_input.c:6200
      tcp_v4_do_rcv+0x2b7/0x790 net/ipv4/tcp_ipv4.c:1641
      tcp_v4_rcv+0x2825/0x34c0 net/ipv4/tcp_ipv4.c:2001
      ip_protocol_deliver_rcu+0x53/0x690 net/ipv4/ip_input.c:204
      ip_local_deliver_finish+0x200/0x2f0 net/ipv4/ip_input.c:231
      NF_HOOK include/linux/netfilter.h:307 [inline]
      ip_local_deliver+0x2e5/0x3e0 net/ipv4/ip_input.c:252
      NF_HOOK include/linux/netfilter.h:307 [inline]
      ip_rcv+0xc9/0x2e0 net/ipv4/ip_input.c:538
      __netif_receive_skb_one_core+0xe3/0x150 net/core/dev.c:5198
      process_backlog+0x1f2/0x710 net/core/dev.c:6144
      napi_poll net/core/dev.c:6582 [inline]
      net_rx_action+0x415/0xd70 net/core/dev.c:6650
      __do_softirq+0x26e/0x9b2 kernel/softirq.c:292
      run_ksoftirqd+0x8f/0x100 kernel/softirq.c:603
      smpboot_thread_fn+0x511/0x850 kernel/smpboot.c:165
      kthread+0x31d/0x3e0 kernel/kthread.c:255
      ret_from_fork+0x24/0x30 arch/x86/entry/entry_64.S:352
    SOFTIRQ-ON-R at:
      lock_acquire+0x19b/0x420 kernel/locking/lockdep.c:4484
      __raw_read_lock include/linux/rwlock_api_smp.h:149 [inline]
      _raw_read_lock+0x2d/0x40 kernel/locking/spinlock.c:223
      do_wait+0x364/0x840 kernel/exit.c:1444
      kernel_wait4+0xdf/0x1b0 kernel/exit.c:1619
      call_usermodehelper_exec_sync kernel/umh.c:150 [inline]
      call_usermodehelper_exec_work+0x134/0x210 kernel/umh.c:187
      process_one_work+0x903/0x15c0 kernel/workqueue.c:2264
      worker_thread+0x82/0xb50 kernel/workqueue.c:2410
      kthread+0x31d/0x3e0 kernel/kthread.c:255
      ret_from_fork+0x24/0x30 arch/x86/entry/entry_64.S:352
    INITIAL USE at:
      lock_acquire+0x19b/0x420 kernel/locking/lockdep.c:4484
      __raw_write_lock_irq include/linux/rwlock_api_smp.h:196 [inline]
      _raw_write_lock_irq+0x5e/0x80 kernel/locking/spinlock.c:311
      copy_process+0x35da/0x6a50 kernel/fork.c:2203
      _do_fork+0xf8/0xc00 kernel/fork.c:2430
      kernel_thread+0x98/0xd0 kernel/fork.c:2517
      rest_init+0x21/0x26e init/main.c:620
      start_kernel+0x6c1/0x6ff init/main.c:992
      secondary_startup_64+0xa4/0xb0 arch/x86/kernel/head_64.S:242
  }
  ... key at: [] tasklist_lock+0x18/0x40
  ... acquired at:
    mark_lock_irq kernel/locking/lockdep.c:3316 [inline]
    mark_lock+0x501/0x11a0 kernel/locking/lockdep.c:3665
    mark_usage kernel/locking/lockdep.c:3557 [inline]
    __lock_acquire+0x145e/0x4370 kernel/locking/lockdep.c:3908
    lock_acquire+0x19b/0x420 kernel/locking/lockdep.c:4484
    __raw_read_lock include/linux/rwlock_api_smp.h:149 [inline]
    _raw_read_lock+0x2d/0x40 kernel/locking/spinlock.c:223
    send_sigio+0x8a/0x270 fs/fcntl.c:798
    kill_fasync_rcu fs/fcntl.c:1007 [inline]
    kill_fasync+0x1b6/0x380 fs/fcntl.c:1021
    sock_wake_async+0x85/0x110 net/socket.c:1337
    sk_wake_async include/net/sock.h:2200 [inline]
    sock_def_error_report+0x1df/0x350 net/core/sock.c:2785
    tcp_rcv_synsent_state_process net/ipv4/tcp_input.c:5937 [inline]
    tcp_rcv_state_process+0x2c46/0x4b20 net/ipv4/tcp_input.c:6200
    tcp_v4_do_rcv+0x2b7/0x790 net/ipv4/tcp_ipv4.c:1641
    tcp_v4_rcv+0x2825/0x34c0 net/ipv4/tcp_ipv4.c:2001
    ip_protocol_deliver_rcu+0x53/0x690 net/ipv4/ip_input.c:204
    ip_local_deliver_finish+0x200/0x2f0 net/ipv4/ip_input.c:231
    NF_HOOK include/linux/netfilter.h:307 [inline]
    ip_local_deliver+0x2e5/0x3e0 net/ipv4/ip_input.c:252
    NF_HOOK include/linux/netfilter.h:307 [inline]
    ip_rcv+0xc9/0x2e0 net/ipv4/ip_input.c:538
    __netif_receive_skb_one_core+0xe3/0x150 net/core/dev.c:5198
    process_backlog+0x1f2/0x710 net/core/dev.c:6144
    napi_poll net/core/dev.c:6582 [inline]
    net_rx_action+0x415/0xd70 net/core/dev.c:6650
    __do_softirq+0x26e/0x9b2 kernel/softirq.c:292
    run_ksoftirqd+0x8f/0x100 kernel/softirq.c:603
    smpboot_thread_fn+0x511/0x850 kernel/smpboot.c:165
    kthread+0x31d/0x3e0 kernel/kthread.c:255
    ret_from_fork+0x24/0x30 arch/x86/entry/entry_64.S:352

stack backtrace:
CPU: 1 PID: 16 Comm: ksoftirqd/1 Not tainted 5.6.0-rc2-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/01/2011
Call Trace:
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack+0x128/0x182 lib/dump_stack.c:118
 print_irq_inversion_bug kernel/locking/lockdep.c:3179 [inline]
 check_usage_forwards.cold.61+0x20/0x29 kernel/locking/lockdep.c:3203
 mark_lock_irq kernel/locking/lockdep.c:3316 [inline]
 mark_lock+0x501/0x11a0 kernel/locking/lockdep.c:3665
 mark_usage kernel/locking/lockdep.c:3557 [inline]
 __lock_acquire+0x145e/0x4370 kernel/locking/lockdep.c:3908
 lock_acquire+0x19b/0x420 kernel/locking/lockdep.c:4484
 __raw_read_lock include/linux/rwlock_api_smp.h:149 [inline]
 _raw_read_lock+0x2d/0x40 kernel/locking/spinlock.c:223
 send_sigio+0x8a/0x270 fs/fcntl.c:798
 kill_fasync_rcu fs/fcntl.c:1007 [inline]
 kill_fasync+0x1b6/0x380 fs/fcntl.c:1021
 sock_wake_async+0x85/0x110 net/socket.c:1337
 sk_wake_async include/net/sock.h:2200 [inline]
 sock_def_error_report+0x1df/0x350 net/core/sock.c:2785
 tcp_rcv_synsent_state_process net/ipv4/tcp_input.c:5937 [inline]
 tcp_rcv_state_process+0x2c46/0x4b20 net/ipv4/tcp_input.c:6200
 tcp_v4_do_rcv+0x2b7/0x790 net/ipv4/tcp_ipv4.c:1641
 tcp_v4_rcv+0x2825/0x34c0 net/ipv4/tcp_ipv4.c:2001
 ip_protocol_deliver_rcu+0x53/0x690 net/ipv4/ip_input.c:204
 ip_local_deliver_finish+0x200/0x2f0 net/ipv4/ip_input.c:231
 NF_HOOK include/linux/netfilter.h:307 [inline]
 ip_local_deliver+0x2e5/0x3e0 net/ipv4/ip_input.c:252
 NF_HOOK include/linux/netfilter.h:307 [inline]
 ip_rcv+0xc9/0x2e0 net/ipv4/ip_input.c:538
 __netif_receive_skb_one_core+0xe3/0x150 net/core/dev.c:5198
 process_backlog+0x1f2/0x710 net/core/dev.c:6144
 napi_poll net/core/dev.c:6582 [inline]
 net_rx_action+0x415/0xd70 net/core/dev.c:6650
 __do_softirq+0x26e/0x9b2 kernel/softirq.c:292
 run_ksoftirqd+0x8f/0x100 kernel/softirq.c:603
 smpboot_thread_fn+0x511/0x850 kernel/smpboot.c:165
 kthread+0x31d/0x3e0 kernel/kthread.c:255
 ret_from_fork+0x24/0x30 arch/x86/entry/entry_64.S:352
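
Note on the report: it describes a classic irq/softirq lock inversion. The culprit commit takes the new per-pid lock, pid->wait_pidfd.lock, with a plain spin_lock() (softirqs enabled) in proc_pid_make_inode(); the same lock is also acquired while tasklist_lock is held in the exit path (do_notify_parent()/do_notify_pidfd(), per the "acquired at" trace); and tasklist_lock is read-locked from the network RX softirq via kill_fasync() -> send_sigio(). A softirq spinning on tasklist_lock on a CPU that already holds wait_pidfd.lock completes the CPU0/CPU1 cycle in the scenario above. The sketch below is a minimal, hypothetical kernel-module-style illustration of that usage pattern only; lock_a, lock_b, the demo_* names and the module itself are invented stand-ins, not the kernel's actual code, and no particular fix is implied.

/* demo_inversion.c - hypothetical sketch of the lock-usage pattern flagged
 * above: lock_a plays the role of tasklist_lock, lock_b the role of
 * pid->wait_pidfd.lock. On a lockdep-enabled kernel this arrangement would
 * be expected to trigger a similar "possible irq lock inversion" report. */
#include <linux/module.h>
#include <linux/spinlock.h>
#include <linux/timer.h>
#include <linux/jiffies.h>

static DEFINE_SPINLOCK(lock_a);		/* stand-in for tasklist_lock */
static DEFINE_SPINLOCK(lock_b);		/* stand-in for &pid->wait_pidfd.lock */
static struct timer_list demo_timer;

/* Process context, softirqs enabled: lock_b is used "SOFTIRQ-unsafe",
 * like the plain spin_lock() in proc_pid_make_inode(). */
static void take_b_softirq_unsafe(void)
{
	spin_lock(&lock_b);
	/* ...touch per-object state... */
	spin_unlock(&lock_b);
}

/* Establishes the ordering lock_a -> lock_b, like do_notify_parent()
 * taking wait_pidfd.lock while tasklist_lock is held on exit. */
static void take_b_under_a(void)
{
	unsigned long flags;

	spin_lock_irqsave(&lock_a, flags);
	spin_lock(&lock_b);
	spin_unlock(&lock_b);
	spin_unlock_irqrestore(&lock_a, flags);
}

/* Timer callbacks run in softirq context; taking lock_a here mirrors
 * send_sigio() read-locking tasklist_lock from the network RX softirq.
 * If this fires on a CPU currently inside take_b_softirq_unsafe() while
 * another CPU sits in take_b_under_a(), the three acquisitions form the
 * CPU0/CPU1 deadlock cycle shown in the report. */
static void take_a_in_softirq(struct timer_list *unused)
{
	spin_lock(&lock_a);
	spin_unlock(&lock_a);
}

static int __init demo_init(void)
{
	take_b_softirq_unsafe();
	take_b_under_a();
	timer_setup(&demo_timer, take_a_in_softirq, 0);
	mod_timer(&demo_timer, jiffies + HZ);
	return 0;
}

static void __exit demo_exit(void)
{
	del_timer_sync(&demo_timer);
}

module_init(demo_init);
module_exit(demo_exit);
MODULE_LICENSE("GPL");

In this sketch the cycle disappears if lock_b is always taken with softirqs (or irqs) disabled, or if lock_a is never taken from softirq context; that is the general shape of the trade-off for this class of lockdep report, independent of how the kernel ultimately resolved this particular one.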