syzbot


INFO: task hung in __rpc_execute (2)

Status: auto-obsoleted due to no activity on 2025/04/22 16:32
Subsystems: net nfs
First crash: 192d, last: 192d
Similar bugs (1)
Kernel: upstream
Title: INFO: task hung in __rpc_execute [net nfs]
Rank: 1
Count: 15
Last: 295d
Reported: 394d
Patched: 0/29
Status: auto-obsoleted due to no activity on 2025/01/09 14:47

Sample crash report:
INFO: task syz.1.15428:6452 blocked for more than 143 seconds.
      Not tainted 6.13.0-syzkaller-02526-gc4b9570cfb63 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.1.15428     state:R  running task     stack:19936 pid:6452  tgid:6449  ppid:5817   flags:0x00004006
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5373 [inline]
 __schedule+0x181a/0x4b90 kernel/sched/core.c:6760
 __schedule_loop kernel/sched/core.c:6837 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6852
 rpc_wait_bit_killable+0x1b/0x160 net/sunrpc/sched.c:279
 __wait_on_bit+0xb0/0x2f0 kernel/sched/wait_bit.c:49
 out_of_line_wait_on_bit+0x1d5/0x260 kernel/sched/wait_bit.c:64
 __rpc_execute+0x723/0x1420 net/sunrpc/sched.c:987
 rpc_execute+0x1f5/0x3f0 net/sunrpc/sched.c:1025
 rpc_run_task+0x562/0x6c0 net/sunrpc/clnt.c:1245
 rpc_call_sync+0x197/0x2e0 net/sunrpc/clnt.c:1274
 rpcb_register_call net/sunrpc/rpcb_clnt.c:412 [inline]
 rpcb_register+0x36b/0x670 net/sunrpc/rpcb_clnt.c:476
 __svc_unregister net/sunrpc/svc.c:1201 [inline]
 svc_unregister+0x25d/0x7b0 net/sunrpc/svc.c:1230
 svc_rpcb_setup net/sunrpc/svc.c:430 [inline]
 svc_bind+0x1ea/0x230 net/sunrpc/svc.c:463
 nfsd_create_serv+0x715/0xc30 fs/nfsd/nfssvc.c:672
 nfsd_nl_listener_set_doit+0x135/0x1a90 fs/nfsd/nfsctl.c:1966
 genl_family_rcv_msg_doit net/netlink/genetlink.c:1115 [inline]
 genl_family_rcv_msg net/netlink/genetlink.c:1195 [inline]
 genl_rcv_msg+0xb14/0xec0 net/netlink/genetlink.c:1210
 netlink_rcv_skb+0x1e3/0x430 net/netlink/af_netlink.c:2542
 genl_rcv+0x28/0x40 net/netlink/genetlink.c:1219
 netlink_unicast_kernel net/netlink/af_netlink.c:1321 [inline]
 netlink_unicast+0x7f6/0x990 net/netlink/af_netlink.c:1347
 netlink_sendmsg+0x8e4/0xcb0 net/netlink/af_netlink.c:1891
 sock_sendmsg_nosec net/socket.c:711 [inline]
 __sock_sendmsg+0x221/0x270 net/socket.c:726
 ____sys_sendmsg+0x52a/0x7e0 net/socket.c:2583
 ___sys_sendmsg net/socket.c:2637 [inline]
 __sys_sendmsg+0x269/0x350 net/socket.c:2669
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f778b985d29
RSP: 002b:00007f778c79d038 EFLAGS: 00000246 ORIG_RAX: 000000000000002e
RAX: ffffffffffffffda RBX: 00007f778bb75fa0 RCX: 00007f778b985d29
RDX: 0000000000000000 RSI: 0000000020000040 RDI: 0000000000000004
RBP: 00007f778ba01b08 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007f778bb75fa0 R15: 00007ffc0fb2dad8
 </TASK>

Showing all locks held in the system:
1 lock held by khungtaskd/30:
 #0: ffffffff8e93a3e0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:337 [inline]
 #0: ffffffff8e93a3e0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:849 [inline]
 #0: ffffffff8e93a3e0 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x55/0x2a0 kernel/locking/lockdep.c:6746
4 locks held by kworker/u8:5/1157:
 #0: ffff8880b873e798 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x2a/0x140 kernel/sched/core.c:598
 #1: ffff8880b8728948 (&per_cpu_ptr(group->pcpu, cpu)->seq){-.-.}-{0:0}, at: psi_task_switch+0x41d/0x7a0 kernel/sched/psi.c:987
 #2: ffff8880b872a398 (&base->lock){-.-.}-{2:2}, at: lock_timer_base kernel/time/timer.c:1046 [inline]
 #2: ffff8880b872a398 (&base->lock){-.-.}-{2:2}, at: __mod_timer+0x24a/0x10e0 kernel/time/timer.c:1127
 #3: ffffffff9a5f64c8 (&obj_hash[i].lock){-.-.}-{2:2}, at: debug_object_activate+0x17f/0x580 lib/debugobjects.c:818
2 locks held by getty/5572:
 #0: ffff8880351920a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc90002fde2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x6a6/0x1e00 drivers/tty/n_tty.c:2211
5 locks held by kworker/0:0/26908:
 #0: ffff8880226d9148 ((wq_completion)usb_hub_wq){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3211 [inline]
 #0: ffff8880226d9148 ((wq_completion)usb_hub_wq){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1840 kernel/workqueue.c:3317
 #1: ffffc90010517c60 ((work_completion)(&hub->events)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3212 [inline]
 #1: ffffc90010517c60 ((work_completion)(&hub->events)){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1840 kernel/workqueue.c:3317
 #2: ffff88802a5b6190 (&dev->mutex){....}-{4:4}, at: device_lock include/linux/device.h:1014 [inline]
 #2: ffff88802a5b6190 (&dev->mutex){....}-{4:4}, at: hub_event+0x1fe/0x5150 drivers/usb/core/hub.c:5851
 #3: ffff88802a669510 (&port_dev->status_lock){+.+.}-{4:4}, at: usb_lock_port drivers/usb/core/hub.c:3208 [inline]
 #3: ffff88802a669510 (&port_dev->status_lock){+.+.}-{4:4}, at: hub_port_connect drivers/usb/core/hub.c:5420 [inline]
 #3: ffff88802a669510 (&port_dev->status_lock){+.+.}-{4:4}, at: hub_port_connect_change drivers/usb/core/hub.c:5663 [inline]
 #3: ffff88802a669510 (&port_dev->status_lock){+.+.}-{4:4}, at: port_event drivers/usb/core/hub.c:5823 [inline]
 #3: ffff88802a669510 (&port_dev->status_lock){+.+.}-{4:4}, at: hub_event+0x25b9/0x5150 drivers/usb/core/hub.c:5905
 #4: ffff8880285d4668 (hcd->address0_mutex){+.+.}-{4:4}, at: hub_port_connect drivers/usb/core/hub.c:5421 [inline]
 #4: ffff8880285d4668 (hcd->address0_mutex){+.+.}-{4:4}, at: hub_port_connect_change drivers/usb/core/hub.c:5663 [inline]
 #4: ffff8880285d4668 (hcd->address0_mutex){+.+.}-{4:4}, at: port_event drivers/usb/core/hub.c:5823 [inline]
 #4: ffff8880285d4668 (hcd->address0_mutex){+.+.}-{4:4}, at: hub_event+0x25f7/0x5150 drivers/usb/core/hub.c:5905
2 locks held by syz.0.12490/32343:
 #0: ffffffff8fcb1888 (rtnl_mutex){+.+.}-{4:4}, at: tun_detach drivers/net/tun.c:698 [inline]
 #0: ffffffff8fcb1888 (rtnl_mutex){+.+.}-{4:4}, at: tun_chr_close+0x3b/0x1b0 drivers/net/tun.c:3517
 #1: ffffffff8e93f8b8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock kernel/rcu/tree_exp.h:302 [inline]
 #1: ffffffff8e93f8b8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: synchronize_rcu_expedited+0x381/0x830 kernel/rcu/tree_exp.h:996
2 locks held by syz.1.15428/6452:
 #0: ffffffff8fd14970 (cb_lock){++++}-{4:4}, at: genl_rcv+0x19/0x40 net/netlink/genetlink.c:1218
 #1: ffffffff8ec05c08 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_nl_listener_set_doit+0x12d/0x1a90 fs/nfsd/nfsctl.c:1964

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 30 Comm: khungtaskd Not tainted 6.13.0-syzkaller-02526-gc4b9570cfb63 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 12/27/2024
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x49c/0x4d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x198/0x320 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:162 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:234 [inline]
 watchdog+0xff6/0x1040 kernel/hung_task.c:397
 kthread+0x7a9/0x920 kernel/kthread.c:464
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 32343 Comm: syz.0.12490 Not tainted 6.13.0-syzkaller-02526-gc4b9570cfb63 #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 12/27/2024
RIP: 0010:__bfs kernel/locking/lockdep.c:1813 [inline]
RIP: 0010:__bfs_backwards kernel/locking/lockdep.c:1858 [inline]
RIP: 0010:check_irq_usage kernel/locking/lockdep.c:2829 [inline]
RIP: 0010:check_prev_add kernel/locking/lockdep.c:3167 [inline]
RIP: 0010:check_prevs_add kernel/locking/lockdep.c:3282 [inline]
RIP: 0010:validate_chain+0x1fc2/0x5920 kernel/locking/lockdep.c:3906
Code: 80 3c 20 00 74 08 4c 89 f7 e8 aa b9 87 00 48 09 9c 24 98 00 00 00 4d 8b 3e 4d 39 f7 0f 84 6e fc ff ff 41 b4 01 eb 0c 4d 8b 3f <4d> 39 f7 0f 84 5d fc ff ff 49 8d 5f 30 48 89 d8 48 c1 e8 03 48 b9
RSP: 0018:ffffc9000ea16b60 EFLAGS: 00000046
RAX: 1ffffffff2dbabdf RBX: ffffffff96dd5f28 RCX: dffffc0000000000
RDX: 0000000000000000 RSI: 0000000000000008 RDI: ffffffff93c78378
RBP: ffffc9000ea16e60 R08: ffffffff9429c84f R09: 1ffffffff2853909
R10: dffffc0000000000 R11: fffffbfff285390a R12: 0000000000000000
R13: ffffffff96c52418 R14: ffffffff93c78348 R15: ffffffff93c78348
FS:  0000000000000000(0000) GS:ffff8880b8600000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fdaa5547ab8 CR3: 000000000e738000 CR4: 00000000003526f0
DR0: 0000000000000000 DR1: 0000000000000000 DR2: 0000000000000000
DR3: 0000000000000000 DR6: 00000000fffe0ff0 DR7: 0000000000000400
Call Trace:
 <NMI>
 </NMI>
 <TASK>
 __lock_acquire+0x1397/0x2100 kernel/locking/lockdep.c:5228
 lock_acquire+0x1ed/0x550 kernel/locking/lockdep.c:5851
 do_write_seqcount_begin_nested include/linux/seqlock.h:476 [inline]
 do_write_seqcount_begin include/linux/seqlock.h:502 [inline]
 psi_account_irqtime+0x350/0x830 kernel/sched/psi.c:1026
 __schedule+0x927/0x4b90 kernel/sched/core.c:6753
 preempt_schedule_irq+0xfb/0x1c0 kernel/sched/core.c:7082
 irqentry_exit+0x5e/0x90 kernel/entry/common.c:354
 asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:702
RIP: 0010:queue_work_on+0x269/0x380 kernel/workqueue.c:2395
Code: 75 19 e8 ea ec 37 00 eb 18 e8 e3 ec 37 00 e8 0e bf 5f 0a 48 83 7c 24 10 00 74 e7 e8 d1 ec 37 00 fb 48 c7 44 24 20 0e 36 e0 45 <4b> c7 04 37 00 00 00 00 43 c7 44 37 09 00 00 00 00 66 43 c7 44 37
RSP: 0018:ffffc9000ea174c0 EFLAGS: 00000293
RAX: ffffffff81678e9f RBX: 0000000000000000 RCX: ffff888034c79e00
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000000
RBP: ffffc9000ea175b0 R08: ffffffff81678e6f R09: 1ffffffff2853908
R10: dffffc0000000000 R11: fffffbfff2853909 R12: 001fffffffc00001
R13: 0000000000000046 R14: 1ffff92001d42e9c R15: dffffc0000000000
 flush_all_backlogs net/core/dev.c:6068 [inline]
 unregister_netdevice_many_notify+0x793/0x1da0 net/core/dev.c:11526
 unregister_netdevice_many net/core/dev.c:11609 [inline]
 unregister_netdevice_queue+0x303/0x370 net/core/dev.c:11481
 unregister_netdevice include/linux/netdevice.h:3192 [inline]
 __tun_detach+0x6b9/0x1600 drivers/net/tun.c:685
 tun_detach drivers/net/tun.c:701 [inline]
 tun_chr_close+0x105/0x1b0 drivers/net/tun.c:3517
 __fput+0x23c/0xa50 fs/file_table.c:450
 task_work_run+0x24f/0x310 kernel/task_work.c:239
 exit_task_work include/linux/task_work.h:43 [inline]
 do_exit+0xa2a/0x28e0 kernel/exit.c:938
 do_group_exit+0x207/0x2c0 kernel/exit.c:1087
 get_signal+0x16b2/0x1750 kernel/signal.c:3036
 arch_do_signal_or_restart+0x96/0x860 arch/x86/kernel/signal.c:337
 exit_to_user_mode_loop kernel/entry/common.c:111 [inline]
 exit_to_user_mode_prepare include/linux/entry-common.h:329 [inline]
 __syscall_exit_to_user_mode_work kernel/entry/common.c:207 [inline]
 syscall_exit_to_user_mode+0xce/0x340 kernel/entry/common.c:218
 do_syscall_64+0x100/0x230 arch/x86/entry/common.c:89
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fcbd3585d29
Code: Unable to access opcode bytes at 0x7fcbd3585cff.
RSP: 002b:00007fcbd43ad038 EFLAGS: 00000246 ORIG_RAX: 000000000000002e
RAX: 0000000000000014 RBX: 00007fcbd3775fa0 RCX: 00007fcbd3585d29
RDX: 0000000000000000 RSI: 0000000020000040 RDI: 0000000000000004
RBP: 00007fcbd3601b08 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007fcbd3775fa0 R15: 00007ffdbcc5b988
 </TASK>

Crashes (1):
Time: 2025/01/22 16:23
Kernel: upstream
Commit: c4b9570cfb63
Syzkaller: 25e17fd3
Config: .config
Log: console log
Report: report
VM info: info
Assets: [disk image] [vmlinux] [kernel image]
Manager: ci-upstream-kasan-gce-smack-root
Title: INFO: task hung in __rpc_execute