syzbot


INFO: task hung in anon_pipe_write

Status: upstream: reported C repro on 2025/05/01 08:08
Subsystems: nfs netfs
Reported-by: syzbot+ef2c1c404cbcbcc66453@syzkaller.appspotmail.com
First crash: 193d, last: 38d
Cause bisection: introduced by (bisect log):
commit 7ba167c4c73ed96eb002c98a9d7d49317dfb0191
Author: David Howells <dhowells@redhat.com>
Date: Mon Mar 18 16:57:31 2024 +0000

  netfs: Switch to using unsigned long long rather than loff_t

Crash: INFO: task hung in pipe_write (log)
Repro: C syz .config
  
Fix bisection: fixed by (bisect log):
commit 290434474c332a2ba9c8499fe699c7f2e1153280
Author: Tingmao Wang <m@maowtm.org>
Date: Sun Apr 6 16:18:42 2025 +0000

  fs/9p: Refresh metadata in d_revalidate for uncached mode too

  
Discussions (3)
Title Replies (including bot) Last reply
[syzbot] [netfs?] INFO: task hung in anon_pipe_write 0 (2) 2025/11/05 05:48
[syzbot] Monthly netfs report (Aug 2025) 0 (1) 2025/08/29 12:52
[syzbot] Monthly netfs report (Jun 2025) 0 (1) 2025/06/23 07:30
Last patch testing requests (10)
Created Duration User Patch Repo Result
2025/11/05 17:13 57m retest repro upstream OK log
2025/11/05 17:13 52m retest repro upstream OK log
2025/11/05 17:13 23m retest repro upstream OK log
2025/08/27 13:19 21m retest repro upstream report log
2025/08/27 13:19 22m retest repro upstream report log
2025/08/27 13:19 18m retest repro upstream report log
2025/06/06 02:50 29m retest repro upstream report log
2025/06/06 02:50 20m retest repro upstream report log
2025/06/06 02:50 21m retest repro upstream report log
2025/06/06 02:50 28m retest repro upstream report log
Fix bisection attempts (3)
Created Duration User Patch Repo Result
2025/11/04 23:42 6h04m bisect fix upstream OK (1) job log
2025/09/28 06:24 2h13m bisect fix upstream OK (0) job log log
2025/08/13 02:57 3h57m bisect fix upstream OK (0) job log log

Sample crash report:
INFO: task syz.4.1593:17955 blocked for more than 143 seconds.
      Not tainted 6.16.0-rc5-syzkaller-00038-g733923397fd9 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.4.1593      state:D stack:25352 pid:17955 tgid:17953 ppid:5855   task_flags:0x400140 flags:0x00004004
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5401 [inline]
 __schedule+0x16f5/0x4d00 kernel/sched/core.c:6790
 __schedule_loop kernel/sched/core.c:6868 [inline]
 schedule+0x165/0x360 kernel/sched/core.c:6883
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6940
 __mutex_lock_common kernel/locking/mutex.c:679 [inline]
 __mutex_lock+0x724/0xe80 kernel/locking/mutex.c:747
 anon_pipe_write+0xbf4/0x1360 fs/pipe.c:572
 new_sync_write fs/read_write.c:593 [inline]
 vfs_write+0x54b/0xa90 fs/read_write.c:686
 ksys_write+0x145/0x250 fs/read_write.c:738
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0x3b0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f0f2658e929
RSP: 002b:00007f0f2735a038 EFLAGS: 00000246
 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 00007f0f267b5fa0 RCX: 00007f0f2658e929
RDX: 00000000fffffecc RSI: 0000200000000000 RDI: 0000000000000005
RBP: 00007f0f26610b39 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 0000000000000000 R14: 00007f0f267b5fa0 R15: 00007ffc5757a828
 </TASK>

Showing all locks held in the system:
1 lock held by khungtaskd/31:
 #0: ffffffff8e13f160 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8e13f160 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:841 [inline]
 #0: ffffffff8e13f160 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x2e/0x180 kernel/locking/lockdep.c:6770
3 locks held by kworker/u8:2/36:
 #0: ffff88801a481148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88801a481148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x17b0 kernel/workqueue.c:3321
 #1: ffffc90000ac7bc0 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc90000ac7bc0 ((linkwatch_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x17b0 kernel/workqueue.c:3321
 #2: ffffffff8f51d808 (rtnl_mutex){+.+.}-{4:4}, at: linkwatch_event+0xe/0x60 net/core/link_watch.c:303
2 locks held by kworker/u8:4/65:
2 locks held by kworker/u8:7/1134:
2 locks held by udevd/5211:
1 lock held by dhcpcd/5506:
 #0: ffffffff8f51d808 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff8f51d808 (rtnl_mutex){+.+.}-{4:4}, at: devinet_ioctl+0x323/0x1b50 net/ipv4/devinet.c:1121
2 locks held by getty/5598:
 #0: ffff88814c6b10a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc9000333b2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x43e/0x1400 drivers/tty/n_tty.c:2222
1 lock held by udevd/5839:
2 locks held by kworker/1:3/5915:
 #0: ffff8880b8639f98 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0xad/0x140 kernel/sched/core.c:614
 #1: ffffffff8e13f160 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #1: ffffffff8e13f160 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:841 [inline]
 #1: ffffffff8e13f160 (rcu_read_lock){....}-{1:3}, at: __bpf_trace_run kernel/trace/bpf_trace.c:2257 [inline]
 #1: ffffffff8e13f160 (rcu_read_lock){....}-{1:3}, at: bpf_trace_run4+0x19c/0x4a0 kernel/trace/bpf_trace.c:2301
5 locks held by kworker/u8:13/6525:
 #0: ffff88801b2f6148 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88801b2f6148 ((wq_completion)netns){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x17b0 kernel/workqueue.c:3321
 #1: ffffc9001e917bc0 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc9001e917bc0 (net_cleanup_work){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x17b0 kernel/workqueue.c:3321
 #2: ffffffff8f510c10 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0xf7/0x800 net/core/net_namespace.c:662
 #3: ffffffff8f51d808 (rtnl_mutex){+.+.}-{4:4}, at: default_device_exit_batch+0xdc/0x890 net/core/dev.c:12630
 #4: ffffffff8e144c78 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock kernel/rcu/tree_exp.h:336 [inline]
 #4: ffffffff8e144c78 (rcu_state.exp_mutex){+.+.}-{4:4}, at: synchronize_rcu_expedited+0x3b9/0x730 kernel/rcu/tree_exp.h:998
1 lock held by syz-executor/16177:
 #0: ffffffff8e144c78 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock kernel/rcu/tree_exp.h:336 [inline]
 #0: ffffffff8e144c78 (rcu_state.exp_mutex){+.+.}-{4:4}, at: synchronize_rcu_expedited+0x3b9/0x730 kernel/rcu/tree_exp.h:998
1 lock held by syz.4.1593/17955:
 #0: ffff888053a52868 (&pipe->mutex){+.+.}-{4:4}, at: anon_pipe_write+0xbf4/0x1360 fs/pipe.c:572
3 locks held by syz.4.1593/17956:
 #0: ffff888053a52868 (&pipe->mutex){+.+.}-{4:4}, at: splice_to_socket+0xf5/0xf10 fs/splice.c:804
 #1: ffffffff8f5839b0 (cb_lock){++++}-{4:4}, at: genl_rcv+0x19/0x40 net/netlink/genetlink.c:1218
 #2: ffffffff8e41d708 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_nl_listener_set_doit+0x12a/0x1650 fs/nfsd/nfsctl.c:1922
3 locks held by kworker/u8:3/19937:
 #0: ffff88802fd79148 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88802fd79148 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x17b0 kernel/workqueue.c:3321
 #1: ffffc90011f3fbc0 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc90011f3fbc0 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x17b0 kernel/workqueue.c:3321
 #2: ffffffff8f51d808 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #2: ffffffff8f51d808 (rtnl_mutex){+.+.}-{4:4}, at: addrconf_verify_work+0x19/0x30 net/ipv6/addrconf.c:4738
2 locks held by syz-executor/21013:
 #0: ffffffff8eca4d60 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8eca4d60 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_read_lock include/linux/rcupdate.h:841 [inline]
 #0: ffffffff8eca4d60 (&ops->srcu#2){.+.+}-{0:0}, at: rtnl_link_ops_get+0x23/0x250 net/core/rtnetlink.c:570
 #1: ffffffff8f51d808 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:80 [inline]
 #1: ffffffff8f51d808 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:341 [inline]
 #1: ffffffff8f51d808 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0x8db/0x1c70 net/core/rtnetlink.c:4054
2 locks held by syz.6.1914/21469:
 #0: ffff88806954c6d0 (nlk_cb_mutex-ROUTE){+.+.}-{4:4}, at: __netlink_dump_start+0xfe/0x7e0 net/netlink/af_netlink.c:2388
 #1: ffffffff8f51d808 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:80 [inline]
 #1: ffffffff8f51d808 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_dumpit+0x92/0x200 net/core/rtnetlink.c:6812
1 lock held by syz.5.1925/21525:
 #0: ffffffff8f51d808 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:80 [inline]
 #0: ffffffff8f51d808 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:341 [inline]
 #0: ffffffff8f51d808 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0x8db/0x1c70 net/core/rtnetlink.c:4054

=============================================

NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 31 Comm: khungtaskd Not tainted 6.16.0-rc5-syzkaller-00038-g733923397fd9 #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
Call Trace:
 <TASK>
 dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x39e/0x3d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x17a/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:158 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:307 [inline]
 watchdog+0xfee/0x1030 kernel/hung_task.c:470
 kthread+0x711/0x8a0 kernel/kthread.c:464
 ret_from_fork+0x3fc/0x770 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 5915 Comm: kworker/1:3 Not tainted 6.16.0-rc5-syzkaller-00038-g733923397fd9 #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 05/07/2025
Workqueue: events legacy_dvb_usb_read_remote_control
RIP: 0010:io_serial_out+0x7c/0xc0 drivers/tty/serial/8250/8250_port.c:416
Code: ae 78 fc 44 89 f9 d3 e5 49 83 c6 40 4c 89 f0 48 c1 e8 03 42 80 3c 20 00 74 08 4c 89 f7 e8 2c 31 dc fc 41 03 2e 89 d8 89 ea ee <5b> 41 5c 41 5e 41 5f 5d e9 02 c9 c8 fb cc 44 89 f9 80 e1 07 38 c1
RSP: 0018:ffffc900030c7330 EFLAGS: 00000002
RAX: 000000000000005b RBX: 000000000000005b RCX: 0000000000000000
RDX: 00000000000003f8 RSI: 0000000000000000 RDI: 0000000000000020
RBP: 00000000000003f8 R08: ffff888024500237 R09: 1ffff110048a0046
R10: dffffc0000000000 R11: ffffffff85477780 R12: dffffc0000000000
R13: ffffffff99af9881 R14: ffffffff99dfe6e0 R15: 0000000000000000
FS:  0000000000000000(0000) GS:ffff888125d1b000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00005555569cf608 CR3: 0000000037e12000 CR4: 0000000000350ef0
Call Trace:
 <TASK>
 serial_port_out include/linux/serial_core.h:798 [inline]
 serial8250_console_putchar drivers/tty/serial/8250/8250_port.c:3306 [inline]
 serial8250_console_fifo_write drivers/tty/serial/8250/8250_port.c:-1 [inline]
 serial8250_console_write+0x1410/0x1ba0 drivers/tty/serial/8250/8250_port.c:3456
 console_emit_next_record kernel/printk/printk.c:3138 [inline]
 console_flush_all+0x728/0xc40 kernel/printk/printk.c:3226
 __console_flush_and_unlock kernel/printk/printk.c:3285 [inline]
 console_unlock+0xc4/0x270 kernel/printk/printk.c:3325
 vprintk_emit+0x5b7/0x7a0 kernel/printk/printk.c:2450
 _printk+0xcf/0x120 kernel/printk/printk.c:2475
 m920x_read drivers/media/usb/dvb-usb/m920x.c:40 [inline]
 m920x_rc_query+0x2f6/0x830 drivers/media/usb/dvb-usb/m920x.c:-1
 legacy_dvb_usb_read_remote_control+0x100/0x470 drivers/media/usb/dvb-usb/dvb-usb-remote.c:123
 process_one_work kernel/workqueue.c:3238 [inline]
 process_scheduled_works+0xae1/0x17b0 kernel/workqueue.c:3321
 worker_thread+0x8a0/0xda0 kernel/workqueue.c:3402
 kthread+0x711/0x8a0 kernel/kthread.c:464
 ret_from_fork+0x3fc/0x770 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>

Crashes (13):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2025/07/10 02:04 upstream 733923397fd9 f4e5e155 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in anon_pipe_write
2025/05/17 04:09 upstream 3c21441eeffc f41472b0 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in anon_pipe_write
2025/05/05 18:56 upstream 92a09c47464d 6ca47dd8 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in anon_pipe_write
2025/04/27 22:11 upstream 5bc1018675ec c6b4fb39 .config console log report syz / log C [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in anon_pipe_write
2025/04/27 17:37 upstream 5bc1018675ec c6b4fb39 .config console log report syz / log C [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in anon_pipe_write
2025/04/27 12:49 upstream 5bc1018675ec c6b4fb39 .config console log report syz / log C [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in anon_pipe_write
2025/04/27 08:02 upstream 5bc1018675ec c6b4fb39 .config console log report syz / log C [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in anon_pipe_write
2025/04/27 03:32 upstream 5bc1018675ec c6b4fb39 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in anon_pipe_write
2025/06/13 11:22 linux-next bc6e0ba6c9ba 98683f8f .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in anon_pipe_write
2025/06/14 02:34 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci 39dfc971e42d 0e8da31f .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-gce-arm64 INFO: task hung in anon_pipe_write
2025/06/08 15:06 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci d7fa1af5b33e 4826c28e .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-gce-arm64 INFO: task hung in anon_pipe_write
2025/05/22 20:33 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci d7fa1af5b33e 0919b50b .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-gce-arm64 INFO: task hung in anon_pipe_write
2025/05/06 21:23 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci e0f4c8dd9d2d ae98e6b9 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-gce-arm64 INFO: task hung in anon_pipe_write
* Struck through repros no longer work on HEAD.