INFO: task syz.1.830:9390 blocked for more than 143 seconds.
      Not tainted 6.13.0-rc3-next-20241220-syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.1.830 state:D stack:21632 pid:9390 tgid:9389 ppid:5833 flags:0x00004006
Call Trace:
 context_switch kernel/sched/core.c:5371 [inline]
 __schedule+0x189f/0x4c80 kernel/sched/core.c:6758
 __schedule_loop kernel/sched/core.c:6835 [inline]
 schedule+0x14b/0x320 kernel/sched/core.c:6850
 io_schedule+0x8d/0x110 kernel/sched/core.c:7683
 folio_wait_bit_common+0x839/0xee0 mm/filemap.c:1309
 folio_wait_locked include/linux/pagemap.h:1247 [inline]
 gfs2_jhead_process_page+0x16e/0x510 fs/gfs2/lops.c:470
 gfs2_find_jhead+0xd68/0xf10 fs/gfs2/lops.c:587
 check_journal_clean+0x195/0x360 fs/gfs2/util.c:76
 init_journal+0x1881/0x2410 fs/gfs2/ops_fstype.c:806
 init_inodes+0xdc/0x320 fs/gfs2/ops_fstype.c:864
 gfs2_fill_super+0x1bd1/0x24d0 fs/gfs2/ops_fstype.c:1249
 get_tree_bdev_flags+0x48c/0x5c0 fs/super.c:1636
 gfs2_get_tree+0x54/0x220 fs/gfs2/ops_fstype.c:1330
 vfs_get_tree+0x90/0x2b0 fs/super.c:1814
 do_new_mount+0x2be/0xb40 fs/namespace.c:3556
 do_mount fs/namespace.c:3896 [inline]
 __do_sys_mount fs/namespace.c:4107 [inline]
 __se_sys_mount+0x2d6/0x3c0 fs/namespace.c:4084
 do_syscall_x64 arch/x86/entry/common.c:52 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/common.c:83
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fe7009874ca
RSP: 002b:00007fe7016dae68 EFLAGS: 00000246 ORIG_RAX: 00000000000000a5
RAX: ffffffffffffffda RBX: 00007fe7016daef0 RCX: 00007fe7009874ca
RDX: 000000002001f680 RSI: 000000002001f6c0 RDI: 00007fe7016daeb0
RBP: 000000002001f680 R08: 00007fe7016daef0 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 000000002001f6c0
R13: 00007fe7016daeb0 R14: 000000000001f740 R15: 0000000020000200

Showing all locks held in the system:
1 lock held by khungtaskd/30:
 #0: ffffffff8e937da0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:337 [inline]
 #0: ffffffff8e937da0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:849 [inline]
 #0: ffffffff8e937da0 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x55/0x2a0 kernel/locking/lockdep.c:6744
3 locks held by kworker/u8:2/35:
 #0: ffff88814d7ec148 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3204 [inline]
 #0: ffff88814d7ec148 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1840 kernel/workqueue.c:3310
 #1: ffffc90000ab7c60 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3205 [inline]
 #1: ffffc90000ab7c60 ((work_completion)(&(&ifa->dad_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1840 kernel/workqueue.c:3310
 #2: ffffffff8fcbab48 (rtnl_mutex){+.+.}-{4:4}, at: addrconf_dad_work+0xd0/0x16f0 net/ipv6/addrconf.c:4215
3 locks held by kworker/u8:7/3460:
 #0: ffff88801ac89148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3204 [inline]
 #0: ffff88801ac89148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1840 kernel/workqueue.c:3310
 #1: ffffc9000c877c60 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3205 [inline]
 #1: ffffc9000c877c60 ((linkwatch_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1840 kernel/workqueue.c:3310
 #2: ffffffff8fcbab48 (rtnl_mutex){+.+.}-{4:4}, at: linkwatch_event+0xe/0x60 net/core/link_watch.c:281
2 locks held by getty/5595:
 #0: ffff8880313470a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc9000332b2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x6a6/0x1e00 drivers/tty/n_tty.c:2211
4 locks held by kworker/u8:12/8005:
 #0: ffff88801baf5948 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3204 [inline]
 #0: ffff88801baf5948 ((wq_completion)netns){+.+.}-{0:0}, at: process_scheduled_works+0x93b/0x1840 kernel/workqueue.c:3310
 #1: ffffc9000c597c60 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3205 [inline]
 #1: ffffc9000c597c60 (net_cleanup_work){+.+.}-{0:0}, at: process_scheduled_works+0x976/0x1840 kernel/workqueue.c:3310
 #2: ffffffff8fcae690 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0x16a/0xd50 net/core/net_namespace.c:602
 #3: ffffffff8fcbab48 (rtnl_mutex){+.+.}-{4:4}, at: default_device_exit_batch+0xe9/0xaa0 net/core/dev.c:12059
1 lock held by syz.1.830/9390:
 #0: ffff8880313d80e0 (&type->s_umount_key#115/1){+.+.}-{4:4}, at: alloc_super+0x221/0x9d0 fs/super.c:344
2 locks held by syz-executor/10992:
 #0: ffffffff901c8628 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_lock_acquire include/linux/rcupdate.h:337 [inline]
 #0: ffffffff901c8628 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_read_lock include/linux/rcupdate.h:849 [inline]
 #0: ffffffff901c8628 (&ops->srcu#2){.+.+}-{0:0}, at: rtnl_link_ops_get+0x22/0x250 net/core/rtnetlink.c:555
 #1: ffffffff8fcbab48 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:79 [inline]
 #1: ffffffff8fcbab48 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:326 [inline]
 #1: ffffffff8fcbab48 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0xce2/0x2210 net/core/rtnetlink.c:4011
2 locks held by syz-executor/10995:
 #0: ffffffff901aece0 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_lock_acquire include/linux/rcupdate.h:337 [inline]
 #0: ffffffff901aece0 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_read_lock include/linux/rcupdate.h:849 [inline]
 #0: ffffffff901aece0 (&ops->srcu#2){.+.+}-{0:0}, at: rtnl_link_ops_get+0x22/0x250 net/core/rtnetlink.c:555
 #1: ffffffff8fcbab48 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:79 [inline]
 #1: ffffffff8fcbab48 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:326 [inline]
 #1: ffffffff8fcbab48 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0xce2/0x2210 net/core/rtnetlink.c:4011
1 lock held by syz.2.1216/11007:
 #0: ffffffff8fcbab48 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:79 [inline]
 #0: ffffffff8fcbab48 (rtnl_mutex){+.+.}-{4:4}, at: rtnetlink_rcv_msg+0x6e6/0xcf0 net/core/rtnetlink.c:6908
1 lock held by syz.6.1217/11009:
 #0: ffffffff8e93d278 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock kernel/rcu/tree_exp.h:334 [inline]
 #0: ffffffff8e93d278 (rcu_state.exp_mutex){+.+.}-{4:4}, at: synchronize_rcu_expedited+0x451/0x830 kernel/rcu/tree_exp.h:996
1 lock held by syz.4.1219/11016:
 #0: ffffffff8fcbab48 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:79 [inline]
 #0: ffffffff8fcbab48 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:128 [inline]
 #0: ffffffff8fcbab48 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_dellink+0x394/0x8d0 net/core/rtnetlink.c:3510

=============================================

NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 30 Comm: khungtaskd Not tainted 6.13.0-rc3-next-20241220-syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 09/13/2024
Call Trace:
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x49c/0x4d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x198/0x320 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:162 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:234 [inline]
 watchdog+0xff6/0x1040 kernel/hung_task.c:397
 kthread+0x7a9/0x920 kernel/kthread.c:464
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:244
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1 skipped: idling at native_safe_halt arch/x86/include/asm/irqflags.h:48 [inline]
NMI backtrace for cpu 1 skipped: idling at arch_safe_halt arch/x86/include/asm/irqflags.h:106 [inline]
NMI backtrace for cpu 1 skipped: idling at acpi_safe_halt+0x21/0x30 drivers/acpi/processor_idle.c:111
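
For reference, the blocked task's register dump corresponds to a mount(2) syscall (ORIG_RAX 0xa5 is __NR_mount on x86-64) that reaches gfs2_fill_super() and then waits in gfs2_find_jhead(). Below is a minimal, hypothetical sketch of the kind of call that takes this path; the device path, mount point, and options are illustrative assumptions and are not the syzkaller reproducer.

/*
 * Hypothetical illustration only: a gfs2 mount() call of the kind that
 * takes the do_new_mount() -> gfs2_get_tree() -> gfs2_fill_super() path
 * shown blocked in the trace above. Paths are made up; this is not the
 * syzkaller reproducer.
 */
#include <stdio.h>
#include <sys/mount.h>

int main(void)
{
	/* mount(2) is syscall 165 (0xa5) on x86-64, matching ORIG_RAX above. */
	if (mount("/dev/loop0", "/mnt/gfs2", "gfs2", 0, NULL) != 0) {
		perror("mount");
		return 1;
	}
	return 0;
}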