==================================================================
BUG: KASAN: use-after-free in instrument_atomic_read_write include/linux/instrumented.h:101 [inline]
BUG: KASAN: use-after-free in atomic64_try_cmpxchg_acquire include/asm-generic/atomic-instrumented.h:1515 [inline]
BUG: KASAN: use-after-free in atomic_long_try_cmpxchg_acquire include/asm-generic/atomic-long.h:443 [inline]
BUG: KASAN: use-after-free in __down_read_trylock kernel/locking/rwsem.c:1393 [inline]
BUG: KASAN: use-after-free in down_read_trylock+0xba/0x1d0 kernel/locking/rwsem.c:1553
Write of size 8 at addr ffff888110282070 by task kworker/u4:5/20001

CPU: 0 PID: 20001 Comm: kworker/u4:5 Not tainted 5.10.209-syzkaller-00435-gdd976ecce2ce #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/25/2024
Workqueue: writeback wb_workfn (flush-7:4)
Call Trace:
 __dump_stack lib/dump_stack.c:77 [inline]
 dump_stack_lvl+0x1e2/0x24b lib/dump_stack.c:118
 print_address_description+0x81/0x3b0 mm/kasan/report.c:248
 __kasan_report mm/kasan/report.c:435 [inline]
 kasan_report+0x179/0x1c0 mm/kasan/report.c:452
 kasan_check_range+0x293/0x2a0 mm/kasan/generic.c:189
 __kasan_check_write+0x14/0x20 mm/kasan/shadow.c:37
 instrument_atomic_read_write include/linux/instrumented.h:101 [inline]
 atomic64_try_cmpxchg_acquire include/asm-generic/atomic-instrumented.h:1515 [inline]
 atomic_long_try_cmpxchg_acquire include/asm-generic/atomic-long.h:443 [inline]
 __down_read_trylock kernel/locking/rwsem.c:1393 [inline]
 down_read_trylock+0xba/0x1d0 kernel/locking/rwsem.c:1553
 trylock_super+0x1f/0xf0 fs/super.c:418
 __writeback_inodes_wb fs/fs-writeback.c:1795 [inline]
 wb_writeback+0x49d/0xc60 fs/fs-writeback.c:1910
 wb_check_old_data_flush fs/fs-writeback.c:2012 [inline]
 wb_do_writeback fs/fs-writeback.c:2065 [inline]
 wb_workfn+0xb3d/0x1110 fs/fs-writeback.c:2094
 process_one_work+0x6dc/0xbd0 kernel/workqueue.c:2301
 worker_thread+0xaea/0x1510 kernel/workqueue.c:2447
 kthread+0x34b/0x3d0 kernel/kthread.c:313
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:299

Allocated by task 12028:
 kasan_save_stack mm/kasan/common.c:38 [inline]
 kasan_set_track mm/kasan/common.c:45 [inline]
 set_alloc_info mm/kasan/common.c:430 [inline]
 ____kasan_kmalloc+0xdb/0x110 mm/kasan/common.c:509
 __kasan_kmalloc+0x9/0x10 mm/kasan/common.c:518
 kasan_kmalloc include/linux/kasan.h:254 [inline]
 __kmalloc+0x1aa/0x330 mm/slub.c:4033
 kmalloc include/linux/slab.h:557 [inline]
 sk_prot_alloc+0xbe/0x370 net/core/sock.c:1704
 sk_alloc+0x38/0x4d0 net/core/sock.c:1762
 __netlink_create net/netlink/af_netlink.c:639 [inline]
 netlink_create+0x3e9/0x640 net/netlink/af_netlink.c:702
 __sock_create+0x3a6/0x760 net/socket.c:1436
 sock_create net/socket.c:1487 [inline]
 __sys_socket+0x132/0x370 net/socket.c:1529
 __do_sys_socket net/socket.c:1538 [inline]
 __se_sys_socket net/socket.c:1536 [inline]
 __x64_sys_socket+0x7a/0x90 net/socket.c:1536
 do_syscall_64+0x34/0x70
 entry_SYSCALL_64_after_hwframe+0x61/0xc6

Last potentially related work creation:
 kasan_save_stack+0x3b/0x60 mm/kasan/common.c:38
 __kasan_record_aux_stack+0xd3/0x100 mm/kasan/generic.c:348
 kasan_record_aux_stack_noalloc+0xb/0x10 mm/kasan/generic.c:358
 __call_rcu kernel/rcu/tree.c:2976 [inline]
 call_rcu+0x135/0x11f0 kernel/rcu/tree.c:3050
 netlink_release+0x12df/0x16f0 net/netlink/af_netlink.c:811
 __sock_release net/socket.c:597 [inline]
 sock_close+0xdf/0x270 net/socket.c:1286
 __fput+0x309/0x760 fs/file_table.c:281
 ____fput+0x15/0x20 fs/file_table.c:314
 task_work_run+0x129/0x190 kernel/task_work.c:164
 tracehook_notify_resume include/linux/tracehook.h:188 [inline]
 exit_to_user_mode_loop+0xbf/0xd0 kernel/entry/common.c:172
 exit_to_user_mode_prepare kernel/entry/common.c:199 [inline]
 syscall_exit_to_user_mode+0xc5/0x1d0 kernel/entry/common.c:274
 do_syscall_64+0x40/0x70 arch/x86/entry/common.c:56
 entry_SYSCALL_64_after_hwframe+0x61/0xc6

Second to last potentially related work creation:
 kasan_save_stack+0x3b/0x60 mm/kasan/common.c:38
 __kasan_record_aux_stack+0xd3/0x100 mm/kasan/generic.c:348
 kasan_record_aux_stack_noalloc+0xb/0x10 mm/kasan/generic.c:358
 insert_work+0x56/0x310 kernel/workqueue.c:1352
 __queue_work+0x970/0xd10 kernel/workqueue.c:1518
 queue_work_on+0x105/0x160 kernel/workqueue.c:1545
 queue_work include/linux/workqueue.h:515 [inline]
 schedule_work include/linux/workqueue.h:576 [inline]
 destroy_super_rcu+0xd1/0xe0 fs/super.c:172
 rcu_do_batch+0x597/0xc40 kernel/rcu/tree.c:2494
 rcu_core+0x5ad/0xe40 kernel/rcu/tree.c:2735
 rcu_core_si+0x9/0x10 kernel/rcu/tree.c:2748
 __do_softirq+0x268/0x5bb kernel/softirq.c:309

The buggy address belongs to the object at ffff888110282000
 which belongs to the cache kmalloc-2k of size 2048
The buggy address is located 112 bytes inside of
 2048-byte region [ffff888110282000, ffff888110282800)
The buggy address belongs to the page:
page:ffffea000440a000 refcount:1 mapcount:0 mapping:0000000000000000 index:0xffff888110285000 pfn:0x110280
head:ffffea000440a000 order:3 compound_mapcount:0 compound_pincount:0
flags: 0x4000000000010200(slab|head)
raw: 4000000000010200 ffffea0005753a08 ffffea000476ec08 ffff888100042d80
raw: ffff888110285000 0000000000080003 00000001ffffffff 0000000000000000
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 3, migratetype Unmovable, gfp_mask 0x1d20c0(__GFP_IO|__GFP_FS|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP|__GFP_NOMEMALLOC|__GFP_HARDWALL), pid 6862, ts 292588744536, free_ts 292537390866
 set_page_owner include/linux/page_owner.h:35 [inline]
 post_alloc_hook mm/page_alloc.c:2456 [inline]
 prep_new_page+0x166/0x180 mm/page_alloc.c:2462
 get_page_from_freelist+0x2d8c/0x2f30 mm/page_alloc.c:4254
 __alloc_pages_nodemask+0x435/0xaf0 mm/page_alloc.c:5346
 allocate_slab mm/slub.c:1808 [inline]
 new_slab+0x80/0x400 mm/slub.c:1869
 new_slab_objects mm/slub.c:2627 [inline]
 ___slab_alloc+0x302/0x4b0 mm/slub.c:2791
 __slab_alloc+0x63/0xa0 mm/slub.c:2831
 slab_alloc_node mm/slub.c:2913 [inline]
 slab_alloc mm/slub.c:2955 [inline]
 kmem_cache_alloc_trace+0x1bd/0x2e0 mm/slub.c:2972
 kmalloc include/linux/slab.h:552 [inline]
 kzalloc include/linux/slab.h:664 [inline]
 copy_verifier_state+0x58b/0xbf0 kernel/bpf/verifier.c:913
 push_stack+0x19d/0x4f0 kernel/bpf/verifier.c:989
 check_cond_jmp_op kernel/bpf/verifier.c:8269 [inline]
 do_check+0xc645/0xe8c0 kernel/bpf/verifier.c:10292
 do_check_common+0x946/0x1370 kernel/bpf/verifier.c:12026
 do_check_main kernel/bpf/verifier.c:12089 [inline]
 bpf_check+0xb5b8/0xf2b0 kernel/bpf/verifier.c:12644
 bpf_prog_load kernel/bpf/syscall.c:2233 [inline]
 __do_sys_bpf kernel/bpf/syscall.c:4429 [inline]
 __se_sys_bpf+0x107a2/0x11cb0 kernel/bpf/syscall.c:4385
 __x64_sys_bpf+0x7b/0x90 kernel/bpf/syscall.c:4385
 do_syscall_64+0x34/0x70
 entry_SYSCALL_64_after_hwframe+0x61/0xc6
page last free stack trace:
 reset_page_owner include/linux/page_owner.h:28 [inline]
 free_pages_prepare mm/page_alloc.c:1349 [inline]
 __free_pages_ok+0x82c/0x850 mm/page_alloc.c:1629
 free_the_page+0x76/0x370 mm/page_alloc.c:5407
 __free_pages+0x67/0xc0 mm/page_alloc.c:5416
 __free_slab+0xcf/0x190 mm/slub.c:1894
 free_slab mm/slub.c:1909 [inline]
 discard_slab mm/slub.c:1915 [inline]
 unfreeze_partials+0x15e/0x190 mm/slub.c:2410
 put_cpu_partial+0xbf/0x180 mm/slub.c:2446
 __slab_free+0x2c8/0x3a0 mm/slub.c:3095
 do_slab_free mm/slub.c:3191 [inline]
 ___cache_free+0x111/0x130 mm/slub.c:3210
 qlink_free+0x50/0x90 mm/kasan/quarantine.c:157
 qlist_free_all+0x47/0xb0 mm/kasan/quarantine.c:176
 kasan_quarantine_reduce+0x15a/0x170 mm/kasan/quarantine.c:283
 __kasan_slab_alloc+0x2f/0xe0 mm/kasan/common.c:440
 kasan_slab_alloc include/linux/kasan.h:244 [inline]
 slab_post_alloc_hook+0x61/0x2f0 mm/slab.h:583
 slab_alloc_node mm/slub.c:2947 [inline]
 slab_alloc mm/slub.c:2955 [inline]
 kmem_cache_alloc+0x168/0x2e0 mm/slub.c:2960
 vm_area_dup+0x26/0x270 kernel/fork.c:366
 __split_vma+0xb4/0x3f0 mm/mmap.c:2868

Memory state around the buggy address:
 ffff888110281f00: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
 ffff888110281f80: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
>ffff888110282000: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                                                             ^
 ffff888110282080: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff888110282100: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================
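
Reading the stacks together: the writeback worker (wb_workfn -> trylock_super) performs the rwsem compare-and-exchange in __down_read_trylock() on a super_block whose backing slab object has already been freed, so an 8-byte atomic write lands 112 bytes into a freed kmalloc-2k object (most recently reallocated as a netlink socket, per the allocation stack). Below is a minimal userspace sketch of that faulting pattern, offered only as an analogue under that reading; it is not kernel code, and the names toy_sb, count, and toy_down_read_trylock are invented for illustration. Built with -fsanitize=address, it reports the userspace counterpart of this splat (heap-use-after-free) deterministically.

/*
 * Illustrative analogue only -- NOT kernel code; all names are made up.
 * __down_read_trylock() attempts an atomic cmpxchg on the rwsem's
 * counter word; if the object embedding the rwsem has been freed, that
 * cmpxchg is an 8-byte write into freed memory, which is what KASAN
 * flags above.
 */
#include <stdatomic.h>
#include <stdlib.h>

/* stands in for super_block; "count" plays the role of s_umount's counter */
struct toy_sb {
	atomic_long count;
};

/* mirrors the shape of __down_read_trylock(): acquire via cmpxchg 0 -> 1 */
static int toy_down_read_trylock(struct toy_sb *sb)
{
	long old = 0;

	/* the 8-byte atomic write reported above */
	return atomic_compare_exchange_strong(&sb->count, &old, 1);
}

int main(void)
{
	struct toy_sb *sb = malloc(sizeof(*sb));

	atomic_init(&sb->count, 0);

	/* teardown side: the object is freed (in the report, the
	 * destruction is deferred through RCU/workqueue callbacks) ... */
	free(sb);

	/* ... while the "writeback worker" side still holds a stale
	 * pointer and tries to take the lock: use-after-free. In the
	 * kernel the two sides race; here the free is ordered first so
	 * the sanitizer report is deterministic. */
	(void)toy_down_read_trylock(sb);

	return 0;
}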