==================================================================
BUG: KASAN: slab-use-after-free in __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:132 [inline]
BUG: KASAN: slab-use-after-free in _raw_spin_lock_irqsave+0x40/0x60 kernel/locking/spinlock.c:162
Read of size 1 at addr ffff88804fed98d8 by task jfsCommit/111

CPU: 1 UID: 0 PID: 111 Comm: jfsCommit Not tainted syzkaller #0 PREEMPT(full)
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2026
Call Trace:
 dump_stack_lvl+0xe8/0x150 lib/dump_stack.c:120
 print_address_description mm/kasan/report.c:378 [inline]
 print_report+0xba/0x230 mm/kasan/report.c:482
 kasan_report+0x117/0x150 mm/kasan/report.c:595
 __kasan_check_byte+0x2a/0x40 mm/kasan/common.c:574
 kasan_check_byte include/linux/kasan.h:402 [inline]
 lock_acquire+0x79/0x2e0 kernel/locking/lockdep.c:5842
 __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:132 [inline]
 _raw_spin_lock_irqsave+0x40/0x60 kernel/locking/spinlock.c:162
 __mutex_lock_common kernel/locking/mutex.c:628 [inline]
 __mutex_lock+0x3cb/0x1300 kernel/locking/mutex.c:776
 jfs_syncpt+0x25/0x90 fs/jfs/jfs_logmgr.c:1039
 txEnd+0x2e5/0x530 fs/jfs/jfs_txnmgr.c:550
 txLazyCommit fs/jfs/jfs_txnmgr.c:2685 [inline]
 jfs_lazycommit+0x5b8/0xaa0 fs/jfs/jfs_txnmgr.c:2734
 kthread+0x388/0x470 kernel/kthread.c:436
 ret_from_fork+0x51e/0xb90 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245

Allocated by task 6929:
 kasan_save_stack mm/kasan/common.c:57 [inline]
 kasan_save_track+0x3e/0x80 mm/kasan/common.c:78
 poison_kmalloc_redzone mm/kasan/common.c:398 [inline]
 __kasan_kmalloc+0x93/0xb0 mm/kasan/common.c:415
 kasan_kmalloc include/linux/kasan.h:263 [inline]
 __kmalloc_cache_noprof+0x31c/0x660 mm/slub.c:5380
 kmalloc_noprof include/linux/slab.h:950 [inline]
 kzalloc_noprof include/linux/slab.h:1188 [inline]
 open_inline_log fs/jfs/jfs_logmgr.c:1159 [inline]
 lmLogOpen+0x2d1/0xfa0 fs/jfs/jfs_logmgr.c:1069
 jfs_mount_rw+0xee/0x670 fs/jfs/jfs_mount.c:257
 jfs_fill_super+0x754/0xd80 fs/jfs/super.c:532
 get_tree_bdev_flags+0x431/0x4f0 fs/super.c:1694
 vfs_get_tree+0x92/0x2a0 fs/super.c:1754
 fc_mount fs/namespace.c:1193 [inline]
 do_new_mount_fc fs/namespace.c:3763 [inline]
 do_new_mount+0x341/0xd30 fs/namespace.c:3839
 do_mount fs/namespace.c:4172 [inline]
 __do_sys_mount fs/namespace.c:4361 [inline]
 __se_sys_mount+0x31d/0x420 fs/namespace.c:4338
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

Freed by task 5820:
 kasan_save_stack mm/kasan/common.c:57 [inline]
 kasan_save_track+0x3e/0x80 mm/kasan/common.c:78
 kasan_save_free_info+0x46/0x50 mm/kasan/generic.c:584
 poison_slab_object mm/kasan/common.c:253 [inline]
 __kasan_slab_free+0x5c/0x80 mm/kasan/common.c:285
 kasan_slab_free include/linux/kasan.h:235 [inline]
 slab_free_hook mm/slub.c:2685 [inline]
 slab_free mm/slub.c:6165 [inline]
 kfree+0x1c1/0x630 mm/slub.c:6483
 lmLogClose+0x297/0x520 fs/jfs/jfs_logmgr.c:-1
 jfs_umount+0x2ef/0x3c0 fs/jfs/jfs_umount.c:114
 jfs_put_super+0x8c/0x190 fs/jfs/super.c:194
 generic_shutdown_super+0x13d/0x2d0 fs/super.c:646
 kill_block_super+0x44/0x90 fs/super.c:1725
 deactivate_locked_super+0xbc/0x130 fs/super.c:476
 cleanup_mnt+0x437/0x4d0 fs/namespace.c:1312
 task_work_run+0x1d9/0x270 kernel/task_work.c:233
 resume_user_mode_work include/linux/resume_user_mode.h:50 [inline]
 __exit_to_user_mode_loop kernel/entry/common.c:67 [inline]
 exit_to_user_mode_loop+0xed/0x480 kernel/entry/common.c:98
 __exit_to_user_mode_prepare include/linux/irq-entry-common.h:226 [inline]
 syscall_exit_to_user_mode_prepare include/linux/irq-entry-common.h:256 [inline]
 syscall_exit_to_user_mode include/linux/entry-common.h:325 [inline]
 do_syscall_64+0x32d/0xf80 arch/x86/entry/syscall_64.c:100
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

The buggy address belongs to the object at ffff88804fed9800
 which belongs to the cache kmalloc-1k of size 1024
The buggy address is located 216 bytes inside of
 freed 1024-byte region [ffff88804fed9800, ffff88804fed9c00)

The buggy address belongs to the physical page:
page: refcount:0 mapcount:0 mapping:0000000000000000 index:0xffff88804fede000 pfn:0x4fed8
head: order:3 mapcount:0 entire_mapcount:0 nr_pages_mapped:0 pincount:0
flags: 0xfff00000000240(workingset|head|node=0|zone=1|lastcpupid=0x7ff)
page_type: f5(slab)
raw: 00fff00000000240 ffff88813fea5dc0 ffffea0000bd7610 ffffea00015bfa10
raw: ffff88804fede000 000000080010000d 00000000f5000000 0000000000000000
head: 00fff00000000240 ffff88813fea5dc0 ffffea0000bd7610 ffffea00015bfa10
head: ffff88804fede000 000000080010000d 00000000f5000000 0000000000000000
head: 00fff00000000003 ffffea00013fb601 00000000ffffffff 00000000ffffffff
head: ffffffffffffffff 0000000000000000 00000000ffffffff 0000000000000008
page dumped because: kasan: bad access detected
page_owner tracks the page as allocated
page last allocated via order 3, migratetype Unmovable, gfp_mask 0xd2820(GFP_ATOMIC|__GFP_NOWARN|__GFP_NORETRY|__GFP_COMP|__GFP_NOMEMALLOC), pid 84, tgid 84 (kworker/u8:5), ts 106173410536, free_ts 102958490027
 set_page_owner include/linux/page_owner.h:32 [inline]
 post_alloc_hook+0x231/0x280 mm/page_alloc.c:1889
 prep_new_page mm/page_alloc.c:1897 [inline]
 get_page_from_freelist+0x24dc/0x2580 mm/page_alloc.c:3962
 __alloc_frozen_pages_noprof+0x18d/0x380 mm/page_alloc.c:5250
 alloc_slab_page mm/slub.c:3292 [inline]
 allocate_slab+0x77/0x660 mm/slub.c:3481
 new_slab mm/slub.c:3539 [inline]
 refill_objects+0x331/0x3c0 mm/slub.c:7175
 refill_sheaf mm/slub.c:2812 [inline]
 __pcs_replace_empty_main+0x2e6/0x730 mm/slub.c:4615
 alloc_from_pcs mm/slub.c:4717 [inline]
 slab_alloc_node mm/slub.c:4851 [inline]
 __do_kmalloc_node mm/slub.c:5259 [inline]
 __kmalloc_noprof+0x474/0x760 mm/slub.c:5272
 kmalloc_noprof include/linux/slab.h:954 [inline]
 kzalloc_noprof include/linux/slab.h:1188 [inline]
 ieee802_11_parse_elems_full+0x159/0x2ab0 net/mac80211/parse.c:1051
 ieee802_11_parse_elems net/mac80211/ieee80211_i.h:2480 [inline]
 ieee80211_rx_mgmt_probe_beacon net/mac80211/ibss.c:1571 [inline]
 ieee80211_ibss_rx_queued_mgmt+0x4ca/0x2cd0 net/mac80211/ibss.c:1602
 ieee80211_iface_process_skb net/mac80211/iface.c:1748 [inline]
 ieee80211_iface_work+0x84e/0x1340 net/mac80211/iface.c:1802
 cfg80211_wiphy_work+0x2ab/0x4a0 net/wireless/core.c:440
 process_one_work kernel/workqueue.c:3276 [inline]
 process_scheduled_works+0xb6e/0x18c0 kernel/workqueue.c:3359
 worker_thread+0xa53/0xfc0 kernel/workqueue.c:3440
 kthread+0x388/0x470 kernel/kthread.c:436
 ret_from_fork+0x51e/0xb90 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
page last free pid 6103 tgid 6100 stack trace:
 reset_page_owner include/linux/page_owner.h:25 [inline]
 __free_pages_prepare mm/page_alloc.c:1433 [inline]
 free_unref_folios+0xed5/0x16d0 mm/page_alloc.c:3040
 folios_put_refs+0x789/0x8d0 mm/swap.c:1002
 folio_batch_release include/linux/pagevec.h:101 [inline]
 shmem_undo_range+0x52c/0x1660 mm/shmem.c:1149
 shmem_truncate_range mm/shmem.c:1277 [inline]
 shmem_evict_inode+0x240/0x9e0 mm/shmem.c:1407
 evict+0x61e/0xb10 fs/inode.c:846
 __dentry_kill+0x1a2/0x5e0 fs/dcache.c:670
 finish_dput+0xc9/0x480 fs/dcache.c:879
 __fput+0x691/0xa70 fs/file_table.c:477
 task_work_run+0x1d9/0x270 kernel/task_work.c:233
 resume_user_mode_work include/linux/resume_user_mode.h:50 [inline]
 __exit_to_user_mode_loop kernel/entry/common.c:67 [inline]
 exit_to_user_mode_loop+0xed/0x480 kernel/entry/common.c:98
 __exit_to_user_mode_prepare include/linux/irq-entry-common.h:226 [inline]
 syscall_exit_to_user_mode_prepare include/linux/irq-entry-common.h:256 [inline]
 syscall_exit_to_user_mode include/linux/entry-common.h:325 [inline]
 do_syscall_64+0x32d/0xf80 arch/x86/entry/syscall_64.c:100
 entry_SYSCALL_64_after_hwframe+0x77/0x7f

Memory state around the buggy address:
 ffff88804fed9780: fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc fc
 ffff88804fed9800: fa fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
>ffff88804fed9880: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
                                                    ^
 ffff88804fed9900: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
 ffff88804fed9980: fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb fb
==================================================================