syzbot


possible deadlock in get_page_from_freelist (3)

Status: upstream: reported on 2026/01/13 22:48
Reported-by: syzbot+cdf761f82f0238732c3e@syzkaller.appspotmail.com
First crash: 15d, last: 8d06h
Similar bugs (4)
Kernel     | Title                                            | Rank | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
linux-6.1  | possible deadlock in get_page_from_freelist (2)  | 4    | -     | -            | -          | 1     | 203d | 203d     | 0/3     | auto-obsoleted due to no activity on 2025/10/17 20:19
linux-6.1  | possible deadlock in get_page_from_freelist      | 4    | -     | -            | -          | 1     | 312d | 312d     | 0/3     | auto-obsoleted due to no activity on 2025/06/30 21:13
linux-5.15 | possible deadlock in get_page_from_freelist      | 4    | -     | -            | -          | 1     | 665d | 665d     | 0/3     | auto-obsoleted due to no activity on 2024/07/13 02:54
upstream   | possible deadlock in get_page_from_freelist bpf  | 4    | -     | -            | -          | 18    | 433d | 654d     | 0/29    | auto-obsoleted due to no activity on 2025/03/01 07:28

Sample crash report:
======================================================
WARNING: possible circular locking dependency detected
syzkaller #0 Not tainted
------------------------------------------------------
syz.9.1395/9327 is trying to acquire lock:
ffff88813fffacd8 (&zone->lock){-.-.}-{2:2}, at: rmqueue_buddy mm/page_alloc.c:3724 [inline]
ffff88813fffacd8 (&zone->lock){-.-.}-{2:2}, at: rmqueue mm/page_alloc.c:3877 [inline]
ffff88813fffacd8 (&zone->lock){-.-.}-{2:2}, at: get_page_from_freelist+0x90d/0x1ab0 mm/page_alloc.c:4325

but task is already holding lock:
ffff88805744c238 (&trie->lock){-.-.}-{2:2}, at: trie_update_elem+0xc7/0xe80 kernel/bpf/lpm_trie.c:335

which lock already depends on the new lock.


the existing dependency chain (in reverse order) is:

-> #1 (&trie->lock){-.-.}-{2:2}:
       __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
       _raw_spin_lock_irqsave+0xb0/0x100 kernel/locking/spinlock.c:162
       trie_delete_elem+0x90/0x690 kernel/bpf/lpm_trie.c:467
       0xffffffffa0000a36
       bpf_dispatcher_nop_func include/linux/bpf.h:1012 [inline]
       __bpf_prog_run include/linux/filter.h:607 [inline]
       bpf_prog_run include/linux/filter.h:614 [inline]
       __bpf_trace_run kernel/trace/bpf_trace.c:2285 [inline]
       bpf_trace_run2+0x1d5/0x3e0 kernel/trace/bpf_trace.c:2324
       trace_contention_end+0x13f/0x190 include/trace/events/lock.h:122
       __pv_queued_spin_lock_slowpath+0x7e8/0x9c0 kernel/locking/qspinlock.c:560
       pv_queued_spin_lock_slowpath arch/x86/include/asm/paravirt.h:591 [inline]
       queued_spin_lock_slowpath+0x43/0x50 arch/x86/include/asm/qspinlock.h:51
       queued_spin_lock include/asm-generic/qspinlock.h:114 [inline]
       do_raw_spin_lock+0x265/0x2f0 kernel/locking/spinlock_debug.c:115
       __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:111 [inline]
       _raw_spin_lock_irqsave+0xbc/0x100 kernel/locking/spinlock.c:162
       free_pcppages_bulk+0x61/0x690 mm/page_alloc.c:1566
       free_unref_page+0x18d/0x3f0 mm/page_alloc.c:3502
       qlink_free mm/kasan/quarantine.c:168 [inline]
       qlist_free_all+0x76/0xe0 mm/kasan/quarantine.c:187
       kasan_quarantine_reduce+0x144/0x160 mm/kasan/quarantine.c:294
       __kasan_slab_alloc+0x1e/0x80 mm/kasan/common.c:306
       kasan_slab_alloc include/linux/kasan.h:201 [inline]
       slab_post_alloc_hook+0x4b/0x480 mm/slab.h:737
       slab_alloc_node mm/slub.c:3359 [inline]
       slab_alloc mm/slub.c:3367 [inline]
       __kmem_cache_alloc_lru mm/slub.c:3374 [inline]
       kmem_cache_alloc+0x123/0x2f0 mm/slub.c:3383
       vm_area_alloc+0x20/0xe0 kernel/fork.c:459
       __mmap_region mm/mmap.c:2753 [inline]
       mmap_region+0xc18/0x1ca0 mm/mmap.c:2916
       do_mmap+0x964/0xfd0 mm/mmap.c:1436
       vm_mmap_pgoff+0x1c1/0x2d0 mm/util.c:520
       ksys_mmap_pgoff+0x516/0x6f0 mm/mmap.c:1482
       do_syscall_x64 arch/x86/entry/common.c:46 [inline]
       do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:76
       entry_SYSCALL_64_after_hwframe+0x68/0xd2

-> #0 (&zone->lock){-.-.}-{2:2}:
       check_prev_add kernel/locking/lockdep.c:3090 [inline]
       check_prevs_add kernel/locking/lockdep.c:3209 [inline]
       validate_chain kernel/locking/lockdep.c:3825 [inline]
       __lock_acquire+0x2d07/0x7d10 kernel/locking/lockdep.c:5049
       lock_acquire+0x1bb/0x4a0 kernel/locking/lockdep.c:5662
       __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
       _raw_spin_lock_irqsave+0xb0/0x100 kernel/locking/spinlock.c:162
       rmqueue_buddy mm/page_alloc.c:3724 [inline]
       rmqueue mm/page_alloc.c:3877 [inline]
       get_page_from_freelist+0x90d/0x1ab0 mm/page_alloc.c:4325
       __alloc_pages+0x1ec/0x4f0 mm/page_alloc.c:5614
       __alloc_pages_node include/linux/gfp.h:237 [inline]
       alloc_pages_node include/linux/gfp.h:260 [inline]
       __kmalloc_large_node+0x8c/0x1e0 mm/slab_common.c:1077
       __do_kmalloc_node mm/slab_common.c:924 [inline]
       __kmalloc_node+0x10e/0x240 mm/slab_common.c:943
       kmalloc_node include/linux/slab.h:589 [inline]
       bpf_map_kmalloc_node+0xb8/0x1a0 kernel/bpf/syscall.c:454
       lpm_trie_node_alloc kernel/bpf/lpm_trie.c:291 [inline]
       trie_update_elem+0x163/0xe80 kernel/bpf/lpm_trie.c:338
       bpf_map_update_value+0x59e/0x670 kernel/bpf/syscall.c:228
       map_update_elem+0x4d7/0x680 kernel/bpf/syscall.c:1473
       __sys_bpf+0x4ec/0x780 kernel/bpf/syscall.c:5018
       __do_sys_bpf kernel/bpf/syscall.c:5134 [inline]
       __se_sys_bpf kernel/bpf/syscall.c:5132 [inline]
       __x64_sys_bpf+0x78/0x90 kernel/bpf/syscall.c:5132
       do_syscall_x64 arch/x86/entry/common.c:46 [inline]
       do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:76
       entry_SYSCALL_64_after_hwframe+0x68/0xd2

other info that might help us debug this:

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&trie->lock);
                               lock(&zone->lock);
                               lock(&trie->lock);
  lock(&zone->lock);

 *** DEADLOCK ***

2 locks held by syz.9.1395/9327:
 #0: ffffffff8cb2b2a0 (rcu_read_lock){....}-{1:2}, at: rcu_lock_acquire include/linux/rcupdate.h:350 [inline]
 #0: ffffffff8cb2b2a0 (rcu_read_lock){....}-{1:2}, at: rcu_read_lock include/linux/rcupdate.h:791 [inline]
 #0: ffffffff8cb2b2a0 (rcu_read_lock){....}-{1:2}, at: bpf_map_update_value+0x375/0x670 kernel/bpf/syscall.c:227
 #1: ffff88805744c238 (&trie->lock){-.-.}-{2:2}, at: trie_update_elem+0xc7/0xe80 kernel/bpf/lpm_trie.c:335

stack backtrace:
CPU: 0 PID: 9327 Comm: syz.9.1395 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
Call Trace:
 <TASK>
 dump_stack_lvl+0x188/0x24e lib/dump_stack.c:106
 check_noncircular+0x296/0x330 kernel/locking/lockdep.c:2170
 check_prev_add kernel/locking/lockdep.c:3090 [inline]
 check_prevs_add kernel/locking/lockdep.c:3209 [inline]
 validate_chain kernel/locking/lockdep.c:3825 [inline]
 __lock_acquire+0x2d07/0x7d10 kernel/locking/lockdep.c:5049
 lock_acquire+0x1bb/0x4a0 kernel/locking/lockdep.c:5662
 __raw_spin_lock_irqsave include/linux/spinlock_api_smp.h:110 [inline]
 _raw_spin_lock_irqsave+0xb0/0x100 kernel/locking/spinlock.c:162
 rmqueue_buddy mm/page_alloc.c:3724 [inline]
 rmqueue mm/page_alloc.c:3877 [inline]
 get_page_from_freelist+0x90d/0x1ab0 mm/page_alloc.c:4325
 __alloc_pages+0x1ec/0x4f0 mm/page_alloc.c:5614
 __alloc_pages_node include/linux/gfp.h:237 [inline]
 alloc_pages_node include/linux/gfp.h:260 [inline]
 __kmalloc_large_node+0x8c/0x1e0 mm/slab_common.c:1077
 __do_kmalloc_node mm/slab_common.c:924 [inline]
 __kmalloc_node+0x10e/0x240 mm/slab_common.c:943
 kmalloc_node include/linux/slab.h:589 [inline]
 bpf_map_kmalloc_node+0xb8/0x1a0 kernel/bpf/syscall.c:454
 lpm_trie_node_alloc kernel/bpf/lpm_trie.c:291 [inline]
 trie_update_elem+0x163/0xe80 kernel/bpf/lpm_trie.c:338
 bpf_map_update_value+0x59e/0x670 kernel/bpf/syscall.c:228
 map_update_elem+0x4d7/0x680 kernel/bpf/syscall.c:1473
 __sys_bpf+0x4ec/0x780 kernel/bpf/syscall.c:5018
 __do_sys_bpf kernel/bpf/syscall.c:5134 [inline]
 __se_sys_bpf kernel/bpf/syscall.c:5132 [inline]
 __x64_sys_bpf+0x78/0x90 kernel/bpf/syscall.c:5132
 do_syscall_x64 arch/x86/entry/common.c:46 [inline]
 do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:76
 entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7f99a119acb9
Code: ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 e8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007f99a20fe028 EFLAGS: 00000246 ORIG_RAX: 0000000000000141
RAX: ffffffffffffffda RBX: 00007f99a1415fa0 RCX: 00007f99a119acb9
RDX: 0000000000000020 RSI: 00002000000001c0 RDI: 0000000000000002
RBP: 00007f99a1208bf7 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f99a1416038 R14: 00007f99a1415fa0 R15: 00007ffc686a4b08
 </TASK>
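
Reading the two chains together: in #1, free_pcppages_bulk() acquires &zone->lock, the qspinlock slowpath fires the contention_end tracepoint, and the attached BPF program calls trie_delete_elem(), which takes &trie->lock; in #0, trie_update_elem() already holds &trie->lock when bpf_map_kmalloc_node() falls through to get_page_from_freelist(), which takes &zone->lock. That is a classic AB/BA inversion. The sketch below is a minimal userspace analogy using pthread mutexes, not kernel code; the thread bodies, names, and sleeps are illustrative stand-ins for the kernel paths named in the comments.

/* Userspace analogy of the AB/BA inversion above (plain C, not kernel
 * code): "trie_lock" stands in for &trie->lock and "zone_lock" for
 * &zone->lock. Build with cc -pthread; with both sleeps in place the
 * program deadlocks, matching the two-CPU diagram in the report. */
#include <pthread.h>
#include <unistd.h>

static pthread_mutex_t trie_lock = PTHREAD_MUTEX_INITIALIZER; /* &trie->lock */
static pthread_mutex_t zone_lock = PTHREAD_MUTEX_INITIALIZER; /* &zone->lock */

/* CPU0 in the lockdep diagram: trie_update_elem() holds &trie->lock,
 * then bpf_map_kmalloc_node() reaches get_page_from_freelist(),
 * which wants &zone->lock. */
static void *cpu0(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&trie_lock);
	sleep(1);                       /* widen the race window */
	pthread_mutex_lock(&zone_lock); /* blocks once cpu1 holds it */
	pthread_mutex_unlock(&zone_lock);
	pthread_mutex_unlock(&trie_lock);
	return NULL;
}

/* CPU1: free_pcppages_bulk() holds &zone->lock; the qspinlock slowpath
 * fires contention_end, and the attached BPF program calls
 * trie_delete_elem(), which wants &trie->lock. */
static void *cpu1(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&zone_lock);
	sleep(1);
	pthread_mutex_lock(&trie_lock); /* blocks once cpu0 holds it */
	pthread_mutex_unlock(&trie_lock);
	pthread_mutex_unlock(&zone_lock);
	return NULL;
}

int main(void)
{
	pthread_t a, b;
	pthread_create(&a, NULL, cpu0, NULL);
	pthread_create(&b, NULL, cpu1, NULL);
	pthread_join(a, NULL); /* never returns: both threads are blocked */
	pthread_join(b, NULL);
	return 0;
}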

Crashes (2):
Time             | Kernel      | Commit       | Syzkaller | Config  | Log         | Report | Syz repro | C repro | VM info | Assets                            | Manager                  | Title
2026/01/20 23:25 | linux-6.1.y | cd9b81672742 | 06648d9c  | .config | console log | report | -         | -       | info    | disk image, vmlinux, kernel image | ci2-linux-6-1-kasan-perf | possible deadlock in get_page_from_freelist
2026/01/13 22:47 | linux-6.1.y | bec0e10ee67e | d6526ea3  | .config | console log | report | -         | -       | info    | disk image, vmlinux, kernel image | ci2-linux-6-1-kasan-perf | possible deadlock in get_page_from_freelist
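For completeness, the user-visible trigger path in the #0 trace (__sys_bpf -> map_update_elem -> trie_update_elem) can be driven from a plain bpf(2) syscall. The hedged sketch below (needs root or CAP_BPF; struct lpm_key and all key/value contents are illustrative stand-ins) creates an LPM trie map and inserts one element, exercising the allocation under &trie->lock. It does not reproduce the deadlock on its own, which also requires a BPF program attached to the contention_end tracepoint.

#include <linux/bpf.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/* Illustrative stand-in for struct bpf_lpm_trie_key: a /prefixlen
 * match over four data bytes (e.g. an IPv4 address). */
struct lpm_key {
	uint32_t prefixlen;
	uint8_t data[4];
};

static long sys_bpf(int cmd, union bpf_attr *attr)
{
	return syscall(__NR_bpf, cmd, attr, sizeof(*attr));
}

int main(void)
{
	union bpf_attr attr;

	/* BPF_MAP_CREATE: LPM tries must be created with
	 * BPF_F_NO_PREALLOC, so every update allocates a new node. */
	memset(&attr, 0, sizeof(attr));
	attr.map_type = BPF_MAP_TYPE_LPM_TRIE;
	attr.key_size = sizeof(struct lpm_key);
	attr.value_size = sizeof(uint32_t);
	attr.max_entries = 16;
	attr.map_flags = BPF_F_NO_PREALLOC;
	int fd = sys_bpf(BPF_MAP_CREATE, &attr);
	if (fd < 0) {
		perror("BPF_MAP_CREATE");
		return 1;
	}

	/* BPF_MAP_UPDATE_ELEM lands in trie_update_elem(), which takes
	 * &trie->lock and then allocates the node for the new entry. */
	struct lpm_key key = { .prefixlen = 24, .data = { 192, 168, 0, 0 } };
	uint32_t value = 1;
	memset(&attr, 0, sizeof(attr));
	attr.map_fd = fd;
	attr.key = (uint64_t)(uintptr_t)&key;
	attr.value = (uint64_t)(uintptr_t)&value;
	if (sys_bpf(BPF_MAP_UPDATE_ELEM, &attr) < 0) {
		perror("BPF_MAP_UPDATE_ELEM");
		return 1;
	}
	puts("lpm trie element inserted");
	return 0;
}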