syzbot

INFO: rcu detected stall in br_handle_frame

Status: upstream: reported on 2025/11/21 04:49
Reported-by: syzbot+a0b26fcabafa4609b88c@syzkaller.appspotmail.com
First crash: 69d, last: 2d02h
Similar bugs (15)
Kernel | Title | Rank | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
upstream | INFO: rcu detected stall in br_handle_frame (5) bridge | 1 | syz | | | 24 | 446d | 474d | 28/29 | fixed on 2024/11/12 23:31
linux-4.14 | INFO: rcu detected stall in br_handle_frame (3) | 1 | | | | 1 | 1947d | 1947d | 0/1 | auto-closed as invalid on 2021/01/28 07:46
upstream | INFO: rcu detected stall in br_handle_frame | 1 | C | done | | 341 | 2327d | 2333d | 13/29 | fixed on 2019/10/09 10:54
upstream | INFO: rcu detected stall in br_handle_frame (2) net | 1 | C | done | | 2 | 2232d | 2228d | 15/29 | fixed on 2020/02/18 14:31
upstream | INFO: rcu detected stall in br_handle_frame (3) bridge | 1 | | | | 1 | 1657d | 1657d | 0/29 | auto-closed as invalid on 2021/10/15 13:41
linux-4.14 | INFO: rcu detected stall in br_handle_frame (2) | 1 | C | done | | 1 | 2232d | 2232d | 1/1 | fixed on 2020/01/19 15:05
linux-4.14 | INFO: rcu detected stall in br_handle_frame | 1 | C | done | | 15 | 2325d | 2336d | 1/1 | fixed on 2019/12/07 19:24
linux-4.19 | INFO: rcu detected stall in br_handle_frame (2) | 1 | C | error | | 31 | 1092d | 1933d | 0/1 | upstream: reported C repro on 2020/10/14 18:56
linux-4.19 | INFO: rcu detected stall in br_handle_frame | 1 | C | done | | 41 | 2324d | 2337d | 1/1 | fixed on 2019/12/07 19:18
linux-6.1 | INFO: rcu detected stall in br_handle_frame (2) | 1 | | | | 4 | 1d03h | 115d | 0/3 | upstream: reported on 2025/10/06 18:18
linux-5.15 | INFO: rcu detected stall in br_handle_frame origin:lts-only | 1 | C | inconclusive | | 3 | 70d | 721d | 0/3 | upstream: reported C repro on 2024/02/08 13:52
upstream | INFO: rcu detected stall in br_handle_frame (6) bridge | 1 | C | error | | 52 | 11d | 16d | 0/29 | upstream: reported C repro on 2026/01/13 18:06
linux-6.1 | INFO: rcu detected stall in br_handle_frame | 1 | | | | 2 | 520d | 603d | 0/3 | auto-obsoleted due to no activity on 2024/12/04 21:21
upstream | INFO: rcu detected stall in br_handle_frame (4) kernel | 1 | | | | 1 | 1496d | 1496d | 0/29 | closed as invalid on 2022/02/08 10:10
android-5-15 | BUG: soft lockup in br_handle_frame | 1 | | | | 2 | 529d | 535d | 0/2 | auto-obsoleted due to no activity on 2024/11/16 05:31

Sample crash report:
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
rcu: 	Tasks blocked on level-0 rcu_node (CPUs 0-1): P7287/1:b..l
rcu: 	(detected by 0, t=10502 jiffies, g=28169, q=226 ncpus=2)
task:syz.2.455       state:R  running task     stack:21928 pid:7287  ppid:5769   flags:0x00004006
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5381 [inline]
 __schedule+0x1553/0x45a0 kernel/sched/core.c:6700
 preempt_schedule_notrace+0xdd/0x110 kernel/sched/core.c:6960
 preempt_schedule_notrace_thunk+0x1a/0x30 arch/x86/entry/thunk_64.S:46
 trace_lock_acquire include/trace/events/lock.h:24 [inline]
 lock_acquire+0x405/0x420 kernel/locking/lockdep.c:5725
 rcu_lock_acquire include/linux/rcupdate.h:334 [inline]
 rcu_read_lock include/linux/rcupdate.h:786 [inline]
 __pte_offset_map+0x48/0x2c0 mm/pgtable-generic.c:287
 __pte_offset_map_lock+0x47/0x1d0 mm/pgtable-generic.c:371
 pte_offset_map_lock include/linux/mm.h:3016 [inline]
 __get_locked_pte mm/memory.c:1823 [inline]
 get_locked_pte include/linux/mm.h:2741 [inline]
 insert_page mm/memory.c:1865 [inline]
 vm_insert_page+0x558/0x8c0 mm/memory.c:2021
 kcov_mmap+0xe9/0x160 kernel/kcov.c:505
 call_mmap include/linux/fs.h:2023 [inline]
 mmap_file mm/internal.h:98 [inline]
 __mmap_region mm/mmap.c:2790 [inline]
 mmap_region+0xf8e/0x2000 mm/mmap.c:2941
 do_mmap+0x92c/0x10a0 mm/mmap.c:1385
 vm_mmap_pgoff+0x1c4/0x3f0 mm/util.c:556
 ksys_mmap_pgoff+0x520/0x700 mm/mmap.c:1431
 do_syscall_x64 arch/x86/entry/common.c:46 [inline]
 do_syscall_64+0x55/0xa0 arch/x86/entry/common.c:76
 entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7fd97559ac22
RSP: 002b:00007fff05a2a258 EFLAGS: 00000206 ORIG_RAX: 0000000000000009
RAX: ffffffffffffffda RBX: 00007fd9733f6000 RCX: 00007fd97559ac22
RDX: 0000000000000003 RSI: 0000000000400000 RDI: 00007fd9733f6000
RBP: 0000000000000011 R08: 00000000000000db R09: 0000000000000000
R10: 0000000000000011 R11: 0000000000000206 R12: 0000000000000003
R13: 0000000000000003 R14: 0000000000000000 R15: 00007fd975815fa0
 </TASK>
rcu: rcu_preempt kthread starved for 1465 jiffies! g28169 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=1
rcu: 	Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
rcu: RCU grace-period kthread stack dump:
task:rcu_preempt     state:R  running task     stack:26920 pid:17    ppid:2      flags:0x00004000
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5381 [inline]
 __schedule+0x1553/0x45a0 kernel/sched/core.c:6700
 schedule+0xbd/0x170 kernel/sched/core.c:6774
 schedule_timeout+0x188/0x2d0 kernel/time/timer.c:2168
 rcu_gp_fqs_loop+0x313/0x1590 kernel/rcu/tree.c:1667
 rcu_gp_kthread+0x9d/0x3b0 kernel/rcu/tree.c:1866
 kthread+0x2fa/0x390 kernel/kthread.c:388
 ret_from_fork+0x48/0x80 arch/x86/kernel/process.c:152
 ret_from_fork_asm+0x11/0x20 arch/x86/entry/entry_64.S:293
 </TASK>
rcu: Stack dump where RCU GP kthread last ran:
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 PID: 7291 Comm: syz.2.455 Not tainted syzkaller #0
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
RIP: 0010:__this_cpu_preempt_check+0x0/0x20 lib/smp_processor_id.c:65
Code: 48 89 da e8 12 6a f4 ff 48 8b 74 24 30 48 c7 c7 60 81 1c 8b e8 01 6a f4 ff e8 4c 80 ff ff eb a8 e8 e5 e8 ff ff 0f 1f 44 00 00 <f3> 0f 1e fa 48 89 fe 48 c7 c7 e0 80 1c 8b e9 dd fe ff ff cc cc cc
RSP: 0018:ffffc900001ef2d8 EFLAGS: 00000006
RAX: ffffffff8190ff60 RBX: 0000000000000001 RCX: ffff88802b2f8000
RDX: 0000000000010100 RSI: 0000000000000001 RDI: ffffffff8acfe8c0
RBP: ffffc900001ef3d8 R08: 0000000000000001 R09: ffff8880b8f36930
R10: ffffe8ffffd9a02c R11: fffff91ffffb3407 R12: ffffffff8cffd180
R13: 1ffff9200003de68 R14: ffff8880b8f36930 R15: 0000000000000000
FS:  00007fd9764136c0(0000) GS:ffff8880b8f00000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007feee69abb40 CR3: 0000000025e1d000 CR4: 00000000003506e0
Call Trace:
 <IRQ>
 trace_call_bpf+0x5e9/0x6c0 kernel/trace/bpf_trace.c:148
 perf_trace_run_bpf_submit+0x7a/0x1c0 kernel/events/core.c:10263
 perf_trace_lock_acquire+0x34f/0x410 include/trace/events/lock.h:24
 trace_lock_acquire include/trace/events/lock.h:24 [inline]
 lock_acquire+0x3ef/0x420 kernel/locking/lockdep.c:5725
 seqcount_lockdep_reader_access+0xd1/0x1d0 include/linux/seqlock.h:102
 timekeeping_get_delta kernel/time/timekeeping.c:254 [inline]
 timekeeping_get_ns kernel/time/timekeeping.c:388 [inline]
 ktime_get+0x7f/0x280 kernel/time/timekeeping.c:848
 hrtimer_forward_now include/linux/hrtimer.h:509 [inline]
 perf_swevent_hrtimer+0x2c9/0x570 kernel/events/core.c:11193
 __run_hrtimer kernel/time/hrtimer.c:1750 [inline]
 __hrtimer_run_queues+0x4eb/0xc40 kernel/time/hrtimer.c:1814
 hrtimer_interrupt+0x3c9/0x9c0 kernel/time/hrtimer.c:1876
 local_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1077 [inline]
 __sysvec_apic_timer_interrupt+0xfb/0x3b0 arch/x86/kernel/apic/apic.c:1094
 instr_sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1088 [inline]
 sysvec_apic_timer_interrupt+0x51/0xc0 arch/x86/kernel/apic/apic.c:1088
 asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:687
RIP: 0010:__preempt_count_add arch/x86/include/asm/preempt.h:80 [inline]
RIP: 0010:rcu_is_watching+0x6/0xb0 kernel/rcu/tree.c:699
Code: 8a e8 be 2e 13 09 48 c7 c7 40 88 13 8d 4c 89 f6 e8 3f 9e eb 02 e9 44 ff ff ff 66 2e 0f 1f 84 00 00 00 00 00 f3 0f 1e fa 41 57 <41> 56 53 65 ff 05 98 d9 92 7e e8 7b 44 13 09 89 c3 83 f8 08 73 60
RSP: 0018:ffffc900001efd50 EFLAGS: 00000257
RAX: 0000000000000001 RBX: 0000000000000000 RCX: ffffffff81682307
RDX: 0000000000000000 RSI: 0000000000000008 RDI: ffffffff8e8ad9e8
RBP: ffffc900001efe68 R08: ffffffff8e8ad9ef R09: 1ffffffff1d15b3d
R10: dffffc0000000000 R11: fffffbfff1d15b3e R12: 1ffff9200003dfb8
R13: ffffffff8d131fe0 R14: 0000000000000001 R15: dffffc0000000000
 trace_lock_acquire include/trace/events/lock.h:24 [inline]
 lock_acquire+0xc6/0x420 kernel/locking/lockdep.c:5725
 rcu_lock_acquire include/linux/rcupdate.h:334 [inline]
 rcu_read_lock include/linux/rcupdate.h:786 [inline]
 trace_call_bpf+0xe4/0x6c0 kernel/trace/bpf_trace.c:142
 perf_trace_run_bpf_submit+0x7a/0x1c0 kernel/events/core.c:10263
 perf_trace_lock_acquire+0x34f/0x410 include/trace/events/lock.h:24
 trace_lock_acquire include/trace/events/lock.h:24 [inline]
 lock_acquire+0x3ef/0x420 kernel/locking/lockdep.c:5725
 rcu_lock_acquire include/linux/rcupdate.h:334 [inline]
 rcu_read_lock include/linux/rcupdate.h:786 [inline]
 nf_hook include/linux/netfilter.h:228 [inline]
 NF_HOOK+0x111/0x3a0 include/linux/netfilter.h:302
 br_handle_frame_finish+0x14b9/0x19b0 net/bridge/br_input.c:221
 br_nf_hook_thresh+0x3cd/0x4a0 net/bridge/br_netfilter_hooks.c:1184
 br_nf_pre_routing_finish_ipv6+0x9dc/0xd00 net/bridge/br_netfilter_ipv6.c:-1
 NF_HOOK include/linux/netfilter.h:304 [inline]
 br_nf_pre_routing_ipv6+0x349/0x6b0 net/bridge/br_netfilter_ipv6.c:184
 nf_hook_entry_hookfn include/linux/netfilter.h:144 [inline]
 nf_hook_bridge_pre net/bridge/br_input.c:277 [inline]
 br_handle_frame+0x96b/0x14e0 net/bridge/br_input.c:424
 __netif_receive_skb_core+0xfab/0x3af0 net/core/dev.c:5532
 __netif_receive_skb_one_core net/core/dev.c:5636 [inline]
 __netif_receive_skb+0x74/0x290 net/core/dev.c:5752
 process_backlog+0x396/0x700 net/core/dev.c:6080
 __napi_poll+0xc0/0x460 net/core/dev.c:6642
 napi_poll net/core/dev.c:6709 [inline]
 net_rx_action+0x616/0xc50 net/core/dev.c:6846
 handle_softirqs+0x280/0x820 kernel/softirq.c:578
 __do_softirq kernel/softirq.c:612 [inline]
 invoke_softirq kernel/softirq.c:452 [inline]
 __irq_exit_rcu+0xd3/0x190 kernel/softirq.c:661
 irq_exit_rcu+0x9/0x20 kernel/softirq.c:673
 instr_sysvec_apic_timer_interrupt arch/x86/kernel/apic/apic.c:1088 [inline]
 sysvec_apic_timer_interrupt+0xa4/0xc0 arch/x86/kernel/apic/apic.c:1088
 </IRQ>
 <TASK>
 asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:687
RIP: 0010:in_lock_functions+0x4/0x20 kernel/locking/spinlock.c:412
Code: 03 38 c1 0f 8c 77 ff ff ff 48 c7 c7 80 ea 1f 93 e8 71 d6 75 00 e9 66 ff ff ff cc cc cc cc cc cc cc cc cc cc cc cc f3 0f 1e fa <48> 81 ff a0 b0 8d 8a 0f 93 c0 48 81 ff 0c dc 8d 8a 0f 92 c1 20 c1
RSP: 0018:ffffc90003367a28 EFLAGS: 00000246
RAX: 0000000000000001 RBX: ffffffff8a8db1a2 RCX: ffffffff972c5403
RDX: ffffc9000d94d000 RSI: 000000000004e55b RDI: ffffffff8a8db1a2
RBP: ffffc90003367b50 R08: ffff88807702ae6f R09: 0000000000000000
R10: ffff88807702ae60 R11: ffffed100ee055ce R12: 1ffff9200066cf54
R13: ffff88807702ad38 R14: ffff88807702ad38 R15: dffffc0000000000
 get_lock_parent_ip include/linux/ftrace.h:974 [inline]
 preempt_latency_start kernel/sched/core.c:5827 [inline]
 preempt_count_add+0x91/0x1a0 kernel/sched/core.c:5852
 __raw_spin_lock include/linux/spinlock_api_smp.h:132 [inline]
 _raw_spin_lock+0x12/0x40 kernel/locking/spinlock.c:154
 spin_lock include/linux/spinlock.h:351 [inline]
 d_instantiate+0x64/0x90 fs/dcache.c:2043
 alloc_file_pseudo+0x173/0x210 fs/file_table.c:335
 __anon_inode_getfile fs/anon_inodes.c:122 [inline]
 __anon_inode_getfd fs/anon_inodes.c:207 [inline]
 anon_inode_getfd+0xca/0x1c0 fs/anon_inodes.c:242
 bpf_prog_load+0x1296/0x1670 kernel/bpf/syscall.c:2783
 __sys_bpf+0x5ba/0x890 kernel/bpf/syscall.c:5476
 __do_sys_bpf kernel/bpf/syscall.c:5580 [inline]
 __se_sys_bpf kernel/bpf/syscall.c:5578 [inline]
 __x64_sys_bpf+0x7c/0x90 kernel/bpf/syscall.c:5578
 do_syscall_x64 arch/x86/entry/common.c:46 [inline]
 do_syscall_64+0x55/0xa0 arch/x86/entry/common.c:76
 entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7fd97559aeb9
Code: ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 e8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fd976413028 EFLAGS: 00000246 ORIG_RAX: 0000000000000141
RAX: ffffffffffffffda RBX: 00007fd975816090 RCX: 00007fd97559aeb9
RDX: 0000000000000094 RSI: 00002000000004c0 RDI: 0000000000000005
RBP: 00007fd975608c1f R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fd975816128 R14: 00007fd975816090 R15: 00007fff05a2a1a8
 </TASK>
net_ratelimit: 432 callbacks suppressed
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:8e:94:9a:d2:6c:57, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:8e:94:9a:d2:6c:57, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:8e:94:9a:d2:6c:57, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:8e:94:9a:d2:6c:57, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:8e:94:9a:d2:6c:57, vlan:0)
net_ratelimit: 5631 callbacks suppressed
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:8e:94:9a:d2:6c:57, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:8e:94:9a:d2:6c:57, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:8e:94:9a:d2:6c:57, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:8e:94:9a:d2:6c:57, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:8e:94:9a:d2:6c:57, vlan:0)

Crashes (8):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2026/01/27 18:09 | linux-6.6.y | cbb31f77b879 | 9a514c2f | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-6-kasan-perf | INFO: rcu detected stall in br_handle_frame
2026/01/18 22:46 | linux-6.6.y | cbb31f77b879 | 20d37d28 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-6-kasan-perf | INFO: rcu detected stall in br_handle_frame
2026/01/04 19:22 | linux-6.6.y | 5fa4793a2d2d | d6526ea3 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-6-kasan-perf | INFO: rcu detected stall in br_handle_frame
2025/12/29 21:27 | linux-6.6.y | 5fa4793a2d2d | d6526ea3 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-6-kasan-perf | INFO: rcu detected stall in br_handle_frame
2025/12/04 10:48 | linux-6.6.y | 4791134e4aeb | d6526ea3 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-6-kasan-perf | INFO: rcu detected stall in br_handle_frame
2025/12/01 00:32 | linux-6.6.y | 1e89a1be4fe9 | d6526ea3 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-6-kasan-perf | INFO: rcu detected stall in br_handle_frame
2025/11/26 09:45 | linux-6.6.y | 1e89a1be4fe9 | 64219f15 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-6-kasan-perf | INFO: rcu detected stall in br_handle_frame
2025/11/21 04:48 | linux-6.6.y | 0a805b6ea8cd | 2cc4c24a | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci2-linux-6-6-kasan-perf | INFO: rcu detected stall in br_handle_frame
* Struck through repros no longer work on HEAD.