syzbot


INFO: rcu detected stall in sys_execveat

Status: auto-obsoleted due to no activity on 2025/11/19 05:21
Subsystems: mm
First crash: 165d, last: 165d

Sample crash report:
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:1e:e7:ee:4d:29:7a, vlan:0)
rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
rcu: 	Tasks blocked on level-0 rcu_node (CPUs 0-1): P9615/1:b..l
rcu: 	(detected by 1, t=10502 jiffies, g=37421, q=1411 ncpus=1)
task:syz.3.765       state:R  running task     stack:26176 pid:9615  tgid:9614  ppid:5860   task_flags:0x400040 flags:0x00004002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5357 [inline]
 __schedule+0x1190/0x5de0 kernel/sched/core.c:6961
 preempt_schedule_irq+0x51/0x90 kernel/sched/core.c:7288
 irqentry_exit+0x36/0x90 kernel/entry/common.c:197
 asm_sysvec_apic_timer_interrupt+0x1a/0x20 arch/x86/include/asm/idtentry.h:702
RIP: 0010:unwind_next_frame+0xd78/0x20a0 arch/x86/kernel/unwind_orc.c:591
Code: 00 0f 85 c6 12 00 00 4c 89 e2 4d 89 75 38 48 b8 00 00 00 00 00 fc ff df 48 c1 ea 03 80 3c 02 00 0f 85 86 12 00 00 49 8d 7d 58 <49> c7 45 50 00 00 00 00 48 b8 00 00 00 00 00 fc ff df 48 89 fa 48
RSP: 0018:ffffc900046deac0 EFLAGS: 00000246
RAX: dffffc0000000000 RBX: 0000000000000001 RCX: ffffffff9140cf58
RDX: 1ffff920008dbd7a RSI: 1ffff920008dbd79 RDI: ffffc900046debd8
RBP: ffffc900046debc8 R08: ffffffff9140cf5c R09: 0000000000000000
R10: ffffc900046deb80 R11: 0000000000006ada R12: ffffc900046debd0
R13: ffffc900046deb80 R14: ffffc900046deb80 R15: ffffc900046debb4
 __unwind_start+0x45f/0x7f0 arch/x86/kernel/unwind_orc.c:758
 unwind_start arch/x86/include/asm/unwind.h:64 [inline]
 arch_stack_walk+0x73/0x100 arch/x86/kernel/stacktrace.c:24
 stack_trace_save+0x8e/0xc0 kernel/stacktrace.c:122
 kasan_save_stack+0x33/0x60 mm/kasan/common.c:47
 kasan_save_track+0x14/0x30 mm/kasan/common.c:68
 poison_kmalloc_redzone mm/kasan/common.c:388 [inline]
 __kasan_kmalloc+0xaa/0xb0 mm/kasan/common.c:405
 kmalloc_noprof include/linux/slab.h:905 [inline]
 add_stack_record_to_list mm/page_owner.c:172 [inline]
 inc_stack_record_count mm/page_owner.c:214 [inline]
 __set_page_owner+0x32e/0x550 mm/page_owner.c:333
 set_page_owner include/linux/page_owner.h:32 [inline]
 post_alloc_hook+0x1c0/0x230 mm/page_alloc.c:1851
 prep_new_page mm/page_alloc.c:1859 [inline]
 get_page_from_freelist+0x132b/0x38e0 mm/page_alloc.c:3858
 __alloc_frozen_pages_noprof+0x261/0x23f0 mm/page_alloc.c:5148
 alloc_pages_mpol+0x1fb/0x550 mm/mempolicy.c:2416
 alloc_slab_page mm/slub.c:2487 [inline]
 allocate_slab mm/slub.c:2655 [inline]
 new_slab+0x247/0x330 mm/slub.c:2709
 ___slab_alloc+0xcf2/0x1740 mm/slub.c:3891
 __slab_alloc.constprop.0+0x56/0xb0 mm/slub.c:3981
 __slab_alloc_node mm/slub.c:4056 [inline]
 slab_alloc_node mm/slub.c:4217 [inline]
 kmem_cache_alloc_node_noprof+0xf5/0x3b0 mm/slub.c:4281
 __alloc_skb+0x2b2/0x380 net/core/skbuff.c:659
 alloc_skb include/linux/skbuff.h:1336 [inline]
 nlmsg_new include/net/netlink.h:1055 [inline]
 audit_buffer_alloc kernel/audit.c:1795 [inline]
 audit_log_start+0x2ea/0x7f0 kernel/audit.c:1913
 integrity_audit_message+0x10c/0x580 security/integrity/integrity_audit.c:47
 integrity_audit_msg+0x41/0x60 security/integrity/integrity_audit.c:32
 ima_store_measurement+0x3b6/0x5c0 security/integrity/ima/ima_api.c:378
 process_measurement+0x1ddb/0x23e0 security/integrity/ima/ima_main.c:413
 ima_bprm_check+0xe7/0x210 security/integrity/ima/ima_main.c:580
 ima_bprm_creds_for_exec+0x54/0x70 security/integrity/ima/ima_main.c:615
 security_bprm_creds_for_exec+0xca/0x1e0 security/security.c:1261
 bprm_execve fs/exec.c:1750 [inline]
 bprm_execve+0x470/0x1640 fs/exec.c:1730
 do_execveat_common.isra.0+0x4a5/0x610 fs/exec.c:1860
 do_execveat fs/exec.c:1945 [inline]
 __do_sys_execveat fs/exec.c:2019 [inline]
 __se_sys_execveat fs/exec.c:2013 [inline]
 __x64_sys_execveat+0xda/0x120 fs/exec.c:2013
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xcd/0x490 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f98c638ebe9
RSP: 002b:00007f98c72bb038 EFLAGS: 00000246 ORIG_RAX: 0000000000000142
RAX: ffffffffffffffda RBX: 00007f98c65b5fa0 RCX: 00007f98c638ebe9
RDX: 0000000000000000 RSI: 0000200000000040 RDI: 0000000000000003
RBP: 00007f98c72bb090 R08: 0000000000011000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000002
R13: 00007f98c65b6038 R14: 00007f98c65b5fa0 R15: 00007ffd1b8d3c48
 </TASK>
rcu: rcu_preempt kthread starved for 216 jiffies! g37421 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=1
rcu: 	Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
rcu: RCU grace-period kthread stack dump:
task:rcu_preempt     state:R  running task     stack:27928 pid:16    tgid:16    ppid:2      task_flags:0x208040 flags:0x00004000
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5357 [inline]
 __schedule+0x1190/0x5de0 kernel/sched/core.c:6961
 __schedule_loop kernel/sched/core.c:7043 [inline]
 schedule+0xe7/0x3a0 kernel/sched/core.c:7058
 schedule_timeout+0x123/0x290 kernel/time/sleep_timeout.c:99
 rcu_gp_fqs_loop+0x1ea/0xb00 kernel/rcu/tree.c:2083
 rcu_gp_kthread+0x270/0x380 kernel/rcu/tree.c:2285
 kthread+0x3c5/0x780 kernel/kthread.c:463
 ret_from_fork+0x5d4/0x6f0 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
rcu: Stack dump where RCU GP kthread last ran:
CPU: 1 UID: 0 PID: 3423 Comm: kworker/R-bat_e Not tainted syzkaller #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 07/12/2025
Workqueue: bat_events batadv_tt_purge
RIP: 0010:orc_ip arch/x86/kernel/unwind_orc.c:80 [inline]
RIP: 0010:__orc_find+0x83/0xf0 arch/x86/kernel/unwind_orc.c:102
Code: 02 48 01 f2 48 d1 fa 48 8d 5c 95 00 48 89 da 48 c1 ea 03 0f b6 34 0a 48 89 da 83 e2 07 83 c2 03 40 38 f2 7c 05 40 84 f6 75 4b <48> 63 13 48 01 da 49 39 d5 73 af 4c 8d 63 fc 49 39 ec 73 b2 4d 29
RSP: 0018:ffffc90000a07c68 EFLAGS: 00000246
RAX: ffffffff914f3af0 RBX: ffffffff90c62880 RCX: dffffc0000000000
RDX: 0000000000000003 RSI: 0000000000000000 RDI: ffffffff90c62860
RBP: ffffffff90c62860 R08: ffffffff914f3b56 R09: 0000000000000000
R10: ffffc90000a07d18 R11: 0000000000012035 R12: ffffffff90c628a0
R13: ffffffff822035b3 R14: ffffffff90c62860 R15: ffffffff90c62860
FS:  0000000000000000(0000) GS:ffff8881247c4000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000055557f62c5c8 CR3: 0000000028ce2000 CR4: 00000000003526f0
Call Trace:
 <IRQ>
 orc_find arch/x86/kernel/unwind_orc.c:227 [inline]
 unwind_next_frame+0x2ec/0x20a0 arch/x86/kernel/unwind_orc.c:494
 arch_stack_walk+0x94/0x100 arch/x86/kernel/stacktrace.c:25
 stack_trace_save+0x8e/0xc0 kernel/stacktrace.c:122
 kasan_save_stack+0x33/0x60 mm/kasan/common.c:47
 kasan_save_track+0x14/0x30 mm/kasan/common.c:68
 unpoison_slab_object mm/kasan/common.c:330 [inline]
 __kasan_slab_alloc+0x89/0x90 mm/kasan/common.c:356
 kasan_slab_alloc include/linux/kasan.h:250 [inline]
 slab_post_alloc_hook mm/slub.c:4180 [inline]
 slab_alloc_node mm/slub.c:4229 [inline]
 kmem_cache_alloc_noprof+0x1cb/0x3b0 mm/slub.c:4236
 skb_ext_maybe_cow net/core/skbuff.c:6994 [inline]
 skb_ext_add+0xf8/0x7a0 net/core/skbuff.c:7068
 nf_bridge_unshare net/bridge/br_netfilter_hooks.c:169 [inline]
 br_nf_forward_ip.part.0+0x28/0x810 net/bridge/br_netfilter_hooks.c:684
 br_nf_forward_ip net/bridge/br_netfilter_hooks.c:679 [inline]
 br_nf_forward+0xf0f/0x1be0 net/bridge/br_netfilter_hooks.c:776
 nf_hook_entry_hookfn include/linux/netfilter.h:158 [inline]
 nf_hook_slow+0xbb/0x200 net/netfilter/core.c:623
 nf_hook+0x45e/0x780 include/linux/netfilter.h:273
 NF_HOOK include/linux/netfilter.h:316 [inline]
 __br_forward+0x1be/0x5b0 net/bridge/br_forward.c:115
 deliver_clone net/bridge/br_forward.c:131 [inline]
 br_flood+0x39c/0x650 net/bridge/br_forward.c:249
 br_handle_frame_finish+0xf2d/0x1ca0 net/bridge/br_input.c:221
 br_nf_hook_thresh+0x304/0x410 net/bridge/br_netfilter_hooks.c:1170
 br_nf_pre_routing_finish_ipv6+0x76a/0xfb0 net/bridge/br_netfilter_ipv6.c:154
 NF_HOOK include/linux/netfilter.h:318 [inline]
 br_nf_pre_routing_ipv6+0x3cd/0x8c0 net/bridge/br_netfilter_ipv6.c:184
 br_nf_pre_routing+0x860/0x15b0 net/bridge/br_netfilter_hooks.c:508
 nf_hook_entry_hookfn include/linux/netfilter.h:158 [inline]
 nf_hook_bridge_pre net/bridge/br_input.c:283 [inline]
 br_handle_frame+0xad8/0x14b0 net/bridge/br_input.c:434
 __netif_receive_skb_core.constprop.0+0xa25/0x48c0 net/core/dev.c:5866
 __netif_receive_skb_one_core+0xb0/0x1e0 net/core/dev.c:5977
 __netif_receive_skb+0x1d/0x160 net/core/dev.c:6092
 process_backlog+0x442/0x15e0 net/core/dev.c:6444
 __napi_poll.constprop.0+0xba/0x550 net/core/dev.c:7494
 napi_poll net/core/dev.c:7557 [inline]
 net_rx_action+0xa9f/0xfe0 net/core/dev.c:7684
 handle_softirqs+0x219/0x8e0 kernel/softirq.c:579
 do_softirq kernel/softirq.c:480 [inline]
 do_softirq+0xb2/0xf0 kernel/softirq.c:467
 </IRQ>
 <TASK>
 __local_bh_enable_ip+0x100/0x120 kernel/softirq.c:407
 spin_unlock_bh include/linux/spinlock.h:396 [inline]
 batadv_tt_global_purge net/batman-adv/translation-table.c:2250 [inline]
 batadv_tt_purge+0x25f/0xb80 net/batman-adv/translation-table.c:3510
 process_one_work+0x9cf/0x1b70 kernel/workqueue.c:3236
 process_scheduled_works kernel/workqueue.c:3319 [inline]
 rescuer_thread+0x620/0xea0 kernel/workqueue.c:3496
 kthread+0x3c5/0x780 kernel/kthread.c:463
 ret_from_fork+0x5d4/0x6f0 arch/x86/kernel/process.c:148
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
net_ratelimit: 10747 callbacks suppressed
bridge0: received packet on veth0_to_bridge with own address as source address (addr:1e:e7:ee:4d:29:7a, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:1e:e7:ee:4d:29:7a, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:1e:e7:ee:4d:29:7a, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
net_ratelimit: 16099 callbacks suppressed
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:1e:e7:ee:4d:29:7a, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:1b, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:1e:e7:ee:4d:29:7a, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)
bridge0: received packet on bridge_slave_0 with own address as source address (addr:aa:aa:aa:aa:aa:0c, vlan:0)

Crashes (1):

Time:      2025/08/21 05:17
Kernel:    upstream
Commit:    41cd3fd15263
Syzkaller: 0b9605c8
Config:    .config
Log:       console log
Report:    report
Syz repro: -
C repro:   -
VM info:   info
Assets:    [disk image] [vmlinux] [kernel image]
Manager:   ci-qemu-gce-upstream-auto
Title:     INFO: rcu detected stall in sys_execveat