syzbot


INFO: task hung in cangw_pernet_exit_batch (5)

Status: upstream: reported syz repro on 2025/12/18 20:12
Subsystems: can
Reported-by: syzbot+6461a4c663b104fd1169@syzkaller.appspotmail.com
First crash: 104d, last: 7d07h
Cause bisection: failed (error log, bisect log)
Fix bisection: failed (error log, bisect log)
Discussions (1)
Title | Replies (including bot) | Last reply
[syzbot] [can?] INFO: task hung in cangw_pernet_exit_batch (5) | 1 (2) | 2025/12/20 08:51
Similar bugs (6)
Kernel | Title | Rank | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
upstream | INFO: task hung in cangw_pernet_exit_batch (2) can | 1 | | | | 16 | 974d | 1232d | 0/29 | auto-obsoleted due to no activity on 2023/10/19 02:51
upstream | INFO: task hung in cangw_pernet_exit_batch (3) can | 1 | | | | 33 | 620d | 636d | 26/29 | fixed on 2024/07/09 19:14
upstream | INFO: task hung in cangw_pernet_exit_batch (4) can | 1 | | | | 24 | 447d | 582d | 0/29 | auto-obsoleted due to no activity on 2025/03/20 17:55
upstream | INFO: task hung in cangw_pernet_exit_batch can | 1 | | | | 11 | 1349d | 1362d | 0/29 | auto-obsoleted due to no activity on 2022/10/09 07:17
linux-6.1 | INFO: task hung in cangw_pernet_exit_batch (2) | 1 | | | | 18 | 641d | 668d | 0/3 | auto-obsoleted due to no activity on 2024/08/27 12:37
linux-6.1 | INFO: task hung in cangw_pernet_exit_batch | 1 | | | | 2 | 1020d | 1039d | 0/3 | auto-obsoleted due to no activity on 2023/09/13 14:11
Last patch testing requests (2)
Created | Duration | User | Patch | Repo | Result
2025/12/28 20:06 | 2h18m | retest repro | | net-next | error
2025/12/28 20:37 | 1h39m | retest repro | | net-next | OK log

Sample crash report:
INFO: task kworker/u8:7:1105 blocked for more than 144 seconds.
      Tainted: G             L      syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/u8:7    state:D stack:25608 pid:1105  tgid:1105  ppid:2      task_flags:0x4208060 flags:0x00080000
Workqueue: netns cleanup_net
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5295 [inline]
 __schedule+0xfee/0x6120 kernel/sched/core.c:6908
 __schedule_loop kernel/sched/core.c:6990 [inline]
 schedule+0xdd/0x390 kernel/sched/core.c:7005
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:7062
 __mutex_lock_common kernel/locking/mutex.c:692 [inline]
 __mutex_lock+0xc9a/0x1b90 kernel/locking/mutex.c:776
 cangw_pernet_exit_batch+0x15/0xa0 net/can/gw.c:1294
 ops_exit_list net/core/net_namespace.c:205 [inline]
 ops_undo_list+0x363/0xab0 net/core/net_namespace.c:252
 cleanup_net+0x499/0x920 net/core/net_namespace.c:704
 process_one_work+0xa23/0x19a0 kernel/workqueue.c:3276
 process_scheduled_works kernel/workqueue.c:3359 [inline]
 worker_thread+0x5ef/0xe50 kernel/workqueue.c:3440
 kthread+0x370/0x450 kernel/kthread.c:436
 ret_from_fork+0x754/0xd80 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>

Showing all locks held in the system:
2 locks held by kthreadd/2:
1 lock held by kworker/R-kvfre/6:
 #0: ffffffff8e694d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x27/0x420 kernel/workqueue.c:2691
2 locks held by kworker/0:1/10:
3 locks held by kworker/u8:0/12:
3 locks held by kworker/u8:1/13:
1 lock held by kworker/R-mm_pe/14:
 #0: ffffffff8e694d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2749 [inline]
 #0: ffffffff8e694d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0xb7b/0x14a0 kernel/workqueue.c:3611
2 locks held by kworker/1:0/24:
1 lock held by khungtaskd/31:
 #0: ffffffff8e7e7420 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:312 [inline]
 #0: ffffffff8e7e7420 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:850 [inline]
 #0: ffffffff8e7e7420 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x3d/0x184 kernel/locking/lockdep.c:6775
5 locks held by kworker/u8:2/36:
3 locks held by kworker/u8:3/49:
4 locks held by kworker/u8:4/58:
 #0: ffff88803669b948 ((wq_completion)wg-kex-wg2){+.+.}-{0:0}, at: process_one_work+0x1310/0x19a0 kernel/workqueue.c:3251
 #1: ffffc9000210fd08 ((work_completion)(&peer->transmit_handshake_work)){+.+.}-{0:0}, at: process_one_work+0x988/0x19a0 kernel/workqueue.c:3252
 #2: ffff8880257fd348 (&wg->static_identity.lock){++++}-{4:4}, at: wg_noise_handshake_create_initiation+0xec/0x610 drivers/net/wireguard/noise.c:529
 #3: ffff8880645ac890 (&handshake->lock){++++}-{4:4}, at: wg_noise_handshake_create_initiation+0x100/0x610 drivers/net/wireguard/noise.c:530
3 locks held by kworker/u8:5/132:
2 locks held by kworker/0:2/796:
1 lock held by kworker/R-bond0/1077:
 #0: ffffffff8e694d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x27/0x420 kernel/workqueue.c:2691
4 locks held by kworker/u8:7/1105:
 #0: ffff88801c6ae948 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x1310/0x19a0 kernel/workqueue.c:3251
 #1: ffffc90004c6fd08 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x988/0x19a0 kernel/workqueue.c:3252
 #2: ffffffff905fb9d0 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0xb8/0x920 net/core/net_namespace.c:675
 #3: ffffffff90614228 (rtnl_mutex){+.+.}-{4:4}, at: cangw_pernet_exit_batch+0x15/0xa0 net/can/gw.c:1294
3 locks held by kworker/u8:9/2991:
3 locks held by kworker/u8:10/3009:
3 locks held by kworker/R-ipv6_/3186:
 #0: ffff88803399d948 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work+0x1310/0x19a0 kernel/workqueue.c:3251
 #1: ffffc9000f51fc70 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_one_work+0x988/0x19a0 kernel/workqueue.c:3252
 #2: ffffffff90614228 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #2: ffffffff90614228 (rtnl_mutex){+.+.}-{4:4}, at: addrconf_verify_work+0x12/0x30 net/ipv6/addrconf.c:4738
2 locks held by kworker/R-bat_e/3411:
1 lock held by kworker/R-ext4-/5158:
1 lock held by klogd/5184:
2 locks held by udevd/5195:
1 lock held by dhcpcd/5489:
4 locks held by dhcpcd/5490:
1 lock held by crond/5568:
2 locks held by getty/5584:
 #0: ffff8880387740a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x24/0x80 drivers/tty/tty_ldisc.c:243
 #1: ffffc9000332b2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x419/0x1500 drivers/tty/n_tty.c:2211
1 lock held by syz-executor/5812:
1 lock held by syz-executor/5824:
3 locks held by kworker/0:3/5830:
 #0: ffff888027c39548 ((wq_completion)wg-kex-wg2#4){+.+.}-{0:0}, at: process_one_work+0x1310/0x19a0 kernel/workqueue.c:3251
 #1: ffffc90003b07d08 ((work_completion)(&({ do { const void __seg_gs *__vpp_verify = (typeof((worker) + 0))((void *)0); (void)__vpp_verify; } while (0); ({ unsigned long __ptr; __asm__ ("" : "=r"(__ptr) : "0"((__typeof_unqual__(*((worker))) *)(( unsigned long)((worker))))); (typeof((__typeof_unqual__(*((worker))) *)(( unsigned long)((worker))))) (__ptr + (((__per_cpu_offset[(cpu)])))); }); })->work)){+.+.}-{0:0}, at: process_one_work+0x988/0x19a0 kernel/workqueue.c:3252
 #2: ffff88802dbbd278 (&handshake->lock){++++}-{4:4}, at: wg_noise_handshake_begin_session+0x30/0xe40 drivers/net/wireguard/noise.c:822
1 lock held by kworker/R-wg-cr/5853:
 #0: ffffffff8e694d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2749 [inline]
 #0: ffffffff8e694d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0xb7b/0x14a0 kernel/workqueue.c:3611
1 lock held by kworker/R-wg-cr/5854:
 #0: ffffffff8e694d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2749 [inline]
 #0: ffffffff8e694d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0xb7b/0x14a0 kernel/workqueue.c:3611
1 lock held by kworker/R-wg-cr/5856:
 #0: ffffffff8e694d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2749 [inline]
 #0: ffffffff8e694d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0xb7b/0x14a0 kernel/workqueue.c:3611
1 lock held by kworker/R-wg-cr/5857:
 #0: ffffffff8e694d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x27/0x420 kernel/workqueue.c:2691
1 lock held by kworker/R-wg-cr/5858:
 #0: ffffffff8e694d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x27/0x420 kernel/workqueue.c:2691
1 lock held by kworker/R-wg-cr/5859:
 #0: ffffffff8e694d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2749 [inline]
 #0: ffffffff8e694d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0xb7b/0x14a0 kernel/workqueue.c:3611
1 lock held by kworker/R-wg-cr/5860:
1 lock held by kworker/R-wg-cr/5861:
 #0: ffffffff8e694d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2749 [inline]
 #0: ffffffff8e694d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0xb7b/0x14a0 kernel/workqueue.c:3611
1 lock held by kworker/R-wg-cr/5862:
1 lock held by kworker/R-wg-cr/5863:
 #0: ffffffff8e694d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2749 [inline]
 #0: ffffffff8e694d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0xb7b/0x14a0 kernel/workqueue.c:3611
1 lock held by kworker/R-wg-cr/5864:
 #0: ffffffff8e694d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2749 [inline]
 #0: ffffffff8e694d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0xb7b/0x14a0 kernel/workqueue.c:3611
3 locks held by kworker/0:4/5883:
3 locks held by kworker/0:5/5913:
3 locks held by kworker/1:6/5929:
3 locks held by kworker/1:7/5936:
2 locks held by syz.0.631/9311:
3 locks held by syz.3.676/9543:
4 locks held by syz.0.680/9556:
1 lock held by syz.0.680/9557:
6 locks held by syz.2.681/9563:
2 locks held by syz.3.679/9561:
3 locks held by kworker/u8:6/9575:
 #0: ffff88813fea4148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x1310/0x19a0 kernel/workqueue.c:3251
 #1: ffffc900043f7d08 ((work_completion)(&pool->idle_cull_work)){+.+.}-{0:0}, at: process_one_work+0x988/0x19a0 kernel/workqueue.c:3252
 #2: ffffffff8e694d48 (wq_pool_attach_mutex){+.+.}-{4:4}, at: idle_cull_fn+0x99/0x450 kernel/workqueue.c:2973
4 locks held by kworker/u8:8/9576:
4 locks held by kworker/u8:11/9577:
1 lock held by kworker/u8:12/9578:

=============================================

NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 31 Comm: khungtaskd Tainted: G             L      syzkaller #0 PREEMPT(full) 
Tainted: [L]=SOFTLOCKUP
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2026
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x100/0x190 lib/dump_stack.c:120
 nmi_cpu_backtrace.cold+0x12d/0x151 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x1d7/0x230 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:161 [inline]
 __sys_info lib/sys_info.c:157 [inline]
 sys_info+0x141/0x190 lib/sys_info.c:165
 check_hung_uninterruptible_tasks kernel/hung_task.c:346 [inline]
 watchdog+0xd25/0x1050 kernel/hung_task.c:515
 kthread+0x370/0x450 kernel/kthread.c:436
 ret_from_fork+0x754/0xd80 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 3411 Comm: kworker/R-bat_e Tainted: G             L      syzkaller #0 PREEMPT(full) 
Tainted: [L]=SOFTLOCKUP
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2026
Workqueue: bat_events batadv_tt_purge
RIP: 0010:rcu_read_lock_held_common kernel/rcu/update.c:105 [inline]
RIP: 0010:rcu_read_lock_held+0x9/0x50 kernel/rcu/update.c:349
Code: 08 e9 65 fc ff ff 66 66 2e 0f 1f 84 00 00 00 00 00 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 f3 0f 1e fa e8 07 6f a1 09 <ba> 01 00 00 00 85 c0 75 07 89 d0 c3 cc cc cc cc e8 d2 c8 00 00 84
RSP: 0018:ffffc90000a07f48 EFLAGS: 00000246
RAX: 0000000000000001 RBX: ffff888087b3e780 RCX: ffffffff8a5af642
RDX: ffff888034a7bd00 RSI: ffffffff8a5af827 RDI: ffff888034a7bd00
RBP: 1ffff92000140fec R08: 0000000000000005 R09: 0000000000000000
R10: 0000000000000001 R11: 0000000000000000 R12: ffff888035d38600
R13: ffff888037330000 R14: 0000000000000000 R15: 0000000000000001
FS:  0000000000000000(0000) GS:ffff88812444a000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007ffd7569586c CR3: 000000001eb00000 CR4: 00000000003526f0
Call Trace:
 <IRQ>
 nf_hook include/linux/netfilter.h:259 [inline]
 NF_HOOK include/linux/netfilter.h:316 [inline]
 br_forward_finish+0x3ec/0x4d0 net/bridge/br_forward.c:66
 br_nf_hook_thresh+0x30d/0x420 net/bridge/br_netfilter_hooks.c:1167
 br_nf_forward_finish+0x693/0xb30 net/bridge/br_netfilter_hooks.c:662
 NF_HOOK include/linux/netfilter.h:318 [inline]
 NF_HOOK include/linux/netfilter.h:312 [inline]
 br_nf_forward_ip.part.0+0x61e/0x820 net/bridge/br_netfilter_hooks.c:716
 br_nf_forward_ip net/bridge/br_netfilter_hooks.c:676 [inline]
 br_nf_forward+0xfe5/0x19f0 net/bridge/br_netfilter_hooks.c:773
 nf_hook_entry_hookfn include/linux/netfilter.h:158 [inline]
 nf_hook_slow+0xbf/0x220 net/netfilter/core.c:623
 nf_hook include/linux/netfilter.h:273 [inline]
 NF_HOOK include/linux/netfilter.h:316 [inline]
 __br_forward+0x2f6/0x970 net/bridge/br_forward.c:115
 deliver_clone net/bridge/br_forward.c:131 [inline]
 maybe_deliver+0xf0/0x180 net/bridge/br_forward.c:191
 br_flood+0x193/0x650 net/bridge/br_forward.c:238
 br_handle_frame_finish+0xff4/0x1f60 net/bridge/br_input.c:229
 br_nf_hook_thresh+0x30d/0x420 net/bridge/br_netfilter_hooks.c:1167
 br_nf_pre_routing_finish_ipv6+0x769/0xfb0 net/bridge/br_netfilter_ipv6.c:154
 NF_HOOK include/linux/netfilter.h:318 [inline]
 br_nf_pre_routing_ipv6+0x39c/0x8b0 net/bridge/br_netfilter_ipv6.c:184
 br_nf_pre_routing+0x90d/0x1550 net/bridge/br_netfilter_hooks.c:508
 nf_hook_entry_hookfn include/linux/netfilter.h:158 [inline]
 nf_hook_bridge_pre net/bridge/br_input.c:291 [inline]
 br_handle_frame+0xcdd/0x1520 net/bridge/br_input.c:442
 __netif_receive_skb_core.constprop.0+0x6c5/0x3550 net/core/dev.c:6051
 __netif_receive_skb_one_core+0xb0/0x1e0 net/core/dev.c:6162
 __netif_receive_skb+0x1f/0x120 net/core/dev.c:6277
 process_backlog+0x37a/0x1580 net/core/dev.c:6628
 __napi_poll.constprop.0+0xaf/0x450 net/core/dev.c:7692
 napi_poll net/core/dev.c:7755 [inline]
 net_rx_action+0xa40/0xf20 net/core/dev.c:7912
 handle_softirqs+0x1eb/0x9e0 kernel/softirq.c:622
 do_softirq kernel/softirq.c:523 [inline]
 do_softirq+0xac/0xe0 kernel/softirq.c:510
 </IRQ>
 <TASK>
 __local_bh_enable_ip+0xf8/0x120 kernel/softirq.c:450
 spin_unlock_bh include/linux/spinlock.h:395 [inline]
 batadv_tt_global_purge net/batman-adv/translation-table.c:2250 [inline]
 batadv_tt_purge+0x25d/0xbd0 net/batman-adv/translation-table.c:3510
 process_one_work+0xa23/0x19a0 kernel/workqueue.c:3276
 process_scheduled_works kernel/workqueue.c:3359 [inline]
 rescuer_thread+0x905/0x14a0 kernel/workqueue.c:3583
 kthread+0x370/0x450 kernel/kthread.c:436
 ret_from_fork+0x754/0xd80 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>

Crashes (4):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2026/03/14 10:49 | upstream | 1c9982b49613 | ee8d34d6 | .config | console log | report | | | info | disk image, vmlinux, kernel image | ci-qemu-gce-upstream-auto | INFO: task hung in cangw_pernet_exit_batch
2026/03/07 10:40 | net | 55f854dd5bdd | 5cb44a80 | .config | console log | report | | | info | disk image, vmlinux, kernel image | ci-upstream-net-this-kasan-gce | INFO: task hung in cangw_pernet_exit_batch
2025/12/14 19:59 | net-next | 8f7aa3d3c732 | d6526ea3 | .config | console log | report | syz / log | | | disk image, vmlinux, kernel image | ci-upstream-net-kasan-gce | INFO: task hung in cangw_pernet_exit_batch
2025/12/07 17:24 | net-next | 8f7aa3d3c732 | d6526ea3 | .config | console log | report | | | info | disk image, vmlinux, kernel image | ci-upstream-net-kasan-gce | INFO: task hung in cangw_pernet_exit_batch
* Struck through repros no longer work on HEAD.