syzbot


INFO: task hung in raw_release (4)

Status: auto-obsoleted due to no activity on 2026/04/08 17:05
Subsystems: can
First crash: 192d, last: 98d
Similar bugs (7)
Kernel Title Rank Repro Cause bisect Fix bisect Count Last Reported Patched Status
linux-4.19 INFO: task hung in raw_release 1 1 2144d 2144d 0/1 auto-closed as invalid on 2020/09/30 10:18
upstream INFO: task hung in raw_release can 1 1388 1753d 2291d 0/29 closed as dup on 2021/06/26 09:46
upstream INFO: task hung in raw_release (2) can 1 47 628d 674d 0/29 closed as invalid on 2024/08/26 16:08
linux-5.15 INFO: task hung in raw_release 1 1 680d 680d 0/3 auto-obsoleted due to no activity on 2024/09/13 05:01
linux-6.1 INFO: task hung in raw_release 1 3 677d 692d 0/3 auto-obsoleted due to no activity on 2024/09/16 15:07
linux-4.19 INFO: task hung in raw_release (2) 1 syz error 4 1829d 1934d 0/1 upstream: reported syz repro on 2020/12/29 11:35
upstream INFO: task hung in raw_release (3) can 1 13 451d 564d 0/29 auto-obsoleted due to no activity on 2025/04/20 21:39

Sample crash report:
INFO: task syz.0.3730:21524 blocked for more than 143 seconds.
      Tainted: G     U              syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.0.3730      state:D stack:26408 pid:21524 tgid:21523 ppid:15013  task_flags:0x400140 flags:0x00080003
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5325 [inline]
 __schedule+0x1190/0x5de0 kernel/sched/core.c:6929
 __schedule_loop kernel/sched/core.c:7011 [inline]
 schedule+0xe7/0x3a0 kernel/sched/core.c:7026
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:7083
 __mutex_lock_common kernel/locking/mutex.c:676 [inline]
 __mutex_lock+0x818/0x1060 kernel/locking/mutex.c:760
 raw_release+0x1f3/0xc00 net/can/raw.c:418
 __sock_release+0xb3/0x270 net/socket.c:662
 sock_close+0x1c/0x30 net/socket.c:1455
 __fput+0x402/0xb70 fs/file_table.c:468
 task_work_run+0x150/0x240 kernel/task_work.c:227
 get_signal+0x1d0/0x26d0 kernel/signal.c:2807
 arch_do_signal_or_restart+0x8f/0x790 arch/x86/kernel/signal.c:337
 exit_to_user_mode_loop+0x85/0x130 kernel/entry/common.c:40
 exit_to_user_mode_prepare include/linux/irq-entry-common.h:225 [inline]
 syscall_exit_to_user_mode_work include/linux/entry-common.h:175 [inline]
 syscall_exit_to_user_mode include/linux/entry-common.h:210 [inline]
 do_syscall_64+0x426/0xfa0 arch/x86/entry/syscall_64.c:100
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f0aaa58f749
RSP: 002b:00007f0aab426038 EFLAGS: 00000246 ORIG_RAX: 0000000000000017
RAX: 0000000000000003 RBX: 00007f0aaa7e5fa0 RCX: 00007f0aaa58f749
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000006
RBP: 00007f0aaa613f91 R08: 0000000000000000 R09: 0000000000000000
R10: 00002000000002c0 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f0aaa7e6038 R14: 00007f0aaa7e5fa0 R15: 00007ffe36daba28
 </TASK>

Showing all locks held in the system:
3 locks held by kworker/0:1/10:
 #0: ffff88813ff16948 ((wq_completion)events_power_efficient){+.+.}-{0:0}, at: process_one_work+0x12a2/0x1b70 kernel/workqueue.c:3238
 #1: ffffc900000f7d00 ((reg_check_chans).work){+.+.}-{0:0}, at: process_one_work+0x929/0x1b70 kernel/workqueue.c:3239
 #2: ffffffff900eb408 (rtnl_mutex){+.+.}-{4:4}, at: reg_check_chans_work+0x91/0x11f0 net/wireless/reg.c:2453
1 lock held by khungtaskd/31:
 #0: ffffffff8e3c45e0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8e3c45e0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:867 [inline]
 #0: ffffffff8e3c45e0 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x36/0x1c0 kernel/locking/lockdep.c:6775
4 locks held by kworker/1:2/939:
 #0: ffff88805b2fe148 ((wq_completion)wg-kex-wg1#10){+.+.}-{0:0}, at: process_one_work+0x12a2/0x1b70 kernel/workqueue.c:3238
 #1: ffffc90003ac7d00 ((work_completion)(&({ do { const void *__vpp_verify = (typeof((worker) + 0))((void *)0); (void)__vpp_verify; } while (0); ({ unsigned long __ptr; __asm__ ("" : "=r"(__ptr) : "0"((__typeof__(*((worker))) *)(( unsigned long)((worker))))); (typeof((__typeof__(*((worker))) *)(( unsigned long)((worker))))) (__ptr + (((__per_cpu_offset[(cpu)])))); }); })->work)){+.+.}-{0:0}, at: process_one_work+0x929/0x1b70 kernel/workqueue.c:3239
 #2: ffff88805c235308 (&wg->static_identity.lock){++++}-{4:4}, at: wg_noise_handshake_consume_initiation+0x1c2/0x880 drivers/net/wireguard/noise.c:598
 #3: ffff88805c24dc60 (&handshake->lock){++++}-{4:4}, at: wg_noise_handshake_consume_initiation+0x5ac/0x880 drivers/net/wireguard/noise.c:632
3 locks held by kworker/0:4/5885:
4 locks held by kworker/1:4/5899:
 #0: ffff88807e303d48 ((wq_completion)wg-kex-wg2#10){+.+.}-{0:0}, at: process_one_work+0x12a2/0x1b70 kernel/workqueue.c:3238
 #1: ffffc900049c7d00 ((work_completion)(&({ do { const void *__vpp_verify = (typeof((worker) + 0))((void *)0); (void)__vpp_verify; } while (0); ({ unsigned long __ptr; __asm__ ("" : "=r"(__ptr) : "0"((__typeof__(*((worker))) *)(( unsigned long)((worker))))); (typeof((__typeof__(*((worker))) *)(( unsigned long)((worker))))) (__ptr + (((__per_cpu_offset[(cpu)])))); }); })->work)){+.+.}-{0:0}, at: process_one_work+0x929/0x1b70 kernel/workqueue.c:3239
 #2: ffff888029151308 (&wg->static_identity.lock){++++}-{4:4}, at: wg_noise_handshake_consume_initiation+0x1c2/0x880 drivers/net/wireguard/noise.c:598
 #3: ffff88805c24f030 (&handshake->lock){++++}-{4:4}, at: wg_noise_handshake_consume_initiation+0x5ac/0x880 drivers/net/wireguard/noise.c:632
3 locks held by kworker/0:5/5939:
 #0: ffff88813ff15948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x12a2/0x1b70 kernel/workqueue.c:3238
 #1: ffffc9000ab4fd00 (deferred_process_work){+.+.}-{0:0}, at: process_one_work+0x929/0x1b70 kernel/workqueue.c:3239
 #2: ffffffff900eb408 (rtnl_mutex){+.+.}-{4:4}, at: switchdev_deferred_process_work+0xe/0x20 net/switchdev/switchdev.c:104
2 locks held by kworker/u10:0/8553:
3 locks held by kworker/u10:2/8562:
3 locks held by kworker/u10:3/8564:
3 locks held by kworker/u10:5/8667:
4 locks held by kworker/u10:7/11872:
5 locks held by kworker/u10:8/12077:
4 locks held by kworker/u10:13/12084:
3 locks held by kworker/u10:14/12085:
3 locks held by kworker/u10:15/12086:
 #0: ffff888030321948 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work+0x12a2/0x1b70 kernel/workqueue.c:3238
 #1: ffffc90004557d00 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_one_work+0x929/0x1b70 kernel/workqueue.c:3239
 #2: ffffffff900eb408 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #2: ffffffff900eb408 (rtnl_mutex){+.+.}-{4:4}, at: addrconf_verify_work+0x12/0x30 net/ipv6/addrconf.c:4734
4 locks held by kworker/u10:20/12094:
4 locks held by kworker/u10:21/12096:
3 locks held by kworker/u10:24/12446:
4 locks held by kworker/u10:26/12448:
3 locks held by kworker/u10:32/12456:
4 locks held by kworker/u10:37/12463:
6 locks held by kworker/u10:38/12464:
 #0: ffff88801c2dc948 ((wq_completion)writeback){+.+.}-{0:0}, at: process_one_work+0x12a2/0x1b70 kernel/workqueue.c:3238
 #1: ffffc90004a47d00 ((work_completion)(&(&wb->dwork)->work)){+.+.}-{0:0}, at: process_one_work+0x929/0x1b70 kernel/workqueue.c:3239
 #2: ffff8880346640e0 (&type->s_umount_key#32){++++}-{4:4}, at: super_trylock_shared+0x1e/0xf0 fs/super.c:562
 #3: ffff888034666b98 (&sbi->s_writepages_rwsem){++++}-{0:0}, at: do_writepages+0x27a/0x600 mm/page-writeback.c:2604
 #4: ffff88814dc10950 (jbd2_handle){++++}-{0:0}, at: start_this_handle+0x5e4/0x1410 fs/jbd2/transaction.c:444
 #5: ffff888076db3240 (&ei->i_data_sem){++++}-{4:4}, at: ext4_map_blocks+0x46f/0x1400 fs/ext4/inode.c:810
3 locks held by kworker/u10:39/12465:
4 locks held by kworker/u10:41/14015:
5 locks held by kworker/u10:42/14016:
4 locks held by kworker/u10:44/14018:
3 locks held by kworker/u10:47/14021:
3 locks held by kworker/u10:50/14060:
4 locks held by kworker/u10:51/14061:
3 locks held by kworker/u10:61/14071:
4 locks held by kworker/u10:63/14073:
1 lock held by syz.4.2784/17590:
 #0: ffffffff900eb408 (rtnl_mutex){+.+.}-{4:4}, at: tun_detach drivers/net/tun.c:634 [inline]
 #0: ffffffff900eb408 (rtnl_mutex){+.+.}-{4:4}, at: tun_chr_close+0x38/0x230 drivers/net/tun.c:3436
2 locks held by syz.0.3730/21524:
 #0: ffff888076dc5288 (&sb->s_type->i_mutex_key#11){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:980 [inline]
 #0: ffff888076dc5288 (&sb->s_type->i_mutex_key#11){+.+.}-{4:4}, at: __sock_release+0x86/0x270 net/socket.c:661
 #1: ffffffff900eb408 (rtnl_mutex){+.+.}-{4:4}, at: raw_release+0x1f3/0xc00 net/can/raw.c:418
1 lock held by syz.6.3731/21528:
 #0: ffffffff900eb408 (rtnl_mutex){+.+.}-{4:4}, at: tun_detach drivers/net/tun.c:634 [inline]
 #0: ffffffff900eb408 (rtnl_mutex){+.+.}-{4:4}, at: tun_chr_close+0x38/0x230 drivers/net/tun.c:3436
2 locks held by syz.5.3732/21535:
 #0: ffffffff900d4fd0 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x2d6/0x690 net/core/net_namespace.c:576
 #1: ffffffff900eb408 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #1: ffffffff900eb408 (rtnl_mutex){+.+.}-{4:4}, at: register_netdevice_notifier_net+0x23/0xb0 net/core/dev.c:2082
1 lock held by syz-executor/21582:
 #0: ffffffff900eb408 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff900eb408 (rtnl_mutex){+.+.}-{4:4}, at: inet_rtm_newaddr+0x30c/0x1540 net/ipv4/devinet.c:978
1 lock held by syz-executor/21584:
 #0: ffffffff900eb408 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff900eb408 (rtnl_mutex){+.+.}-{4:4}, at: inet_rtm_newaddr+0x30c/0x1540 net/ipv4/devinet.c:978
1 lock held by kworker/0:2/21587:
1 lock held by syz-executor/21591:
 #0: ffffffff900eb408 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff900eb408 (rtnl_mutex){+.+.}-{4:4}, at: inet_rtm_newaddr+0x30c/0x1540 net/ipv4/devinet.c:978
1 lock held by syz-executor/21594:
 #0: ffffffff900eb408 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff900eb408 (rtnl_mutex){+.+.}-{4:4}, at: inet_rtm_newaddr+0x30c/0x1540 net/ipv4/devinet.c:978
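The listing above shows many tasks queued on the same lock. One quick way to see which lock dominates a lockdep "Showing all locks held" dump is to tally lock names per entry. A minimal sketch, assuming the dump has been saved to a string (the `dump` excerpt below is a hypothetical stand-in for the full console log):

```python
import re
from collections import Counter

# Hypothetical excerpt of a "Showing all locks held" dump; in practice
# this would be read from the saved console log file.
dump = """\
2 locks held by syz.0.3730/21524:
 #0: ffff888076dc5288 (&sb->s_type->i_mutex_key#11){+.+.}-{4:4}, at: __sock_release+0x86/0x270 net/socket.c:661
 #1: ffffffff900eb408 (rtnl_mutex){+.+.}-{4:4}, at: raw_release+0x1f3/0xc00 net/can/raw.c:418
1 lock held by syz.6.3731/21528:
 #0: ffffffff900eb408 (rtnl_mutex){+.+.}-{4:4}, at: tun_chr_close+0x38/0x230 drivers/net/tun.c:3436
"""

# Each held-lock line looks like " #N: <address> (<lock name>)...";
# capture the lock name and count occurrences across all tasks.
counts = Counter()
for line in dump.splitlines():
    m = re.search(r'#\d+: \S+ \(([^)]+)\)', line)
    if m:
        counts[m.group(1)] += 1

print(counts.most_common(1)[0])  # → ('rtnl_mutex', 2)
```

Run against the full dump, the same tally makes the rtnl_mutex pile-up in this report obvious at a glance.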

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 31 Comm: khungtaskd Tainted: G     U              syzkaller #0 PREEMPT(full) 
Tainted: [U]=USER
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x116/0x1f0 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x27b/0x390 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x29c/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:332 [inline]
 watchdog+0xf3f/0x1170 kernel/hung_task.c:495
 kthread+0x3c5/0x780 kernel/kthread.c:463
 ret_from_fork+0x675/0x7d0 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 12085 Comm: kworker/u10:14 Tainted: G     U              syzkaller #0 PREEMPT(full) 
Tainted: [U]=USER
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
Workqueue: wg-kex-wg0 wg_packet_handshake_send_worker
RIP: 0010:__sanitizer_cov_trace_pc+0x0/0x70 kernel/kcov.c:210
Code: 46 81 56 00 48 89 df 5b e9 ed 2e 5c 00 be 03 00 00 00 5b e9 c2 93 e5 02 66 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 <f3> 0f 1e fa 48 8b 34 24 65 48 8b 15 78 d8 e4 11 65 8b 05 89 d8 e4
RSP: 0018:ffffc90000006a30 EFLAGS: 00000246
RAX: 0000000000000000 RBX: ffff88809661e640 RCX: ffffffff893160da
RDX: ffff888059be9e40 RSI: ffffffff8931524d RDI: 0000000000000005
RBP: 0000000000000001 R08: 0000000000000005 R09: 0000000000000000
R10: 0000000000000001 R11: 0000000000000001 R12: ffff88809661e6d0
R13: 0000000000000000 R14: ffff88809661e698 R15: ffff888033296000
FS:  0000000000000000(0000) GS:ffff888124a0d000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 0000001b2fa00ff8 CR3: 000000000e182000 CR4: 00000000003526f0
Call Trace:
 <IRQ>
 skb_end_pointer include/linux/skbuff.h:1724 [inline]
 qdisc_pkt_len_init net/core/dev.c:4066 [inline]
 __dev_queue_xmit+0x302/0x4490 net/core/dev.c:4692
 dev_queue_xmit include/linux/netdevice.h:3365 [inline]
 br_dev_queue_push_xmit+0x272/0x8a0 net/bridge/br_forward.c:53
 br_nf_dev_queue_xmit+0x6e0/0x2b20 net/bridge/br_netfilter_hooks.c:920
 NF_HOOK include/linux/netfilter.h:318 [inline]
 NF_HOOK include/linux/netfilter.h:312 [inline]
 br_nf_post_routing+0x8e7/0x1190 net/bridge/br_netfilter_hooks.c:966
 nf_hook_entry_hookfn include/linux/netfilter.h:158 [inline]
 nf_hook_slow+0xbe/0x200 net/netfilter/core.c:623
 nf_hook+0x45e/0x780 include/linux/netfilter.h:273
 NF_HOOK include/linux/netfilter.h:316 [inline]
 br_forward_finish+0xcd/0x130 net/bridge/br_forward.c:66
 br_nf_hook_thresh+0x307/0x410 net/bridge/br_netfilter_hooks.c:1167
 br_nf_forward_finish+0x66a/0xba0 net/bridge/br_netfilter_hooks.c:662
 NF_HOOK include/linux/netfilter.h:318 [inline]
 NF_HOOK include/linux/netfilter.h:312 [inline]
 br_nf_forward_ip.part.0+0x609/0x810 net/bridge/br_netfilter_hooks.c:716
 br_nf_forward_ip net/bridge/br_netfilter_hooks.c:676 [inline]
 br_nf_forward+0xf0f/0x1be0 net/bridge/br_netfilter_hooks.c:773
 nf_hook_entry_hookfn include/linux/netfilter.h:158 [inline]
 nf_hook_slow+0xbe/0x200 net/netfilter/core.c:623
 nf_hook+0x45e/0x780 include/linux/netfilter.h:273
 NF_HOOK include/linux/netfilter.h:316 [inline]
 __br_forward+0x1be/0x5b0 net/bridge/br_forward.c:115
 deliver_clone net/bridge/br_forward.c:131 [inline]
 br_flood+0x39c/0x650 net/bridge/br_forward.c:250
 br_handle_frame_finish+0x1117/0x1f00 net/bridge/br_input.c:229
 br_nf_hook_thresh+0x307/0x410 net/bridge/br_netfilter_hooks.c:1167
 br_nf_pre_routing_finish_ipv6+0x76a/0xfc0 net/bridge/br_netfilter_ipv6.c:154
 NF_HOOK include/linux/netfilter.h:318 [inline]
 br_nf_pre_routing_ipv6+0x3cd/0x8c0 net/bridge/br_netfilter_ipv6.c:184
 br_nf_pre_routing+0x860/0x15b0 net/bridge/br_netfilter_hooks.c:508
 nf_hook_entry_hookfn include/linux/netfilter.h:158 [inline]
 nf_hook_bridge_pre net/bridge/br_input.c:291 [inline]
 br_handle_frame+0xb28/0x14e0 net/bridge/br_input.c:442
 __netif_receive_skb_core.constprop.0+0xa25/0x4bd0 net/core/dev.c:5966
 __netif_receive_skb_one_core+0xb0/0x1e0 net/core/dev.c:6077
 __netif_receive_skb+0x1d/0x160 net/core/dev.c:6192
 process_backlog+0x439/0x15e0 net/core/dev.c:6544
 __napi_poll.constprop.0+0xba/0x550 net/core/dev.c:7594
 napi_poll net/core/dev.c:7657 [inline]
 net_rx_action+0x97f/0xef0 net/core/dev.c:7784
 handle_softirqs+0x219/0x8e0 kernel/softirq.c:622
 do_softirq kernel/softirq.c:523 [inline]
 do_softirq+0xb2/0xf0 kernel/softirq.c:510
 </IRQ>
 <TASK>
 __local_bh_enable_ip+0x100/0x120 kernel/softirq.c:450
 local_bh_enable include/linux/bottom_half.h:33 [inline]
 fpregs_unlock arch/x86/include/asm/fpu/api.h:77 [inline]
 kernel_fpu_end+0x5e/0x70 arch/x86/kernel/fpu/core.c:479
 blake2s_compress+0x7b/0xe0 lib/crypto/x86/blake2s.h:44
 blake2s_final+0xc9/0x150 lib/crypto/blake2s.c:148
 hmac.constprop.0+0x252/0x420 drivers/net/wireguard/noise.c:325
 kdf.constprop.0+0x1a1/0x280 drivers/net/wireguard/noise.c:375
 mix_dh+0xd2/0x130 drivers/net/wireguard/noise.c:413
 wg_noise_handshake_create_initiation+0x337/0x610 drivers/net/wireguard/noise.c:550
 wg_packet_send_handshake_initiation+0x19a/0x360 drivers/net/wireguard/send.c:34
 wg_packet_handshake_send_worker+0x1c/0x30 drivers/net/wireguard/send.c:51
 process_one_work+0x9cf/0x1b70 kernel/workqueue.c:3263
 process_scheduled_works kernel/workqueue.c:3346 [inline]
 worker_thread+0x6c8/0xf10 kernel/workqueue.c:3427
 kthread+0x3c5/0x780 kernel/kthread.c:463
 ret_from_fork+0x675/0x7d0 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
net_ratelimit: 10312 callbacks suppressed
bridge0: received packet on veth0_to_bridge with own address as source address (addr:3e:69:3c:a8:e2:db, vlan:0)
bridge0: received packet on veth0_to_bridge with own address as source address (addr:3e:69:3c:a8:e2:db, vlan:0)
[identical bridge0 message repeated]
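
The hung task is not deadlocked in the classic ABBA sense: per the lock listing, syz.0.3730 holds the socket inode lock and is parked in D state inside raw_release() waiting for rtnl_mutex, which other tasks hold or are queued for, until the 143-second watchdog fires. The pattern can be sketched in userspace with an ordinary lock (illustrative only; names like `rtnl` are stand-ins, not kernel code):

```python
import threading
import time

rtnl = threading.Lock()          # stands in for rtnl_mutex
events = []

def slow_holder():
    # stands in for whichever task holds rtnl_mutex for a long time
    with rtnl:
        events.append("holder: got lock")
        time.sleep(0.5)          # "blocked for more than 143 seconds", scaled down

def raw_release_sketch():
    # stands in for raw_release() taking rtnl_mutex: blocks until the holder is done
    with rtnl:
        events.append("release: got lock")

a = threading.Thread(target=slow_holder)
b = threading.Thread(target=raw_release_sketch)
a.start()
time.sleep(0.1)                  # ensure the holder wins the race for the lock
b.start()
a.join(); b.join()
print(events)                    # → ['holder: got lock', 'release: got lock']
```

The second thread makes no progress until the first drops the lock; in the kernel the same wait, stretched past the hung_task timeout, produces exactly the report above.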

Crashes (4):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2025/11/26 18:59 upstream 30f09200cc4a c116feb4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in raw_release
2025/10/06 23:26 upstream fd94619c4336 91305dbe .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in raw_release
2026/01/08 16:59 net-next 957346a6877b d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: task hung in raw_release
2026/01/06 05:41 net-next 32291cb0369a d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-kasan-gce INFO: task hung in raw_release
* Struck through repros no longer work on HEAD.