syzbot


INFO: task hung in dev_map_free (3)

Status: upstream: reported on 2025/07/13 14:25
Subsystems: bpf net
Reported-by: syzbot+9bb2e1829da8582dcffa@syzkaller.appspotmail.com
First crash: 230d, last: 27d
Discussions (5)
Title | Replies (including bot) | Last reply
[syzbot] Monthly bpf report (Jan 2026) | 0 (1) | 2026/01/07 07:29
[syzbot] Monthly bpf report (Nov 2025) | 0 (1) | 2025/11/05 08:25
[syzbot] Monthly bpf report (Oct 2025) | 0 (1) | 2025/10/06 17:29
[syzbot] Monthly bpf report (Sep 2025) | 0 (1) | 2025/09/03 12:45
[syzbot] [bpf?] [net?] INFO: task hung in dev_map_free (3) | 0 (1) | 2025/07/13 14:25
Similar bugs (3)
Kernel | Title | Rank | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
upstream | INFO: task hung in dev_map_free (2) bpf net | 1 | - | - | - | 29 | 288d | 335d | 0/29 | auto-obsoleted due to no activity on 2025/06/04 11:08
upstream | INFO: task hung in dev_map_free net bpf | 1 | - | - | - | 14 | 465d | 482d | 0/29 | auto-obsoleted due to no activity on 2024/12/09 01:13
linux-5.15 | INFO: task hung in dev_map_free | 1 | - | - | - | 1 | 529d | 529d | 0/3 | auto-obsoleted due to no activity on 2024/11/25 05:23

Sample crash report:
INFO: task kworker/u8:9:994 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:kworker/u8:9    state:D stack:20568 pid:994   tgid:994   ppid:2      task_flags:0x4208060 flags:0x00080000
Workqueue: events_unbound bpf_map_free_deferred
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5256 [inline]
 __schedule+0x145f/0x5070 kernel/sched/core.c:6863
 __schedule_loop kernel/sched/core.c:6945 [inline]
 rt_mutex_schedule+0x77/0xf0 kernel/sched/core.c:7241
 rt_mutex_slowlock_block+0x5ba/0x6d0 kernel/locking/rtmutex.c:1647
 __rt_mutex_slowlock kernel/locking/rtmutex.c:1721 [inline]
 __rt_mutex_slowlock_locked kernel/locking/rtmutex.c:1760 [inline]
 rt_mutex_slowlock+0x2a8/0x6b0 kernel/locking/rtmutex.c:1800
 __rt_mutex_lock kernel/locking/rtmutex.c:1815 [inline]
 __mutex_lock_common kernel/locking/rtmutex_api.c:534 [inline]
 mutex_lock_nested+0x16a/0x1d0 kernel/locking/rtmutex_api.c:552
 rcu_barrier+0x4c/0x570 kernel/rcu/tree.c:3816
 dev_map_free+0x11f/0x6a0 kernel/bpf/devmap.c:214
 bpf_map_free+0x19b/0x3f0 kernel/bpf/syscall.c:894
 process_one_work kernel/workqueue.c:3257 [inline]
 process_scheduled_works+0xad1/0x1770 kernel/workqueue.c:3340
 worker_thread+0x8a0/0xda0 kernel/workqueue.c:3421
 kthread+0x711/0x8a0 kernel/kthread.c:463
 ret_from_fork+0x510/0xa50 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
 </TASK>

Showing all locks held in the system:
6 locks held by ksoftirqd/1/30:
1 lock held by khungtaskd/38:
 #0: ffffffff8d5ae940 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8d5ae940 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:867 [inline]
 #0: ffffffff8d5ae940 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x2e/0x180 kernel/locking/lockdep.c:6775
2 locks held by kworker/u8:2/43:
 #0: ffff88801c723138 ((wq_completion)iou_exit){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3232 [inline]
 #0: ffff88801c723138 ((wq_completion)iou_exit){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x1770 kernel/workqueue.c:3340
 #1: ffffc90000b57bc0 ((work_completion)(&ctx->exit_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3233 [inline]
 #1: ffffc90000b57bc0 ((work_completion)(&ctx->exit_work)){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x1770 kernel/workqueue.c:3340
3 locks held by kworker/u8:9/994:
 #0: ffff88813ff69938 ((wq_completion)events_unbound#2){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3232 [inline]
 #0: ffff88813ff69938 ((wq_completion)events_unbound#2){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x1770 kernel/workqueue.c:3340
 #1: ffffc900048f7bc0 ((work_completion)(&map->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3233 [inline]
 #1: ffffc900048f7bc0 ((work_completion)(&map->work)){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x1770 kernel/workqueue.c:3340
 #2: ffffffff8d5b43b0 (rcu_state.barrier_mutex){+.+.}-{4:4}, at: rcu_barrier+0x4c/0x570 kernel/rcu/tree.c:3816
2 locks held by kworker/u8:10/1378:
 #0: ffff88801c723138 ((wq_completion)iou_exit){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3232 [inline]
 #0: ffff88801c723138 ((wq_completion)iou_exit){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x1770 kernel/workqueue.c:3340
 #1: ffffc9000532fbc0 ((work_completion)(&ctx->exit_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3233 [inline]
 #1: ffffc9000532fbc0 ((work_completion)(&ctx->exit_work)){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x1770 kernel/workqueue.c:3340
6 locks held by kworker/u8:15/3533:
 #0: ffff888019ad4938 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3232 [inline]
 #0: ffff888019ad4938 ((wq_completion)netns){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x1770 kernel/workqueue.c:3340
 #1: ffffc9000ce8fbc0 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3233 [inline]
 #1: ffffc9000ce8fbc0 (net_cleanup_work){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x1770 kernel/workqueue.c:3340
 #2: ffffffff8e898720 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0xf7/0x7b0 net/core/net_namespace.c:670
 #3: ffff88804835c0d8 (&dev->mutex){....}-{4:4}, at: device_lock include/linux/device.h:895 [inline]
 #3: ffff88804835c0d8 (&dev->mutex){....}-{4:4}, at: devl_dev_lock net/devlink/devl_internal.h:108 [inline]
 #3: ffff88804835c0d8 (&dev->mutex){....}-{4:4}, at: devlink_pernet_pre_exit+0x10a/0x3d0 net/devlink/core.c:506
 #4: ffff88804835d300 (&devlink->lock_key#2){+.+.}-{4:4}, at: devl_lock net/devlink/core.c:276 [inline]
 #4: ffff88804835d300 (&devlink->lock_key#2){+.+.}-{4:4}, at: devl_dev_lock net/devlink/devl_internal.h:109 [inline]
 #4: ffff88804835d300 (&devlink->lock_key#2){+.+.}-{4:4}, at: devlink_pernet_pre_exit+0x11c/0x3d0 net/devlink/core.c:506
 #5: ffffffff8d5b43b0 (rcu_state.barrier_mutex){+.+.}-{4:4}, at: rcu_barrier+0x4c/0x570 kernel/rcu/tree.c:3816
2 locks held by kworker/u8:16/3582:
 #0: ffff88801c723138 ((wq_completion)iou_exit){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3232 [inline]
 #0: ffff88801c723138 ((wq_completion)iou_exit){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x1770 kernel/workqueue.c:3340
 #1: ffffc9000cf2fbc0 ((work_completion)(&ctx->exit_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3233 [inline]
 #1: ffffc9000cf2fbc0 ((work_completion)(&ctx->exit_work)){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x1770 kernel/workqueue.c:3340
2 locks held by getty/5559:
 #0: ffff88814dbce0a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc90003e8b2e0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x44f/0x1460 drivers/tty/n_tty.c:2211
2 locks held by kworker/u8:21/6059:
 #0: ffff88801c723138 ((wq_completion)iou_exit){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3232 [inline]
 #0: ffff88801c723138 ((wq_completion)iou_exit){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x1770 kernel/workqueue.c:3340
 #1: ffffc9000570fbc0 ((work_completion)(&ctx->exit_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3233 [inline]
 #1: ffffc9000570fbc0 ((work_completion)(&ctx->exit_work)){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x1770 kernel/workqueue.c:3340
1 lock held by syz.4.133/6465:
 #0: ffffffff8d5b43b0 (rcu_state.barrier_mutex){+.+.}-{4:4}, at: rcu_barrier+0x4c/0x570 kernel/rcu/tree.c:3816
3 locks held by kworker/u8:25/6488:
 #0: ffff88813ff69938 ((wq_completion)events_unbound#2){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3232 [inline]
 #0: ffff88813ff69938 ((wq_completion)events_unbound#2){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x1770 kernel/workqueue.c:3340
 #1: ffffc90005dffbc0 ((work_completion)(&map->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3233 [inline]
 #1: ffffc90005dffbc0 ((work_completion)(&map->work)){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x1770 kernel/workqueue.c:3340
 #2: ffffffff8d5b43b0 (rcu_state.barrier_mutex){+.+.}-{4:4}, at: rcu_barrier+0x4c/0x570 kernel/rcu/tree.c:3816
1 lock held by syz-executor/6500:
 #0: ffffffff8d5b43b0 (rcu_state.barrier_mutex){+.+.}-{4:4}, at: rcu_barrier+0x4c/0x570 kernel/rcu/tree.c:3816
1 lock held by syz-executor/6542:
 #0: ffffffff8d5b43b0 (rcu_state.barrier_mutex){+.+.}-{4:4}, at: rcu_barrier+0x4c/0x570 kernel/rcu/tree.c:3816
1 lock held by syz-executor/6545:
 #0: ffffffff8d5b43b0 (rcu_state.barrier_mutex){+.+.}-{4:4}, at: rcu_barrier+0x4c/0x570 kernel/rcu/tree.c:3816
1 lock held by syz-executor/6560:
 #0: ffffffff8d5b43b0 (rcu_state.barrier_mutex){+.+.}-{4:4}, at: rcu_barrier+0x4c/0x570 kernel/rcu/tree.c:3816
1 lock held by syz-executor/6639:
 #0: ffffffff8d5b43b0 (rcu_state.barrier_mutex){+.+.}-{4:4}, at: rcu_barrier+0x4c/0x570 kernel/rcu/tree.c:3816
1 lock held by syz-executor/6672:
 #0: ffffffff8d5b43b0 (rcu_state.barrier_mutex){+.+.}-{4:4}, at: rcu_barrier+0x4c/0x570 kernel/rcu/tree.c:3816
1 lock held by syz-executor/6675:
 #0: ffffffff8d5b43b0 (rcu_state.barrier_mutex){+.+.}-{4:4}, at: rcu_barrier+0x4c/0x570 kernel/rcu/tree.c:3816
1 lock held by syz-executor/6696:
 #0: ffffffff8d5b43b0 (rcu_state.barrier_mutex){+.+.}-{4:4}, at: rcu_barrier+0x4c/0x570 kernel/rcu/tree.c:3816
3 locks held by kworker/u8:27/6729:
 #0: ffff88813ff69938 ((wq_completion)events_unbound#2){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3232 [inline]
 #0: ffff88813ff69938 ((wq_completion)events_unbound#2){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x1770 kernel/workqueue.c:3340
 #1: ffffc90003ed7bc0 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3233 [inline]
 #1: ffffc90003ed7bc0 ((linkwatch_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x1770 kernel/workqueue.c:3340
 #2: ffffffff8e8a5838 (rtnl_mutex){+.+.}-{4:4}, at: linkwatch_event+0xe/0x60 net/core/link_watch.c:303
1 lock held by syz-executor/6774:
 #0: ffffffff8d5b43b0 (rcu_state.barrier_mutex){+.+.}-{4:4}, at: rcu_barrier+0x4c/0x570 kernel/rcu/tree.c:3816
1 lock held by syz-executor/6808:
 #0: ffffffff8d5b43b0 (rcu_state.barrier_mutex){+.+.}-{4:4}, at: rcu_barrier+0x4c/0x570 kernel/rcu/tree.c:3816
1 lock held by syz-executor/6820:
 #0: ffffffff8d5b43b0 (rcu_state.barrier_mutex){+.+.}-{4:4}, at: rcu_barrier+0x4c/0x570 kernel/rcu/tree.c:3816
3 locks held by syz-executor/6838:
5 locks held by syz.3.213/6898:
 #0: ffffffff8e90dd60 (cb_lock){++++}-{4:4}, at: genl_rcv+0x19/0x40 net/netlink/genetlink.c:1218
 #1: ffffffff8e90db58 (genl_mutex){+.+.}-{4:4}, at: genl_lock net/netlink/genetlink.c:35 [inline]
 #1: ffffffff8e90db58 (genl_mutex){+.+.}-{4:4}, at: genl_op_lock net/netlink/genetlink.c:60 [inline]
 #1: ffffffff8e90db58 (genl_mutex){+.+.}-{4:4}, at: genl_rcv_msg+0x10d/0x790 net/netlink/genetlink.c:1209
 #2: ffff888142f00a48 (&nbd->config_lock){+.+.}-{4:4}, at: nbd_genl_connect+0x93e/0x1920 drivers/block/nbd.c:2132
 #3: ffff888142bc6550 (&q->q_usage_counter(io)#52){++++}-{0:0}, at: blk_mq_freeze_queue include/linux/blk-mq.h:954 [inline]
 #3: ffff888142bc6550 (&q->q_usage_counter(io)#52){++++}-{0:0}, at: nbd_add_socket+0x312/0xbb0 drivers/block/nbd.c:1262
 #4: ffff888142bc6588 (&q->q_usage_counter(queue)#36){+.+.}-{0:0}, at: blk_mq_freeze_queue include/linux/blk-mq.h:954 [inline]
 #4: ffff888142bc6588 (&q->q_usage_counter(queue)#36){+.+.}-{0:0}, at: nbd_add_socket+0x312/0xbb0 drivers/block/nbd.c:1262
2 locks held by syz.3.213/6899:
 #0: ffffffff8e90dd60 (cb_lock){++++}-{4:4}, at: genl_rcv+0x19/0x40 net/netlink/genetlink.c:1218
 #1: ffffffff8e90db58 (genl_mutex){+.+.}-{4:4}, at: genl_lock net/netlink/genetlink.c:35 [inline]
 #1: ffffffff8e90db58 (genl_mutex){+.+.}-{4:4}, at: genl_op_lock net/netlink/genetlink.c:60 [inline]
 #1: ffffffff8e90db58 (genl_mutex){+.+.}-{4:4}, at: genl_rcv_msg+0x10d/0x790 net/netlink/genetlink.c:1209
2 locks held by syz-executor/6907:
 #0: ffffffff8edb5c48 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8edb5c48 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_read_lock include/linux/rcupdate.h:867 [inline]
 #0: ffffffff8edb5c48 (&ops->srcu#2){.+.+}-{0:0}, at: rtnl_link_ops_get+0x23/0x250 net/core/rtnetlink.c:570
 #1: ffffffff8e8a5838 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:80 [inline]
 #1: ffffffff8e8a5838 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:341 [inline]
 #1: ffffffff8e8a5838 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0x8ec/0x1c90 net/core/rtnetlink.c:4071
1 lock held by syz-executor/6920:
 #0: ffffffff8e8a5838 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff8e8a5838 (rtnl_mutex){+.+.}-{4:4}, at: inet6_rtm_newaddr+0x5b7/0xd20 net/ipv6/addrconf.c:5027
2 locks held by syz-executor/6930:
 #0: ffffffff8e898720 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x3cc/0x570 net/core/net_namespace.c:577
 #1: ffffffff8e8a5838 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock_killable include/linux/rtnetlink.h:145 [inline]
 #1: ffffffff8e8a5838 (rtnl_mutex){+.+.}-{4:4}, at: register_netdev+0x18/0x60 net/core/dev.c:11506
2 locks held by syz-executor/6932:
 #0: ffffffff8e898720 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x3cc/0x570 net/core/net_namespace.c:577
 #1: ffffffff8e8a5838 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock_killable include/linux/rtnetlink.h:145 [inline]
 #1: ffffffff8e8a5838 (rtnl_mutex){+.+.}-{4:4}, at: register_netdev+0x18/0x60 net/core/dev.c:11506
2 locks held by syz-executor/6937:
 #0: ffffffff8e898720 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x3cc/0x570 net/core/net_namespace.c:577
 #1: ffffffff8e8a5838 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock_killable include/linux/rtnetlink.h:145 [inline]
 #1: ffffffff8e8a5838 (rtnl_mutex){+.+.}-{4:4}, at: register_netdev+0x18/0x60 net/core/dev.c:11506

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 38 Comm: khungtaskd Not tainted syzkaller #0 PREEMPT_{RT,(full)} 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
Call Trace:
 <TASK>
 dump_stack_lvl+0xe8/0x150 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x274/0x2d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x17a/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
 __sys_info lib/sys_info.c:157 [inline]
 sys_info+0x135/0x170 lib/sys_info.c:165
 check_hung_uninterruptible_tasks kernel/hung_task.c:346 [inline]
 watchdog+0xf95/0xfe0 kernel/hung_task.c:515
 kthread+0x711/0x8a0 kernel/kthread.c:463
 ret_from_fork+0x510/0xa50 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 16 Comm: ktimers/0 Not tainted syzkaller #0 PREEMPT_{RT,(full)} 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/25/2025
RIP: 0010:__lock_acquire+0xda/0x2cf0 kernel/locking/lockdep.c:5138
Code: c0 0f 84 76 13 00 00 49 89 d9 8b 0d f0 c1 21 17 48 8b 54 24 08 8b 92 88 0b 00 00 85 c9 0f 94 c1 83 fa 30 40 0f 93 c6 40 20 ce <40> 80 fe 01 0f 84 21 01 00 00 8b 9c 24 40 01 00 00 44 8b a4 24 30
RSP: 0018:ffffc90000156890 EFLAGS: 00000046
RAX: ffffffff925bbc28 RBX: 0000000000000000 RCX: 0000000000000001
RDX: 0000000000000004 RSI: 0000000000000000 RDI: ffffffff8d5ae940
RBP: 0000000000000000 R08: 0000000000000000 R09: ffffffff8d5ae940
R10: ffffc90000156ba8 R11: fffff5200002ad81 R12: 0000000000000002
R13: 0000000000000002 R14: 0000000000000000 R15: 0000000000000000
FS:  0000000000000000(0000) GS:ffff888126cef000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fc6a4b2098b CR3: 000000008cf4e000 CR4: 00000000003526f0
Call Trace:
 <TASK>
 lock_acquire+0x107/0x340 kernel/locking/lockdep.c:5868
 rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 rcu_read_lock include/linux/rcupdate.h:867 [inline]
 class_rcu_constructor include/linux/rcupdate.h:1195 [inline]
 unwind_next_frame+0xc2/0x23d0 arch/x86/kernel/unwind_orc.c:495
 __unwind_start+0x5b9/0x760 arch/x86/kernel/unwind_orc.c:773
 unwind_start arch/x86/include/asm/unwind.h:64 [inline]
 arch_stack_walk+0xe4/0x150 arch/x86/kernel/stacktrace.c:24
 stack_trace_save+0x9c/0xe0 kernel/stacktrace.c:122
 kasan_save_stack mm/kasan/common.c:57 [inline]
 kasan_save_track+0x3e/0x80 mm/kasan/common.c:78
 unpoison_slab_object mm/kasan/common.c:340 [inline]
 __kasan_slab_alloc+0x6c/0x80 mm/kasan/common.c:366
 kasan_slab_alloc include/linux/kasan.h:253 [inline]
 slab_post_alloc_hook mm/slub.c:4953 [inline]
 slab_alloc_node mm/slub.c:5263 [inline]
 kmem_cache_alloc_node_noprof+0x23c/0x6f0 mm/slub.c:5315
 __alloc_skb+0x1dc/0x3a0 net/core/skbuff.c:679
 alloc_skb include/linux/skbuff.h:1383 [inline]
 synproxy_send_client_synack+0x16c/0xe20 net/netfilter/nf_synproxy_core.c:460
 nft_synproxy_eval_v4+0x36e/0x560 net/netfilter/nft_synproxy.c:59
 nft_synproxy_do_eval+0x345/0x570 net/netfilter/nft_synproxy.c:141
 expr_call_ops_eval net/netfilter/nf_tables_core.c:237 [inline]
 nft_do_chain+0x40c/0x1920 net/netfilter/nf_tables_core.c:285
 nft_do_chain_inet+0x25d/0x340 net/netfilter/nft_chain_filter.c:161
 nf_hook_entry_hookfn include/linux/netfilter.h:158 [inline]
 nf_hook_slow+0xc5/0x220 net/netfilter/core.c:623
 nf_hook include/linux/netfilter.h:273 [inline]
 NF_HOOK+0x206/0x3a0 include/linux/netfilter.h:316
 NF_HOOK+0x30c/0x3a0 include/linux/netfilter.h:318
 __netif_receive_skb_one_core net/core/dev.c:6139 [inline]
 __netif_receive_skb+0x143/0x380 net/core/dev.c:6252
 process_backlog+0x315/0x8f0 net/core/dev.c:6604
 __napi_poll+0xae/0x520 net/core/dev.c:7668
 napi_poll net/core/dev.c:7731 [inline]
 net_rx_action+0x64a/0xdb0 net/core/dev.c:7883
 handle_softirqs+0x1df/0x650 kernel/softirq.c:622
 __do_softirq kernel/softirq.c:656 [inline]
 run_ktimerd+0x69/0x100 kernel/softirq.c:1138
 smpboot_thread_fn+0x542/0xa60 kernel/smpboot.c:160
 kthread+0x711/0x8a0 kernel/kthread.c:463
 ret_from_fork+0x510/0xa50 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
 </TASK>

Crashes (101):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2026/01/01 08:46 upstream 9528d5c091c5 d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in dev_map_free
2025/11/19 14:20 upstream 8b690556d8fe 26ee5237 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in dev_map_free
2025/11/17 14:46 upstream 6a23ae0a96a6 ef766cd7 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in dev_map_free
2025/11/12 11:52 upstream 24172e0d7990 4e1406b4 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in dev_map_free
2025/10/24 15:16 upstream 6fab32bb6508 c0460fcd .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in dev_map_free
2025/10/21 20:41 upstream 6548d364a3e8 9832ed61 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in dev_map_free
2025/10/15 02:42 upstream 9b332cece987 b6605ba8 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in dev_map_free
2025/09/30 05:07 upstream 1896ce8eb6c6 86341da6 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in dev_map_free
2025/09/29 08:46 upstream e5f0a698b34e 001c9061 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in dev_map_free
2025/09/28 17:35 upstream 51a24b7deaae 001c9061 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in dev_map_free
2025/09/26 00:26 upstream 4ff71af020ae 0abd0691 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in dev_map_free
2025/09/20 03:14 upstream cd89d487374c 67c37560 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in dev_map_free
2025/09/19 13:02 upstream 097a6c336d00 67c37560 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in dev_map_free
2025/09/17 19:48 upstream d4b779985a6c e2beed91 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in dev_map_free
2025/09/17 00:54 upstream 5aca7966d2a7 e2beed91 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in dev_map_free
2025/09/16 04:13 upstream 46a51f4f5eda e2beed91 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in dev_map_free
2025/09/11 20:51 upstream 02ffd6f89c50 e2beed91 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in dev_map_free
2025/09/08 19:44 upstream f777d1112ee5 d291dd2d .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in dev_map_free
2025/09/05 23:52 upstream d1d10cea0895 d291dd2d .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in dev_map_free
2025/09/04 05:51 upstream b9a10f876409 d291dd2d .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in dev_map_free
2025/09/02 17:54 upstream b320789d6883 091ba174 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in dev_map_free
2025/08/31 04:14 upstream c8bc81a52d5a 807a3b61 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in dev_map_free
2025/08/30 00:57 upstream fb679c832b64 807a3b61 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in dev_map_free
2025/08/26 14:15 upstream fab1beda7597 bf27483f .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in dev_map_free
2025/08/25 08:54 upstream c330cb607721 bf27483f .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in dev_map_free
2025/08/23 22:28 upstream 8d245acc1e88 bf27483f .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in dev_map_free
2025/08/23 17:00 upstream 8d245acc1e88 bf27483f .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in dev_map_free
2025/07/17 13:56 upstream e2291551827f 0d1223f1 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in dev_map_free
2025/07/09 14:11 upstream 733923397fd9 f4e5e155 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in dev_map_free
2025/09/09 00:47 net e2a10daba849 d291dd2d .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce INFO: task hung in dev_map_free
2025/06/19 12:43 linux-next 2c923c845768 ed3e87f7 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in dev_map_free
2025/06/17 07:58 linux-next 4325743c7e20 cfebc887 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in dev_map_free
2025/08/22 05:00 git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux.git for-next 8f5ae30d69d7 bf27483f .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu2-riscv64 INFO: task hung in dev_map_free
2025/08/22 04:59 git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux.git for-next 8f5ae30d69d7 bf27483f .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu2-riscv64 INFO: task hung in dev_map_free
2025/08/21 17:32 git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux.git for-next 8f5ae30d69d7 3e79b825 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu2-riscv64 INFO: task hung in dev_map_free
2025/08/20 16:09 git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux.git for-next 8f5ae30d69d7 bd178e57 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu2-riscv64 INFO: task hung in dev_map_free
2025/08/20 00:45 git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux.git for-next 8f5ae30d69d7 79512909 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu2-riscv64 INFO: task hung in dev_map_free
2025/08/12 19:01 git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux.git for-next 8f5ae30d69d7 22ec1469 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu2-riscv64 INFO: task hung in dev_map_free
2025/08/12 00:15 git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux.git for-next 8f5ae30d69d7 c06e8995 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu2-riscv64 INFO: task hung in dev_map_free
2025/08/05 13:08 git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux.git for-next fda589c28604 37880f40 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu2-riscv64 INFO: task hung in dev_map_free
2025/08/02 04:36 git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux.git for-next fda589c28604 7368264b .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu2-riscv64 INFO: task hung in dev_map_free
2025/08/01 16:37 git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux.git for-next fda589c28604 40127d41 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu2-riscv64 INFO: task hung in dev_map_free
2025/07/30 05:20 git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux.git for-next fda589c28604 f8f2b4da .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu2-riscv64 INFO: task hung in dev_map_free
2025/07/30 05:08 git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux.git for-next fda589c28604 f8f2b4da .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu2-riscv64 INFO: task hung in dev_map_free
2025/07/30 05:07 git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux.git for-next fda589c28604 f8f2b4da .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu2-riscv64 INFO: task hung in dev_map_free
2025/07/30 05:05 git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux.git for-next fda589c28604 f8f2b4da .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu2-riscv64 INFO: task hung in dev_map_free
2025/07/30 05:05 git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux.git for-next fda589c28604 f8f2b4da .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu2-riscv64 INFO: task hung in dev_map_free
2025/07/24 16:53 git://git.kernel.org/pub/scm/linux/kernel/git/riscv/linux.git for-next fda589c28604 65d60d73 .config console log report info [disk image (non-bootable)] [vmlinux] [kernel image] ci-qemu2-riscv64 INFO: task hung in dev_map_free