syzbot


INFO: task hung in rtnl_dellink (3)

Status: auto-obsoleted due to no activity on 2026/01/31 10:00
Subsystems: net
First crash: 97d, last: 97d
Similar bugs (2)
Kernel Title Rank Repro Cause bisect Fix bisect Count Last Reported Patched Status
upstream INFO: task hung in rtnl_dellink net 1 1 403d 403d 0/29 auto-obsoleted due to no activity on 2025/03/31 12:19
upstream INFO: task hung in rtnl_dellink (2) net 1 1 261d 261d 0/29 auto-obsoleted due to no activity on 2025/08/19 19:16

Sample crash report:
INFO: task syz.4.4743:22410 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.4.4743      state:D stack:25448 pid:22410 tgid:22408 ppid:16612  task_flags:0x400140 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5325 [inline]
 __schedule+0x1798/0x4cc0 kernel/sched/core.c:6929
 __schedule_loop kernel/sched/core.c:7011 [inline]
 schedule+0x165/0x360 kernel/sched/core.c:7026
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:7083
 __mutex_lock_common kernel/locking/mutex.c:676 [inline]
 __mutex_lock+0x7e6/0x1350 kernel/locking/mutex.c:760
 rtnl_lock net/core/rtnetlink.c:80 [inline]
 rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 rtnl_dellink+0x346/0x700 net/core/rtnetlink.c:3555
 rtnetlink_rcv_msg+0x7cf/0xb70 net/core/rtnetlink.c:6951
 netlink_rcv_skb+0x208/0x470 net/netlink/af_netlink.c:2552
 netlink_unicast_kernel net/netlink/af_netlink.c:1320 [inline]
 netlink_unicast+0x82f/0x9e0 net/netlink/af_netlink.c:1346
 netlink_sendmsg+0x805/0xb30 net/netlink/af_netlink.c:1896
 sock_sendmsg_nosec net/socket.c:727 [inline]
 __sock_sendmsg+0x21c/0x270 net/socket.c:742
 ____sys_sendmsg+0x505/0x830 net/socket.c:2630
 ___sys_sendmsg+0x21f/0x2a0 net/socket.c:2684
 __sys_sendmsg net/socket.c:2716 [inline]
 __do_sys_sendmsg net/socket.c:2721 [inline]
 __se_sys_sendmsg net/socket.c:2719 [inline]
 __x64_sys_sendmsg+0x19b/0x260 net/socket.c:2719
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0xfa0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f7720f8efc9
RSP: 002b:00007f7721db9038 EFLAGS: 00000246 ORIG_RAX: 000000000000002e
RAX: ffffffffffffffda RBX: 00007f77211e6090 RCX: 00007f7720f8efc9
RDX: 0000000000000000 RSI: 00002000000003c0 RDI: 0000000000000086
RBP: 00007f7721011f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f77211e6128 R14: 00007f77211e6090 R15: 00007ffcfa6d5e38
 </TASK>
INFO: task syz.2.4745:22419 blocked for more than 144 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.2.4745      state:D stack:27368 pid:22419 tgid:22413 ppid:17272  task_flags:0x400140 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5325 [inline]
 __schedule+0x1798/0x4cc0 kernel/sched/core.c:6929
 __schedule_loop kernel/sched/core.c:7011 [inline]
 schedule+0x165/0x360 kernel/sched/core.c:7026
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:7083
 __mutex_lock_common kernel/locking/mutex.c:676 [inline]
 __mutex_lock+0x7e6/0x1350 kernel/locking/mutex.c:760
 rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 devinet_ioctl+0x323/0x1b50 net/ipv4/devinet.c:1120
 inet_ioctl+0x3c0/0x4c0 net/ipv4/af_inet.c:1003
 sock_do_ioctl+0xdc/0x300 net/socket.c:1254
 sock_ioctl+0x576/0x790 net/socket.c:1375
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:597 [inline]
 __se_sys_ioctl+0xfc/0x170 fs/ioctl.c:583
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0xfa0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fb9eb78efc9
RSP: 002b:00007fb9ec618038 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007fb9eb9e6090 RCX: 00007fb9eb78efc9
RDX: 0000200000000180 RSI: 0000000000008914 RDI: 0000000000000006
RBP: 00007fb9eb811f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fb9eb9e6128 R14: 00007fb9eb9e6090 R15: 00007ffe15ebde38
 </TASK>
INFO: task syz.0.4748:22437 blocked for more than 145 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.0.4748      state:D stack:25096 pid:22437 tgid:22437 ppid:16299  task_flags:0x400040 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5325 [inline]
 __schedule+0x1798/0x4cc0 kernel/sched/core.c:6929
 __schedule_loop kernel/sched/core.c:7011 [inline]
 schedule+0x165/0x360 kernel/sched/core.c:7026
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:7083
 __mutex_lock_common kernel/locking/mutex.c:676 [inline]
 __mutex_lock+0x7e6/0x1350 kernel/locking/mutex.c:760
 packet_release+0xfb/0xd00 net/packet/af_packet.c:3130
 __sock_release net/socket.c:662 [inline]
 sock_close+0xc3/0x240 net/socket.c:1455
 __fput+0x44c/0xa70 fs/file_table.c:468
 task_work_run+0x1d4/0x260 kernel/task_work.c:227
 resume_user_mode_work include/linux/resume_user_mode.h:50 [inline]
 exit_to_user_mode_loop+0xe9/0x130 kernel/entry/common.c:43
 exit_to_user_mode_prepare include/linux/irq-entry-common.h:225 [inline]
 syscall_exit_to_user_mode_work include/linux/entry-common.h:175 [inline]
 syscall_exit_to_user_mode include/linux/entry-common.h:210 [inline]
 do_syscall_64+0x2bd/0xfa0 arch/x86/entry/syscall_64.c:100
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fb0d098efc9
RSP: 002b:00007ffc613b38a8 EFLAGS: 00000246 ORIG_RAX: 00000000000001b4
RAX: 0000000000000000 RBX: 00007fb0d0be7da0 RCX: 00007fb0d098efc9
RDX: 0000000000000000 RSI: 000000000000001e RDI: 0000000000000003
RBP: 00007fb0d0be7da0 R08: 0000000000000050 R09: 00000005613b3b9f
R10: 00007fb0d0be7cb0 R11: 0000000000000246 R12: 00000000000810d6
R13: 00007fb0d0be6090 R14: ffffffffffffffff R15: 00007ffc613b39c0
 </TASK>
INFO: task syz.0.4748:22439 blocked for more than 145 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.0.4748      state:D stack:27208 pid:22439 tgid:22437 ppid:16299  task_flags:0x400140 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5325 [inline]
 __schedule+0x1798/0x4cc0 kernel/sched/core.c:6929
 __schedule_loop kernel/sched/core.c:7011 [inline]
 schedule+0x165/0x360 kernel/sched/core.c:7026
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:7083
 __mutex_lock_common kernel/locking/mutex.c:676 [inline]
 __mutex_lock+0x7e6/0x1350 kernel/locking/mutex.c:760
 pdiag_put_mclist net/packet/diag.c:47 [inline]
 sk_diag_fill net/packet/diag.c:160 [inline]
 packet_diag_dump+0xa51/0x1d60 net/packet/diag.c:207
 netlink_dump+0x6e4/0xe90 net/netlink/af_netlink.c:2327
 __netlink_dump_start+0x5cb/0x7e0 net/netlink/af_netlink.c:2442
 netlink_dump_start include/linux/netlink.h:341 [inline]
 packet_diag_handler_dump+0x1bc/0x270 net/packet/diag.c:242
 sock_diag_rcv_msg+0x4cc/0x600 net/core/sock_diag.c:-1
 netlink_rcv_skb+0x208/0x470 net/netlink/af_netlink.c:2552
 netlink_unicast_kernel net/netlink/af_netlink.c:1320 [inline]
 netlink_unicast+0x82f/0x9e0 net/netlink/af_netlink.c:1346
 netlink_sendmsg+0x805/0xb30 net/netlink/af_netlink.c:1896
 sock_sendmsg_nosec net/socket.c:727 [inline]
 __sock_sendmsg+0x21c/0x270 net/socket.c:742
 sock_write_iter+0x279/0x360 net/socket.c:1195
 new_sync_write fs/read_write.c:593 [inline]
 vfs_write+0x5c9/0xb30 fs/read_write.c:686
 ksys_write+0x145/0x250 fs/read_write.c:738
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0xfa0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fb0d098efc9
RSP: 002b:00007fb0d1770038 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 00007fb0d0be5fa0 RCX: 00007fb0d098efc9
RDX: 0000000000000027 RSI: 0000200000005c00 RDI: 0000000000000004
RBP: 00007fb0d0a11f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fb0d0be6038 R14: 00007fb0d0be5fa0 R15: 00007ffc613b3748
 </TASK>
INFO: task syz.1.4754:22454 blocked for more than 146 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.1.4754      state:D stack:25160 pid:22454 tgid:22453 ppid:15400  task_flags:0x400140 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5325 [inline]
 __schedule+0x1798/0x4cc0 kernel/sched/core.c:6929
 __schedule_loop kernel/sched/core.c:7011 [inline]
 schedule+0x165/0x360 kernel/sched/core.c:7026
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:7083
 __mutex_lock_common kernel/locking/mutex.c:676 [inline]
 __mutex_lock+0x7e6/0x1350 kernel/locking/mutex.c:760
 rtnl_lock net/core/rtnetlink.c:80 [inline]
 rtnl_nets_lock net/core/rtnetlink.c:341 [inline]
 rtnl_newlink+0x8e9/0x1c80 net/core/rtnetlink.c:4064
 rtnetlink_rcv_msg+0x7cf/0xb70 net/core/rtnetlink.c:6951
 netlink_rcv_skb+0x208/0x470 net/netlink/af_netlink.c:2552
 netlink_unicast_kernel net/netlink/af_netlink.c:1320 [inline]
 netlink_unicast+0x82f/0x9e0 net/netlink/af_netlink.c:1346
 netlink_sendmsg+0x805/0xb30 net/netlink/af_netlink.c:1896
 sock_sendmsg_nosec net/socket.c:727 [inline]
 __sock_sendmsg+0x21c/0x270 net/socket.c:742
 ____sys_sendmsg+0x505/0x830 net/socket.c:2630
 ___sys_sendmsg+0x21f/0x2a0 net/socket.c:2684
 __sys_sendmsg net/socket.c:2716 [inline]
 __do_sys_sendmsg net/socket.c:2721 [inline]
 __se_sys_sendmsg net/socket.c:2719 [inline]
 __x64_sys_sendmsg+0x19b/0x260 net/socket.c:2719
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xfa/0xfa0 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fd54dd8efc9
RSP: 002b:00007fd54eca8038 EFLAGS: 00000246 ORIG_RAX: 000000000000002e
RAX: ffffffffffffffda RBX: 00007fd54dfe5fa0 RCX: 00007fd54dd8efc9
RDX: 0000000000000000 RSI: 00002000000006c0 RDI: 0000000000000003
RBP: 00007fd54de11f91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fd54dfe6038 R14: 00007fd54dfe5fa0 R15: 00007ffd06d0a008
 </TASK>

Showing all locks held in the system:
3 locks held by kworker/0:1/10:
 #0: ffff88801a056948 ((wq_completion)events_power_efficient){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3238 [inline]
 #0: ffff88801a056948 ((wq_completion)events_power_efficient){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x17b0 kernel/workqueue.c:3346
 #1: ffffc900000f7ba0 ((reg_check_chans).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3239 [inline]
 #1: ffffc900000f7ba0 ((reg_check_chans).work){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x17b0 kernel/workqueue.c:3346
 #2: ffffffff8f2cb1c8 (rtnl_mutex){+.+.}-{4:4}, at: reg_check_chans_work+0xa1/0xf40 net/wireless/reg.c:2453
1 lock held by kworker/R-mm_pe/14:
 #0: ffffffff8dde3be8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2736 [inline]
 #0: ffffffff8dde3be8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x88b/0xdd0 kernel/workqueue.c:3556
1 lock held by khungtaskd/31:
 #0: ffffffff8df3d2e0 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8df3d2e0 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:867 [inline]
 #0: ffffffff8df3d2e0 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x2e/0x180 kernel/locking/lockdep.c:6775
3 locks held by kworker/u9:0/52:
 #0: ffff88807d261148 ((wq_completion)hci15){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3238 [inline]
 #0: ffff88807d261148 ((wq_completion)hci15){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x17b0 kernel/workqueue.c:3346
 #1: ffffc90000bd7ba0 ((work_completion)(&hdev->power_on)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3239 [inline]
 #1: ffffc90000bd7ba0 ((work_completion)(&hdev->power_on)){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x17b0 kernel/workqueue.c:3346
 #2: ffff88804bc88dc8 (&hdev->req_lock){+.+.}-{4:4}, at: hci_dev_do_open net/bluetooth/hci_core.c:428 [inline]
 #2: ffff88804bc88dc8 (&hdev->req_lock){+.+.}-{4:4}, at: hci_power_on+0x1ac/0x680 net/bluetooth/hci_core.c:959
8 locks held by kworker/1:1H/97:
3 locks held by kworker/u8:6/3068:
 #0: ffff88801aedf148 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3238 [inline]
 #0: ffff88801aedf148 ((wq_completion)netns){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x17b0 kernel/workqueue.c:3346
 #1: ffffc9000b7b7ba0 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3239 [inline]
 #1: ffffc9000b7b7ba0 (net_cleanup_work){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x17b0 kernel/workqueue.c:3346
 #2: ffffffff8f2be330 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0xf7/0x820 net/core/net_namespace.c:669
1 lock held by kworker/R-krxrp/3391:
 #0: ffffffff8dde3be8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x2e/0x3a0 kernel/workqueue.c:2678
2 locks held by getty/5584:
 #0: ffff88802f77e0a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc9000332b2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x43e/0x1400 drivers/tty/n_tty.c:2222
1 lock held by kworker/u9:4/5833:
 #0: ffffffff8dde3be8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x2e/0x3a0 kernel/workqueue.c:2678
3 locks held by kworker/u9:6/5837:
 #0: ffff8880645c9148 ((wq_completion)hci16){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3238 [inline]
 #0: ffff8880645c9148 ((wq_completion)hci16){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x17b0 kernel/workqueue.c:3346
 #1: ffffc9000422fba0 ((work_completion)(&hdev->power_on)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3239 [inline]
 #1: ffffc9000422fba0 ((work_completion)(&hdev->power_on)){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x17b0 kernel/workqueue.c:3346
 #2: ffff88807ae60dc8 (&hdev->req_lock){+.+.}-{4:4}, at: hci_dev_do_open net/bluetooth/hci_core.c:428 [inline]
 #2: ffff88807ae60dc8 (&hdev->req_lock){+.+.}-{4:4}, at: hci_power_on+0x1ac/0x680 net/bluetooth/hci_core.c:959
1 lock held by kworker/R-bond0/5858:
 #0: ffffffff8dde3be8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: set_pf_worker kernel/workqueue.c:3352 [inline]
 #0: ffffffff8dde3be8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0xd01/0xdd0 kernel/workqueue.c:3571
3 locks held by kworker/0:3/5885:
 #0: ffff88801a055948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3238 [inline]
 #0: ffff88801a055948 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x17b0 kernel/workqueue.c:3346
 #1: ffffc9000458fba0 (deferred_process_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3239 [inline]
 #1: ffffc9000458fba0 (deferred_process_work){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x17b0 kernel/workqueue.c:3346
 #2: ffffffff8f2cb1c8 (rtnl_mutex){+.+.}-{4:4}, at: switchdev_deferred_process_work+0xe/0x20 net/switchdev/switchdev.c:104
5 locks held by kworker/1:5/5892:
3 locks held by kworker/u8:14/6159:
 #0: ffff88801a069948 ((wq_completion)events_unbound#2){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3238 [inline]
 #0: ffff88801a069948 ((wq_completion)events_unbound#2){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x17b0 kernel/workqueue.c:3346
 #1: ffffc9000bd37ba0 ((work_completion)(&pool->idle_cull_work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3239 [inline]
 #1: ffffc9000bd37ba0 ((work_completion)(&pool->idle_cull_work)){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x17b0 kernel/workqueue.c:3346
 #2: ffffffff8dde3be8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: idle_cull_fn+0xca/0x730 kernel/workqueue.c:2960
3 locks held by kworker/u8:26/8673:
 #0: ffff88801a069948 ((wq_completion)events_unbound#2){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3238 [inline]
 #0: ffff88801a069948 ((wq_completion)events_unbound#2){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x17b0 kernel/workqueue.c:3346
 #1: ffffc90003017ba0 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3239 [inline]
 #1: ffffc90003017ba0 ((linkwatch_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x17b0 kernel/workqueue.c:3346
 #2: ffffffff8f2cb1c8 (rtnl_mutex){+.+.}-{4:4}, at: linkwatch_event+0xe/0x60 net/core/link_watch.c:303
3 locks held by kworker/u8:30/8677:
 #0: ffff88802f002948 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3238 [inline]
 #0: ffff88802f002948 ((wq_completion)ipv6_addrconf){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x17b0 kernel/workqueue.c:3346
 #1: ffffc90003517ba0 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3239 [inline]
 #1: ffffc90003517ba0 ((work_completion)(&(&net->ipv6.addr_chk_work)->work)){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x17b0 kernel/workqueue.c:3346
 #2: ffffffff8f2cb1c8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #2: ffffffff8f2cb1c8 (rtnl_mutex){+.+.}-{4:4}, at: addrconf_verify_work+0x19/0x30 net/ipv6/addrconf.c:4734
1 lock held by kworker/1:8/13859:
 #0: ffffffff8dde3be8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x2e/0x3a0 kernel/workqueue.c:2678
3 locks held by kworker/0:8/13861:
 #0: ffff88801a055948 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3238 [inline]
 #0: ffff88801a055948 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0x9b4/0x17b0 kernel/workqueue.c:3346
 #1: ffffc9001b68fba0 (free_ipc_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3239 [inline]
 #1: ffffc9001b68fba0 (free_ipc_work){+.+.}-{0:0}, at: process_scheduled_works+0x9ef/0x17b0 kernel/workqueue.c:3346
 #2: ffffffff8df42d78 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock kernel/rcu/tree_exp.h:311 [inline]
 #2: ffffffff8df42d78 (rcu_state.exp_mutex){+.+.}-{4:4}, at: synchronize_rcu_expedited+0x2f6/0x730 kernel/rcu/tree_exp.h:957
4 locks held by kworker/1:9/13871:
1 lock held by kworker/R-wg-cr/15475:
1 lock held by kworker/R-wg-cr/15478:
 #0: ffffffff8dde3be8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2736 [inline]
 #0: ffffffff8dde3be8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x88b/0xdd0 kernel/workqueue.c:3556
1 lock held by kworker/R-wg-cr/16350:
 #0: ffffffff8dde3be8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2736 [inline]
 #0: ffffffff8dde3be8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x88b/0xdd0 kernel/workqueue.c:3556
1 lock held by kworker/R-wg-cr/16351:
 #0: ffffffff8dde3be8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2736 [inline]
 #0: ffffffff8dde3be8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x88b/0xdd0 kernel/workqueue.c:3556
1 lock held by kworker/R-wg-cr/17031:
 #0: ffffffff8dde3be8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2736 [inline]
 #0: ffffffff8dde3be8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x88b/0xdd0 kernel/workqueue.c:3556
1 lock held by kworker/R-wg-cr/17032:
 #0: ffffffff8dde3be8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x2e/0x3a0 kernel/workqueue.c:2678
1 lock held by kworker/R-wg-cr/17034:
 #0: ffffffff8dde3be8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_attach_to_pool+0x2e/0x3a0 kernel/workqueue.c:2678
1 lock held by kworker/R-wg-cr/17316:
 #0: ffffffff8dde3be8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: worker_detach_from_pool kernel/workqueue.c:2736 [inline]
 #0: ffffffff8dde3be8 (wq_pool_attach_mutex){+.+.}-{4:4}, at: rescuer_thread+0x88b/0xdd0 kernel/workqueue.c:3556
1 lock held by khidpd_04580059/20886:
 #0: ffff88807bf4c0b8 (&hdev->lock){+.+.}-{4:4}, at: l2cap_unregister_user+0x6a/0x1b0 net/bluetooth/l2cap_core.c:1728
5 locks held by syz.3.4741/22407:
 #0: ffff88807bf4cdc8 (&hdev->req_lock){+.+.}-{4:4}, at: hci_dev_do_close net/bluetooth/hci_core.c:499 [inline]
 #0: ffff88807bf4cdc8 (&hdev->req_lock){+.+.}-{4:4}, at: hci_unregister_dev+0x212/0x510 net/bluetooth/hci_core.c:2715
 #1: ffff88807bf4c0b8 (&hdev->lock){+.+.}-{4:4}, at: hci_dev_close_sync+0x66a/0x1330 net/bluetooth/hci_sync.c:5296
 #2: ffffffff8f4354e8 (hci_cb_list_lock){+.+.}-{4:4}, at: hci_disconn_cfm include/net/bluetooth/hci_core.h:2119 [inline]
 #2: ffffffff8f4354e8 (hci_cb_list_lock){+.+.}-{4:4}, at: hci_conn_hash_flush+0xa1/0x230 net/bluetooth/hci_conn.c:2609
 #3: ffff88804c21c338 (&conn->lock#2){+.+.}-{4:4}, at: l2cap_conn_del+0x70/0x680 net/bluetooth/l2cap_core.c:1762
 #4: ffffffff8df42d78 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock kernel/rcu/tree_exp.h:343 [inline]
 #4: ffffffff8df42d78 (rcu_state.exp_mutex){+.+.}-{4:4}, at: synchronize_rcu_expedited+0x3b9/0x730 kernel/rcu/tree_exp.h:957
1 lock held by syz.4.4743/22410:
 #0: ffffffff8f2cb1c8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:80 [inline]
 #0: ffffffff8f2cb1c8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff8f2cb1c8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_dellink+0x346/0x700 net/core/rtnetlink.c:3555
1 lock held by syz.2.4745/22419:
 #0: ffffffff8f2cb1c8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff8f2cb1c8 (rtnl_mutex){+.+.}-{4:4}, at: devinet_ioctl+0x323/0x1b50 net/ipv4/devinet.c:1120
2 locks held by syz.0.4748/22437:
 #0: ffff88807fb0d848 (&sb->s_type->i_mutex_key#11){+.+.}-{4:4}, at: inode_lock include/linux/fs.h:980 [inline]
 #0: ffff88807fb0d848 (&sb->s_type->i_mutex_key#11){+.+.}-{4:4}, at: __sock_release net/socket.c:661 [inline]
 #0: ffff88807fb0d848 (&sb->s_type->i_mutex_key#11){+.+.}-{4:4}, at: sock_close+0x9b/0x240 net/socket.c:1455
 #1: ffff88807d2f84d0 (&net->packet.sklist_lock){+.+.}-{4:4}, at: packet_release+0xfb/0xd00 net/packet/af_packet.c:3130
3 locks held by syz.0.4748/22439:
 #0: ffff8880283486e8 (nlk_cb_mutex-SOCK_DIAG){+.+.}-{4:4}, at: __netlink_dump_start+0xfe/0x7e0 net/netlink/af_netlink.c:2406
 #1: ffff88807d2f84d0 (&net->packet.sklist_lock){+.+.}-{4:4}, at: packet_diag_dump+0x1c9/0x1d60 net/packet/diag.c:200
 #2: ffffffff8f2cb1c8 (rtnl_mutex){+.+.}-{4:4}, at: pdiag_put_mclist net/packet/diag.c:47 [inline]
 #2: ffffffff8f2cb1c8 (rtnl_mutex){+.+.}-{4:4}, at: sk_diag_fill net/packet/diag.c:160 [inline]
 #2: ffffffff8f2cb1c8 (rtnl_mutex){+.+.}-{4:4}, at: packet_diag_dump+0xa51/0x1d60 net/packet/diag.c:207
2 locks held by syz.1.4754/22454:
 #0: ffffffff8ea368a0 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8ea368a0 (&ops->srcu#2){.+.+}-{0:0}, at: rcu_read_lock include/linux/rcupdate.h:867 [inline]
 #0: ffffffff8ea368a0 (&ops->srcu#2){.+.+}-{0:0}, at: rtnl_link_ops_get+0x23/0x250 net/core/rtnetlink.c:570
 #1: ffffffff8f2cb1c8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:80 [inline]
 #1: ffffffff8f2cb1c8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:341 [inline]
 #1: ffffffff8f2cb1c8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0x8e9/0x1c80 net/core/rtnetlink.c:4064
1 lock held by syz-executor/22459:
 #0: ffffffff8f2cb1c8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff8f2cb1c8 (rtnl_mutex){+.+.}-{4:4}, at: inet_rtm_newaddr+0x3b0/0x18b0 net/ipv4/devinet.c:978
1 lock held by syz-executor/22461:
 #0: ffffffff8f2cb1c8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff8f2cb1c8 (rtnl_mutex){+.+.}-{4:4}, at: inet_rtm_newaddr+0x3b0/0x18b0 net/ipv4/devinet.c:978
1 lock held by syz-executor/22464:
 #0: ffffffff8f2cb1c8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff8f2cb1c8 (rtnl_mutex){+.+.}-{4:4}, at: inet_rtm_newaddr+0x3b0/0x18b0 net/ipv4/devinet.c:978
1 lock held by syz-executor/22466:
 #0: ffffffff8f2cb1c8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff8f2cb1c8 (rtnl_mutex){+.+.}-{4:4}, at: inet_rtm_newaddr+0x3b0/0x18b0 net/ipv4/devinet.c:978
1 lock held by syz-executor/22469:
 #0: ffffffff8f2cb1c8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff8f2cb1c8 (rtnl_mutex){+.+.}-{4:4}, at: inet_rtm_newaddr+0x3b0/0x18b0 net/ipv4/devinet.c:978
1 lock held by syz-executor/22475:
 #0: ffffffff8f2cb1c8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff8f2cb1c8 (rtnl_mutex){+.+.}-{4:4}, at: inet_rtm_newaddr+0x3b0/0x18b0 net/ipv4/devinet.c:978
1 lock held by syz-executor/22477:
 #0: ffffffff8f2cb1c8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff8f2cb1c8 (rtnl_mutex){+.+.}-{4:4}, at: inet_rtm_newaddr+0x3b0/0x18b0 net/ipv4/devinet.c:978
1 lock held by syz-executor/22479:
 #0: ffffffff8f2cb1c8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff8f2cb1c8 (rtnl_mutex){+.+.}-{4:4}, at: inet_rtm_newaddr+0x3b0/0x18b0 net/ipv4/devinet.c:978
1 lock held by syz-executor/22483:
 #0: ffffffff8f2cb1c8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff8f2cb1c8 (rtnl_mutex){+.+.}-{4:4}, at: inet_rtm_newaddr+0x3b0/0x18b0 net/ipv4/devinet.c:978
1 lock held by syz-executor/22485:
 #0: ffffffff8f2cb1c8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff8f2cb1c8 (rtnl_mutex){+.+.}-{4:4}, at: inet_rtm_newaddr+0x3b0/0x18b0 net/ipv4/devinet.c:978

=============================================

NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 31 Comm: khungtaskd Not tainted syzkaller #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
Call Trace:
 <TASK>
 dump_stack_lvl+0x189/0x250 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x39e/0x3d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x17a/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:160 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:332 [inline]
 watchdog+0xf60/0xfa0 kernel/hung_task.c:495
 kthread+0x711/0x8a0 kernel/kthread.c:463
 ret_from_fork+0x4bc/0x870 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 15475 Comm: kworker/R-wg-cr Not tainted syzkaller #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 10/02/2025
Workqueue: wg-crypt-wg0 wg_packet_encrypt_worker
RIP: 0010:check_preemption_disabled+0x17/0x120 lib/smp_processor_id.c:14
Code: 00 00 00 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 55 41 57 41 56 53 48 83 ec 10 65 48 8b 05 de c4 27 07 48 89 44 24 08 <65> 8b 05 e6 c4 27 07 65 8b 0d db c4 27 07 f7 c1 ff ff ff 7f 74 23
RSP: 0018:ffffc90000a06eb8 EFLAGS: 00000086
RAX: beb5ebae4def5f00 RBX: 0000000000000206 RCX: beb5ebae4def5f00
RDX: ffffc90000a07068 RSI: ffffffff8d72173e RDI: ffffffff8bbf05e0
RBP: dffffc0000000000 R08: ffffc9001b56f410 R09: 0000000000000000
R10: ffffc90000a07078 R11: fffff52000140e11 R12: ffffc9001b56f410
R13: ffffffff81738d25 R14: ffffffff8df3d2e0 R15: ffff88804f603c80
FS:  0000000000000000(0000) GS:ffff88812623e000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007f9f78de7dac CR3: 000000000dd38000 CR4: 00000000003526f0
Call Trace:
 <IRQ>
 lockdep_recursion_inc kernel/locking/lockdep.c:465 [inline]
 lock_release+0xbc/0x3e0 kernel/locking/lockdep.c:5888
 rcu_lock_release include/linux/rcupdate.h:341 [inline]
 rcu_read_unlock include/linux/rcupdate.h:897 [inline]
 class_rcu_destructor include/linux/rcupdate.h:1195 [inline]
 unwind_next_frame+0x19a9/0x2390 arch/x86/kernel/unwind_orc.c:680
 arch_stack_walk+0x11c/0x150 arch/x86/kernel/stacktrace.c:25
 stack_trace_save+0x9c/0xe0 kernel/stacktrace.c:122
 kasan_save_stack mm/kasan/common.c:56 [inline]
 kasan_save_track+0x3e/0x80 mm/kasan/common.c:77
 __kasan_save_free_info+0x46/0x50 mm/kasan/generic.c:587
 kasan_save_free_info mm/kasan/kasan.h:406 [inline]
 poison_slab_object mm/kasan/common.c:252 [inline]
 __kasan_slab_free+0x5c/0x80 mm/kasan/common.c:284
 kasan_slab_free include/linux/kasan.h:234 [inline]
 slab_free_hook mm/slub.c:2539 [inline]
 slab_free mm/slub.c:6630 [inline]
 kmem_cache_free+0x19b/0x690 mm/slub.c:6740
 packet_rcv_spkt+0x446/0x5c0 net/packet/af_packet.c:-1
 deliver_skb net/core/dev.c:2472 [inline]
 dev_queue_xmit_nit+0x3f4/0xcc0 net/core/dev.c:2548
 xmit_one net/core/dev.c:3841 [inline]
 dev_hard_start_xmit+0x1be/0x830 net/core/dev.c:3861
 __dev_queue_xmit+0x1b8d/0x3b50 net/core/dev.c:4763
 dev_queue_xmit include/linux/netdevice.h:3365 [inline]
 br_dev_queue_push_xmit+0x6c5/0x890 net/bridge/br_forward.c:53
 NF_HOOK+0x61b/0x6b0 include/linux/netfilter.h:318
 br_nf_post_routing+0xb66/0xfe0 net/bridge/br_netfilter_hooks.c:966
 nf_hook_entry_hookfn include/linux/netfilter.h:158 [inline]
 nf_hook_slow+0xc5/0x220 net/netfilter/core.c:623
 nf_hook include/linux/netfilter.h:273 [inline]
 NF_HOOK+0x215/0x3c0 include/linux/netfilter.h:316
 br_forward_finish+0xd3/0x130 net/bridge/br_forward.c:66
 br_nf_hook_thresh net/bridge/br_netfilter_hooks.c:-1 [inline]
 br_nf_forward_finish+0xa40/0xe60 net/bridge/br_netfilter_hooks.c:662
 NF_HOOK+0x61b/0x6b0 include/linux/netfilter.h:318
 br_nf_forward_ip+0x647/0x7e0 net/bridge/br_netfilter_hooks.c:716
 nf_hook_entry_hookfn include/linux/netfilter.h:158 [inline]
 nf_hook_slow+0xc5/0x220 net/netfilter/core.c:623
 nf_hook include/linux/netfilter.h:273 [inline]
 NF_HOOK+0x215/0x3c0 include/linux/netfilter.h:316
 __br_forward+0x41e/0x600 net/bridge/br_forward.c:115
 br_handle_frame_finish+0x15a3/0x1c50 net/bridge/br_input.c:229
 br_nf_hook_thresh+0x3c6/0x4a0 net/bridge/br_netfilter_hooks.c:-1
 br_nf_pre_routing_finish_ipv6+0x999/0xd60 net/bridge/br_netfilter_ipv6.c:-1
 NF_HOOK include/linux/netfilter.h:318 [inline]
 br_nf_pre_routing_ipv6+0x37e/0x6b0 net/bridge/br_netfilter_ipv6.c:184
 nf_hook_entry_hookfn include/linux/netfilter.h:158 [inline]
 nf_hook_bridge_pre net/bridge/br_input.c:291 [inline]
 br_handle_frame+0x982/0x14c0 net/bridge/br_input.c:442
 __netif_receive_skb_core+0x10b9/0x4380 net/core/dev.c:5966
 __netif_receive_skb_one_core net/core/dev.c:6077 [inline]
 __netif_receive_skb+0x72/0x380 net/core/dev.c:6192
 process_backlog+0x60e/0x14f0 net/core/dev.c:6544
 __napi_poll+0xc7/0x360 net/core/dev.c:7594
 napi_poll net/core/dev.c:7657 [inline]
 net_rx_action+0x5f7/0xdf0 net/core/dev.c:7784
 handle_softirqs+0x286/0x870 kernel/softirq.c:622
 do_softirq+0xec/0x180 kernel/softirq.c:523
 </IRQ>
 <TASK>
 __local_bh_enable_ip+0x17d/0x1c0 kernel/softirq.c:450
 spin_unlock_bh include/linux/spinlock.h:396 [inline]
 ptr_ring_consume_bh include/linux/ptr_ring.h:377 [inline]
 wg_packet_encrypt_worker+0x2cf/0x1700 drivers/net/wireguard/send.c:293
 process_one_work kernel/workqueue.c:3263 [inline]
 process_scheduled_works+0xae1/0x17b0 kernel/workqueue.c:3346
 rescuer_thread+0x53c/0xdd0 kernel/workqueue.c:3523
 kthread+0x711/0x8a0 kernel/kthread.c:463
 ret_from_fork+0x4bc/0x870 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
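
For reference, the first blocked task above is stuck in the sendmsg() -> rtnetlink_rcv_msg() -> rtnl_dellink() path, waiting on rtnl_mutex taken via rtnl_net_lock(). A minimal userspace sketch of the kind of request that reaches rtnl_dellink() is shown below. This is illustrative only, not the syzkaller program (no reproducer is attached to this report), and the interface name "dummy0" is a placeholder.

/* Illustrative sketch: a plain RTM_DELLINK request over NETLINK_ROUTE.
 * This is the userspace side of the sendmsg() -> rtnl_dellink() path seen
 * in the first blocked task; "dummy0" is a placeholder device name. */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <linux/netlink.h>
#include <linux/rtnetlink.h>
#include <net/if.h>

int main(void)
{
	struct {
		struct nlmsghdr nlh;
		struct ifinfomsg ifi;
	} req;
	struct sockaddr_nl sa = { .nl_family = AF_NETLINK };
	int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_ROUTE);

	if (fd < 0)
		return 1;

	memset(&req, 0, sizeof(req));
	req.nlh.nlmsg_len = NLMSG_LENGTH(sizeof(req.ifi));
	req.nlh.nlmsg_type = RTM_DELLINK;             /* handled by rtnl_dellink() */
	req.nlh.nlmsg_flags = NLM_F_REQUEST | NLM_F_ACK;
	req.ifi.ifi_family = AF_UNSPEC;
	req.ifi.ifi_index = if_nametoindex("dummy0"); /* placeholder device */

	/* rtnl_dellink() takes rtnl_net_lock()/rtnl_lock() before unregistering
	 * the device, which is where the hung task above is blocked. */
	if (sendto(fd, &req, req.nlh.nlmsg_len, 0,
		   (struct sockaddr *)&sa, sizeof(sa)) < 0)
		perror("sendto");

	close(fd);
	return 0;
}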

Crashes (1):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2025/11/02 09:54 net d7d2fcf7ae31 2c50b6a9 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-net-this-kasan-gce INFO: task hung in rtnl_dellink