syzbot


INFO: task hung in kvm_swap_active_memslots (2)

Status: upstream: reported on 2025/11/17 10:44
Subsystems: kvm
Reported-by: syzbot+5c566b850d6ab6f0427a@syzkaller.appspotmail.com
First crash: 106d, last: 15h36m
Discussions (3)
Title | Replies (including bot) | Last reply
[syzbot] Monthly kvm report (Jan 2026) | 0 (1) | 2026/01/12 08:40
[syzbot] Monthly kvm report (Dec 2025) | 0 (1) | 2025/12/11 05:58
[syzbot] [kvm?] INFO: task hung in kvm_swap_active_memslots (2) | 2 (3) | 2025/11/17 16:54
Similar bugs (1)
Kernel | Title | Rank | Repro | Cause bisect | Fix bisect | Count | Last | Reported | Patched | Status
upstream | INFO: task hung in kvm_swap_active_memslots kvm | 1 | | | | 3 | 299d | 357d | 0/29 | auto-obsoleted due to no activity on 2025/07/02 12:01

Sample crash report:
INFO: task syz.4.1500:13019 blocked for more than 143 seconds.
      Tainted: G     U  W    L XTNJ syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.4.1500      state:D stack:24840 pid:13019 tgid:13018 ppid:8472   task_flags:0x400140 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5260 [inline]
 __schedule+0xfe4/0x5e10 kernel/sched/core.c:6867
 __schedule_loop kernel/sched/core.c:6949 [inline]
 schedule+0xdd/0x390 kernel/sched/core.c:6964
 kvm_swap_active_memslots+0x2e0/0x7c0 virt/kvm/kvm_main.c:1643
 kvm_activate_memslot virt/kvm/kvm_main.c:1802 [inline]
 kvm_create_memslot virt/kvm/kvm_main.c:1868 [inline]
 kvm_set_memslot+0xbde/0x1740 virt/kvm/kvm_main.c:1980
 kvm_set_memory_region+0xe1c/0x1570 virt/kvm/kvm_main.c:2136
 kvm_set_internal_memslot+0x9f/0xf0 virt/kvm/kvm_main.c:2159
 __x86_set_memory_region+0x2f6/0x730 arch/x86/kvm/x86.c:13294
 kvm_alloc_apic_access_page+0xc5/0x140 arch/x86/kvm/lapic.c:2806
 vmx_vcpu_create+0x79b/0xb90 arch/x86/kvm/vmx/vmx.c:7637
 kvm_arch_vcpu_create+0x683/0xac0 arch/x86/kvm/x86.c:12742
 kvm_vm_ioctl_create_vcpu virt/kvm/kvm_main.c:4223 [inline]
 kvm_vm_ioctl+0x756/0x4020 virt/kvm/kvm_main.c:5180
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:597 [inline]
 __se_sys_ioctl fs/ioctl.c:583 [inline]
 __x64_sys_ioctl+0x18e/0x210 fs/ioctl.c:583
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xc9/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f47e5f9aeb9
RSP: 002b:00007f47e6e84028 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007f47e6215fa0 RCX: 00007f47e5f9aeb9
RDX: 0000000000000000 RSI: 000000000000ae41 RDI: 0000000000000003
RBP: 00007f47e6008c1f R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f47e6216038 R14: 00007f47e6215fa0 R15: 00007ffef43c8e98
 </TASK>
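
The blocked task is inside a KVM_CREATE_VCPU ioctl: RDI (0x3) is the VM fd and RSI (0xae41) is KVM_CREATE_VCPU. Per the call trace, vCPU creation on VMX allocates the APIC access page, which installs an internal memslot and ends up waiting in kvm_swap_active_memslots(). A minimal sketch of that userspace call path, for illustration only (this bug has no syzbot reproducer; error handling is trimmed and fd values will vary):

/* Illustration: the ioctl sequence matching the blocked task's trace. */
#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/kvm.h>

int main(void)
{
	int kvm = open("/dev/kvm", O_RDWR | O_CLOEXEC);  /* KVM control fd */
	if (kvm < 0) { perror("open /dev/kvm"); return 1; }

	int vm = ioctl(kvm, KVM_CREATE_VM, 0);           /* VM fd (3 in the report) */
	if (vm < 0) { perror("KVM_CREATE_VM"); return 1; }

	/* KVM_CREATE_VCPU == 0xae41, the value seen in RSI above; on VMX this
	 * reaches kvm_alloc_apic_access_page() -> kvm_set_memslot() ->
	 * kvm_swap_active_memslots() as shown in the call trace. */
	int vcpu = ioctl(vm, KVM_CREATE_VCPU, 0);
	if (vcpu < 0) { perror("KVM_CREATE_VCPU"); return 1; }

	close(vcpu);
	close(vm);
	close(kvm);
	return 0;
}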

Showing all locks held in the system:
1 lock held by khungtaskd/31:
 #0: ffffffff8e5e3120 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8e5e3120 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:867 [inline]
 #0: ffffffff8e5e3120 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x3d/0x184 kernel/locking/lockdep.c:6775
2 locks held by getty/10581:
 #0: ffff888031db70a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x24/0x80 drivers/tty/tty_ldisc.c:243
 #1: ffffc9000f55d2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x419/0x1500 drivers/tty/n_tty.c:2211
2 locks held by syz.4.1500/13019:
 #0: ffff88806170c0a8 (&kvm->slots_lock){+.+.}-{4:4}, at: class_mutex_constructor include/linux/mutex.h:253 [inline]
 #0: ffff88806170c0a8 (&kvm->slots_lock){+.+.}-{4:4}, at: kvm_alloc_apic_access_page+0x27/0x140 arch/x86/kvm/lapic.c:2800
 #1: ffff88806170c138 (&kvm->slots_arch_lock){+.+.}-{4:4}, at: kvm_set_memslot+0x34/0x1740 virt/kvm/kvm_main.c:1931
2 locks held by syz.2.2111/15786:
2 locks held by syz.1.2113/15801:
2 locks held by syz.2.2132/15892:
 #0: ffffffff903dd0b0 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x451/0x7c0 net/core/net_namespace.c:577
 #1: ffffffff8e5ef7c0 (rcu_state.barrier_mutex){+.+.}-{4:4}, at: rcu_barrier+0x48/0x6d0 kernel/rcu/tree.c:3816
1 lock held by syz.5.2137/15914:
 #0: ffffffff8e5ef7c0 (rcu_state.barrier_mutex){+.+.}-{4:4}, at: rcu_barrier+0x48/0x6d0 kernel/rcu/tree.c:3816
1 lock held by syz.3.2139/15920:
 #0: ffffffff8e5ef7c0 (rcu_state.barrier_mutex){+.+.}-{4:4}, at: rcu_barrier+0x48/0x6d0 kernel/rcu/tree.c:3816
1 lock held by syz.1.2138/15922:
 #0: ffffffff8e5ef7c0 (rcu_state.barrier_mutex){+.+.}-{4:4}, at: rcu_barrier+0x48/0x6d0 kernel/rcu/tree.c:3816

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 31 Comm: khungtaskd Tainted: G     U  W    L XTNJ syzkaller #0 PREEMPT(full) 
Tainted: [U]=USER, [W]=WARN, [L]=SOFTLOCKUP, [X]=AUX, [T]=RANDSTRUCT, [N]=TEST, [J]=FWCTL
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/13/2026
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x100/0x190 lib/dump_stack.c:120
 nmi_cpu_backtrace.cold+0x12d/0x151 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x1d7/0x230 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:161 [inline]
 __sys_info lib/sys_info.c:157 [inline]
 sys_info+0x141/0x190 lib/sys_info.c:165
 check_hung_uninterruptible_tasks kernel/hung_task.c:346 [inline]
 watchdog+0xcc3/0xfe0 kernel/hung_task.c:515
 kthread+0x3b3/0x730 kernel/kthread.c:463
 ret_from_fork+0x754/0xaf0 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:246
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 15786 Comm: syz.2.2111 Tainted: G     U  W    L XTNJ syzkaller #0 PREEMPT(full) 
Tainted: [U]=USER, [W]=WARN, [L]=SOFTLOCKUP, [X]=AUX, [T]=RANDSTRUCT, [N]=TEST, [J]=FWCTL
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/13/2026
RIP: 0010:task_irq_context kernel/locking/lockdep.c:4682 [inline]
RIP: 0010:__lock_acquire+0x151/0x2630 kernel/locking/lockdep.c:5174
Code: e2 ff 1f 48 01 c3 0f b7 43 20 4c 89 53 10 4c 89 63 18 66 25 00 e0 09 d0 31 d2 66 89 43 20 48 8b 84 24 b0 00 00 00 48 89 43 08 <65> 44 8b 1d 6f b2 01 12 8b b5 14 0b 00 00 45 85 db 0f 95 c2 31 c0
RSP: 0018:ffffc900045ef270 EFLAGS: 00000046
RAX: ffffffff81b6408d RBX: ffff88801fb0a9d8 RCX: 0000000000000001
RDX: 0000000000000000 RSI: ffff88801fb0a9b0 RDI: ffffffff8e5e3120
RBP: ffff88801fb09e80 R08: 0000000000000000 R09: 0000000000000000
R10: ffffffff8e5e3120 R11: 0000000000000001 R12: 0000000000000000
R13: 0000000000000007 R14: 0000000000000002 R15: 0000000000000000
FS:  0000000000000000(0000) GS:ffff8881245e3000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000055558fb7d4e8 CR3: 0000000052044000 CR4: 00000000003526f0
Call Trace:
 <TASK>
 lock_acquire kernel/locking/lockdep.c:5868 [inline]
 lock_acquire+0x17c/0x330 kernel/locking/lockdep.c:5825
 rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 rcu_read_lock include/linux/rcupdate.h:867 [inline]
 class_rcu_constructor include/linux/rcupdate.h:1195 [inline]
 unwind_next_frame+0xd1/0x1ea0 arch/x86/kernel/unwind_orc.c:495
 arch_stack_walk+0x94/0xf0 arch/x86/kernel/stacktrace.c:25
 stack_trace_save+0x8e/0xc0 kernel/stacktrace.c:122
 kasan_save_stack+0x30/0x50 mm/kasan/common.c:57
 kasan_record_aux_stack+0xa7/0xc0 mm/kasan/generic.c:556
 slab_free_hook mm/slub.c:2501 [inline]
 slab_free mm/slub.c:6674 [inline]
 kmem_cache_free+0x478/0x720 mm/slub.c:6785
 anon_vma_free mm/rmap.c:136 [inline]
 __put_anon_vma+0x114/0x3a0 mm/rmap.c:2777
 put_anon_vma include/linux/rmap.h:117 [inline]
 unlink_anon_vmas+0x578/0x810 mm/rmap.c:443
 free_pgtables+0x20b/0xbc0 mm/memory.c:414
 exit_mmap+0x3bd/0xae0 mm/mmap.c:1288
 __mmput+0x12a/0x410 kernel/fork.c:1173
 mmput+0x67/0x80 kernel/fork.c:1196
 exit_mm kernel/exit.c:581 [inline]
 do_exit+0x78a/0x2a30 kernel/exit.c:959
 do_group_exit+0xd5/0x2a0 kernel/exit.c:1112
 get_signal+0x1ec7/0x21e0 kernel/signal.c:3034
 arch_do_signal_or_restart+0x91/0x770 arch/x86/kernel/signal.c:337
 __exit_to_user_mode_loop kernel/entry/common.c:41 [inline]
 exit_to_user_mode_loop kernel/entry/common.c:75 [inline]
 __exit_to_user_mode_prepare include/linux/irq-entry-common.h:226 [inline]
 irqentry_exit_to_user_mode_prepare include/linux/irq-entry-common.h:270 [inline]
 irqentry_exit_to_user_mode include/linux/irq-entry-common.h:339 [inline]
 irqentry_exit+0x1f8/0x670 kernel/entry/common.c:196
 asm_exc_page_fault+0x26/0x30 arch/x86/include/asm/idtentry.h:618
RIP: 0033:0x1000
Code: Unable to access opcode bytes at 0xfd6.
RSP: 002b:000000000000000a EFLAGS: 00010282
RAX: 0000000000000002 RBX: 00007f926a015fa0 RCX: 00007f9269d9aeb9
RDX: 0000000000000000 RSI: 0000000000000002 RDI: 0000000020003b46
RBP: 00007f9269e08c1f R08: 0000000000000002 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007f926a016038 R14: 00007f926a015fa0 R15: 00007ffd4bfa8948
 </TASK>

Crashes (13):
Time | Kernel | Commit | Syzkaller | Config | Log | Report | Syz repro | C repro | VM info | Assets | Manager | Title
2026/01/27 13:52 | upstream | fcb70a56f4d8 | 43e1df1d | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-qemu-gce-upstream-auto | INFO: task hung in kvm_swap_active_memslots
2026/01/07 04:47 | upstream | f0b9d8eb98df | d1b870e1 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-qemu-gce-upstream-auto | INFO: task hung in kvm_swap_active_memslots
2026/01/03 04:08 | upstream | 9b0436804460 | d1b870e1 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-qemu-gce-upstream-auto | INFO: task hung in kvm_swap_active_memslots
2025/12/31 00:03 | upstream | dbf8fe85a16a | d1b870e1 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-qemu-gce-upstream-auto | INFO: task hung in kvm_swap_active_memslots
2025/12/25 04:32 | upstream | ccd1cdca5cd4 | d1b870e1 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-qemu-gce-upstream-auto | INFO: task hung in kvm_swap_active_memslots
2025/12/15 14:33 | upstream | 8f0b4cce4481 | d1b870e1 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-qemu-gce-upstream-auto | INFO: task hung in kvm_swap_active_memslots
2025/12/13 15:18 | upstream | 9551a26f17d9 | d1b870e1 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-qemu-gce-upstream-auto | INFO: task hung in kvm_swap_active_memslots
2025/12/07 20:58 | upstream | 37bb2e7217b0 | d1b870e1 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-qemu-gce-upstream-auto | INFO: task hung in kvm_swap_active_memslots
2025/11/29 01:25 | upstream | e538109ac71d | d1b870e1 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-qemu-gce-upstream-auto | INFO: task hung in kvm_swap_active_memslots
2025/11/28 19:51 | upstream | e538109ac71d | d1b870e1 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-qemu-gce-upstream-auto | INFO: task hung in kvm_swap_active_memslots
2025/11/19 13:20 | upstream | 8b690556d8fe | 82d7b894 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-qemu-gce-upstream-auto | INFO: task hung in kvm_swap_active_memslots
2025/11/18 03:20 | upstream | e7c375b18160 | ef766cd7 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-qemu-gce-upstream-auto | INFO: task hung in kvm_swap_active_memslots
2025/10/14 04:02 | upstream | 3a8660878839 | b6605ba8 | .config | console log | report | | | info | [disk image] [vmlinux] [kernel image] | ci-qemu-gce-upstream-auto | INFO: task hung in kvm_swap_active_memslots