syzbot


INFO: task hung in nfsd_umount

Status: upstream: reported on 2024/07/07 04:37
Subsystems: nfs
Reported-by: syzbot+b568ba42c85a332a88ee@syzkaller.appspotmail.com
First crash: 688d, last: 3h40m
AI Jobs (1)
ID Workflow Result Correct Bug Created Started Finished Revision Error
a607d1e4-f56a-479f-bd5d-819025c7ef3e repro INFO: task hung in nfsd_umount 2026/03/07 03:10 2026/03/07 03:11 2026/03/07 03:20 31e9c887f7dc24e04b3ca70d0d54fc34141844b0
Discussions (3)
Title Replies (including bot) Last reply
[syzbot] Monthly nfs report (Jul 2025) 0 (1) 2025/07/04 12:38
[syzbot] Monthly nfs report (Jun 2025) 0 (1) 2025/06/03 09:38
[syzbot] [nfs?] INFO: task hung in nfsd_umount 3 (4) 2024/09/21 07:58

Sample crash report:
INFO: task syz-executor:6821 blocked for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor    state:D stack:23880 pid:6821  tgid:6821  ppid:1      task_flags:0x400140 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5298 [inline]
 __schedule+0xfee/0x6120 kernel/sched/core.c:6911
 __schedule_loop kernel/sched/core.c:6993 [inline]
 schedule+0xdd/0x390 kernel/sched/core.c:7008
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:7065
 __mutex_lock_common kernel/locking/mutex.c:692 [inline]
 __mutex_lock+0xc9a/0x1b90 kernel/locking/mutex.c:776
 nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:576
 nfsd_umount+0x3b/0x60 fs/nfsd/nfsctl.c:1364
 deactivate_locked_super+0xc1/0x1b0 fs/super.c:476
 deactivate_super fs/super.c:509 [inline]
 deactivate_super+0xe7/0x110 fs/super.c:505
 cleanup_mnt+0x21f/0x450 fs/namespace.c:1312
 task_work_run+0x150/0x240 kernel/task_work.c:233
 resume_user_mode_work include/linux/resume_user_mode.h:50 [inline]
 __exit_to_user_mode_loop kernel/entry/common.c:67 [inline]
 exit_to_user_mode_loop+0x100/0x4a0 kernel/entry/common.c:98
 __exit_to_user_mode_prepare include/linux/irq-entry-common.h:226 [inline]
 syscall_exit_to_user_mode_prepare include/linux/irq-entry-common.h:256 [inline]
 syscall_exit_to_user_mode include/linux/entry-common.h:325 [inline]
 do_syscall_64+0x668/0xf80 arch/x86/entry/syscall_64.c:100
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fb92599da57
RSP: 002b:00007ffce6569888 EFLAGS: 00000246 ORIG_RAX: 00000000000000a6
RAX: 0000000000000000 RBX: 0000000000000000 RCX: 00007fb92599da57
RDX: 0000000000000000 RSI: 0000000000000009 RDI: 00007ffce6569940
RBP: 00007ffce6569940 R08: 00007ffce656a940 R09: 00000000ffffffff
R10: 0000000000000000 R11: 0000000000000246 R12: 00007ffce656a9d0
R13: 00007fb925a32048 R14: 0000000000029024 R15: 00007ffce656aa10
 </TASK>

Showing all locks held in the system:
1 lock held by khungtaskd/31:
 #0: ffffffff8e7e7760 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:312 [inline]
 #0: ffffffff8e7e7760 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:850 [inline]
 #0: ffffffff8e7e7760 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x3d/0x184 kernel/locking/lockdep.c:6775
4 locks held by kworker/u8:10/1166:
 #0: ffff88801c6b6948 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x1310/0x19a0 kernel/workqueue.c:3251
 #1: ffffc90004b6fd08 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x988/0x19a0 kernel/workqueue.c:3252
 #2: ffffffff905fe850 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0xb8/0x920 net/core/net_namespace.c:675
 #3: ffffffff8e7f3180 (rcu_state.barrier_mutex){+.+.}-{4:4}, at: rcu_barrier+0x48/0x6d0 kernel/rcu/tree.c:3828
2 locks held by getty/5589:
 #0: ffff8880356b30a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x24/0x80 drivers/tty/tty_ldisc.c:243
 #1: ffffc9000332b2f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x419/0x1500 drivers/tty/n_tty.c:2211
2 locks held by sshd-session/5810:
2 locks held by syz-executor/5825:
 #0: ffff888058db00e0 (&type->s_umount_key#53){++++}-{4:4}, at: __super_lock fs/super.c:58 [inline]
 #0: ffff888058db00e0 (&type->s_umount_key#53){++++}-{4:4}, at: __super_lock_excl fs/super.c:73 [inline]
 #0: ffff888058db00e0 (&type->s_umount_key#53){++++}-{4:4}, at: deactivate_super fs/super.c:508 [inline]
 #0: ffff888058db00e0 (&type->s_umount_key#53){++++}-{4:4}, at: deactivate_super+0xdf/0x110 fs/super.c:505
 #1: ffffffff8ec58e28 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:576
3 locks held by kworker/1:6/5916:
 #0: ffff88813fe63148 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work+0x1310/0x19a0 kernel/workqueue.c:3251
 #1: ffffc90004247d08 (deferred_process_work){+.+.}-{0:0}, at: process_one_work+0x988/0x19a0 kernel/workqueue.c:3252
 #2: ffffffff906170a8 (rtnl_mutex){+.+.}-{4:4}, at: switchdev_deferred_process_work+0xe/0x20 net/switchdev/switchdev.c:104
2 locks held by syz.3.151/6660:
 #0: ffffffff906c33f0 (cb_lock){++++}-{4:4}, at: genl_rcv+0x19/0x40 net/netlink/genetlink.c:1217
 #1: ffffffff8ec58e28 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_nl_threads_set_doit+0x6c1/0xc00 fs/nfsd/nfsctl.c:1607
2 locks held by syz-executor/6821:
 #0: ffff88805ab160e0 (&type->s_umount_key#53){++++}-{4:4}, at: __super_lock fs/super.c:58 [inline]
 #0: ffff88805ab160e0 (&type->s_umount_key#53){++++}-{4:4}, at: __super_lock_excl fs/super.c:73 [inline]
 #0: ffff88805ab160e0 (&type->s_umount_key#53){++++}-{4:4}, at: deactivate_super fs/super.c:508 [inline]
 #0: ffff88805ab160e0 (&type->s_umount_key#53){++++}-{4:4}, at: deactivate_super+0xdf/0x110 fs/super.c:505
 #1: ffffffff8ec58e28 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:576
2 locks held by syz-executor/6894:
 #0: ffff8880294d80e0 (&type->s_umount_key#53){++++}-{4:4}, at: __super_lock fs/super.c:58 [inline]
 #0: ffff8880294d80e0 (&type->s_umount_key#53){++++}-{4:4}, at: __super_lock_excl fs/super.c:73 [inline]
 #0: ffff8880294d80e0 (&type->s_umount_key#53){++++}-{4:4}, at: deactivate_super fs/super.c:508 [inline]
 #0: ffff8880294d80e0 (&type->s_umount_key#53){++++}-{4:4}, at: deactivate_super+0xdf/0x110 fs/super.c:505
 #1: ffffffff8ec58e28 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:576
2 locks held by syz-executor/7138:
 #0: ffff88802d5400e0 (&type->s_umount_key#53){++++}-{4:4}, at: __super_lock fs/super.c:58 [inline]
 #0: ffff88802d5400e0 (&type->s_umount_key#53){++++}-{4:4}, at: __super_lock_excl fs/super.c:73 [inline]
 #0: ffff88802d5400e0 (&type->s_umount_key#53){++++}-{4:4}, at: deactivate_super fs/super.c:508 [inline]
 #0: ffff88802d5400e0 (&type->s_umount_key#53){++++}-{4:4}, at: deactivate_super+0xdf/0x110 fs/super.c:505
 #1: ffffffff8ec58e28 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:576
2 locks held by syz-executor/7182:
 #0: ffff88802a47e0e0 (&type->s_umount_key#53){++++}-{4:4}, at: __super_lock fs/super.c:58 [inline]
 #0: ffff88802a47e0e0 (&type->s_umount_key#53){++++}-{4:4}, at: __super_lock_excl fs/super.c:73 [inline]
 #0: ffff88802a47e0e0 (&type->s_umount_key#53){++++}-{4:4}, at: deactivate_super fs/super.c:508 [inline]
 #0: ffff88802a47e0e0 (&type->s_umount_key#53){++++}-{4:4}, at: deactivate_super+0xdf/0x110 fs/super.c:505
 #1: ffffffff8ec58e28 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:576
2 locks held by syz-executor/7405:
 #0: ffff88806630c0e0 (&type->s_umount_key#53){++++}-{4:4}, at: __super_lock fs/super.c:58 [inline]
 #0: ffff88806630c0e0 (&type->s_umount_key#53){++++}-{4:4}, at: __super_lock_excl fs/super.c:73 [inline]
 #0: ffff88806630c0e0 (&type->s_umount_key#53){++++}-{4:4}, at: deactivate_super fs/super.c:508 [inline]
 #0: ffff88806630c0e0 (&type->s_umount_key#53){++++}-{4:4}, at: deactivate_super+0xdf/0x110 fs/super.c:505
 #1: ffffffff8ec58e28 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:576
2 locks held by syz-executor/7463:
 #0: ffff8880654900e0 (&type->s_umount_key#53){++++}-{4:4}, at: __super_lock fs/super.c:58 [inline]
 #0: ffff8880654900e0 (&type->s_umount_key#53){++++}-{4:4}, at: __super_lock_excl fs/super.c:73 [inline]
 #0: ffff8880654900e0 (&type->s_umount_key#53){++++}-{4:4}, at: deactivate_super fs/super.c:508 [inline]
 #0: ffff8880654900e0 (&type->s_umount_key#53){++++}-{4:4}, at: deactivate_super+0xdf/0x110 fs/super.c:505
 #1: ffffffff8ec58e28 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:576
2 locks held by syz-executor/7639:
 #0: ffff8880362840e0 (&type->s_umount_key#53){++++}-{4:4}, at: __super_lock fs/super.c:58 [inline]
 #0: ffff8880362840e0 (&type->s_umount_key#53){++++}-{4:4}, at: __super_lock_excl fs/super.c:73 [inline]
 #0: ffff8880362840e0 (&type->s_umount_key#53){++++}-{4:4}, at: deactivate_super fs/super.c:508 [inline]
 #0: ffff8880362840e0 (&type->s_umount_key#53){++++}-{4:4}, at: deactivate_super+0xdf/0x110 fs/super.c:505
 #1: ffffffff8ec58e28 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:576
2 locks held by syz-executor/7741:
 #0: ffff88803664e0e0 (&type->s_umount_key#53){++++}-{4:4}, at: __super_lock fs/super.c:58 [inline]
 #0: ffff88803664e0e0 (&type->s_umount_key#53){++++}-{4:4}, at: __super_lock_excl fs/super.c:73 [inline]
 #0: ffff88803664e0e0 (&type->s_umount_key#53){++++}-{4:4}, at: deactivate_super fs/super.c:508 [inline]
 #0: ffff88803664e0e0 (&type->s_umount_key#53){++++}-{4:4}, at: deactivate_super+0xdf/0x110 fs/super.c:505
 #1: ffffffff8ec58e28 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:576
2 locks held by syz.0.309/7835:
 #0: ffffffff906c33f0 (cb_lock){++++}-{4:4}, at: genl_rcv+0x19/0x40 net/netlink/genetlink.c:1217
 #1: ffffffff8ec58e28 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_nl_threads_set_doit+0x6c1/0xc00 fs/nfsd/nfsctl.c:1607
3 locks held by kworker/u8:29/8008:
 #0: ffff88813fea4148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work+0x1310/0x19a0 kernel/workqueue.c:3251
 #1: ffffc900052bfd08 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work+0x988/0x19a0 kernel/workqueue.c:3252
 #2: ffffffff906170a8 (rtnl_mutex){+.+.}-{4:4}, at: linkwatch_event+0x51/0xc0 net/core/link_watch.c:313
2 locks held by syz-executor/8187:
 #0: ffff88809253a0e0 (&type->s_umount_key#53){++++}-{4:4}, at: __super_lock fs/super.c:58 [inline]
 #0: ffff88809253a0e0 (&type->s_umount_key#53){++++}-{4:4}, at: __super_lock_excl fs/super.c:73 [inline]
 #0: ffff88809253a0e0 (&type->s_umount_key#53){++++}-{4:4}, at: deactivate_super fs/super.c:508 [inline]
 #0: ffff88809253a0e0 (&type->s_umount_key#53){++++}-{4:4}, at: deactivate_super+0xdf/0x110 fs/super.c:505
 #1: ffffffff8ec58e28 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:576
2 locks held by syz-executor/8567:
 #0: ffff888059ec80e0 (&type->s_umount_key#53){++++}-{4:4}, at: __super_lock fs/super.c:58 [inline]
 #0: ffff888059ec80e0 (&type->s_umount_key#53){++++}-{4:4}, at: __super_lock_excl fs/super.c:73 [inline]
 #0: ffff888059ec80e0 (&type->s_umount_key#53){++++}-{4:4}, at: deactivate_super fs/super.c:508 [inline]
 #0: ffff888059ec80e0 (&type->s_umount_key#53){++++}-{4:4}, at: deactivate_super+0xdf/0x110 fs/super.c:505
 #1: ffffffff8ec58e28 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:576
2 locks held by syz-executor/9334:
 #0: ffffffff8f9872a8 (&ops->srcu#2){.+.+}-{0:0}, at: srcu_lock_acquire include/linux/srcu.h:187 [inline]
 #0: ffffffff8f9872a8 (&ops->srcu#2){.+.+}-{0:0}, at: srcu_read_lock include/linux/srcu.h:294 [inline]
 #0: ffffffff8f9872a8 (&ops->srcu#2){.+.+}-{0:0}, at: rtnl_link_ops_get+0x113/0x2c0 net/core/rtnetlink.c:574
 #1: ffffffff906170a8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_lock net/core/rtnetlink.c:80 [inline]
 #1: ffffffff906170a8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_nets_lock net/core/rtnetlink.c:341 [inline]
 #1: ffffffff906170a8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_newlink+0x8bb/0x2380 net/core/rtnetlink.c:4093
2 locks held by syz.4.556/9349:
 #0: ffffffff905fe850 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x451/0x7c0 net/core/net_namespace.c:577
 #1: ffffffff906170a8 (rtnl_mutex){+.+.}-{4:4}, at: cfg802154_pernet_exit+0x17/0xe0 net/ieee802154/core.c:351
2 locks held by syz.7.557/9352:
 #0: ffffffff905fe850 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x451/0x7c0 net/core/net_namespace.c:577
 #1: ffffffff906170a8 (rtnl_mutex){+.+.}-{4:4}, at: caif_exit_net+0x60/0x3a0 net/caif/caif_dev.c:528
3 locks held by syz.7.557/9354:
 #0: ffffffff905fe850 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x451/0x7c0 net/core/net_namespace.c:577
 #1: ffffffff906170a8 (rtnl_mutex){+.+.}-{4:4}, at: ops_exit_rtnl_list net/core/net_namespace.c:173 [inline]
 #1: ffffffff906170a8 (rtnl_mutex){+.+.}-{4:4}, at: ops_undo_list+0x7ec/0xab0 net/core/net_namespace.c:248
 #2: ffffffff8e7f32b8 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock+0x27f/0x3c0 kernel/rcu/tree_exp.h:311
2 locks held by syz.6.559/9369:
 #0: ffffffff905fe850 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x451/0x7c0 net/core/net_namespace.c:577
 #1: ffffffff906170a8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #1: ffffffff906170a8 (rtnl_mutex){+.+.}-{4:4}, at: register_netdevice_notifier_net+0x23/0xb0 net/core/dev.c:2102

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 31 Comm: khungtaskd Not tainted syzkaller #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/18/2026
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x100/0x190 lib/dump_stack.c:120
 nmi_cpu_backtrace.cold+0x12d/0x151 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x1d7/0x230 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:161 [inline]
 __sys_info lib/sys_info.c:157 [inline]
 sys_info+0x141/0x190 lib/sys_info.c:165
 check_hung_uninterruptible_tasks kernel/hung_task.c:346 [inline]
 watchdog+0xd25/0x1050 kernel/hung_task.c:515
 kthread+0x370/0x450 kernel/kthread.c:436
 ret_from_fork+0x754/0xd80 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 1303 Comm: aoe_tx0 Not tainted syzkaller #0 PREEMPT(full) 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/18/2026
RIP: 0010:desc_read+0x2a5/0x380 kernel/printk/printk_ringbuffer.c:551
Code: ff ff ff ff 3f 48 21 eb 48 89 de e8 85 78 20 00 49 39 dc 0f 85 88 00 00 00 e8 57 7e 20 00 48 89 e8 48 c1 e8 3e 48 89 44 24 08 <e8> 46 7e 20 00 4d 85 ed 74 2d e8 3c 7e 20 00 be 08 00 00 00 4c 89
RSP: 0018:ffffc9000506ef10 EFLAGS: 00000a02
RAX: 0000000000000002 RBX: 00000000fffff28a RCX: ffffffff81e7c4db
RDX: ffff888029f45b80 RSI: ffffffff81e7c4e9 RDI: ffff888029f45b80
RBP: 80000000fffff28a R08: 0000000000000006 R09: 00000000fffff28a
R10: 00000000fffff28a R11: 000000000000e2d8 R12: 00000000fffff28a
R13: ffffc9000506f0c8 R14: 0000000000000000 R15: ffffffff8e759950
FS:  0000000000000000(0000) GS:ffff888124340000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 000055a5bcae9ee8 CR3: 0000000076fce000 CR4: 00000000003526f0
Call Trace:
 <TASK>
 desc_read_finalized_seq+0x89/0x1d0 kernel/printk/printk_ringbuffer.c:1931
 prb_read kernel/printk/printk_ringbuffer.c:1999 [inline]
 _prb_read_valid+0x4aa/0x880 kernel/printk/printk_ringbuffer.c:2170
 prb_read_valid+0x78/0xa0 kernel/printk/printk_ringbuffer.c:2242
 printk_get_next_message+0x15b/0x6c0 kernel/printk/printk.c:3052
 console_emit_next_record kernel/printk/printk.c:3137 [inline]
 console_flush_one_record+0x67c/0xe50 kernel/printk/printk.c:3269
 console_flush_all kernel/printk/printk.c:3343 [inline]
 __console_flush_and_unlock kernel/printk/printk.c:3373 [inline]
 console_unlock+0x103/0x260 kernel/printk/printk.c:3413
 vprintk_emit+0x407/0x6b0 kernel/printk/printk.c:2479
 dev_vprintk_emit+0x394/0x3e0 drivers/base/core.c:4915
 dev_printk_emit+0xd2/0x10d drivers/base/core.c:4926
 __netdev_printk+0x1d1/0x290 net/core/dev.c:12951
 netdev_warn+0xef/0x127 net/core/dev.c:13004
 ieee802154_subif_start_xmit.cold+0x17/0x2b net/mac802154/tx.c:232
 __netdev_start_xmit include/linux/netdevice.h:5325 [inline]
 netdev_start_xmit include/linux/netdevice.h:5334 [inline]
 xmit_one net/core/dev.c:3888 [inline]
 dev_hard_start_xmit+0x121/0x7d0 net/core/dev.c:3904
 sch_direct_xmit+0x1b2/0xc60 net/sched/sch_generic.c:347
 __dev_xmit_skb net/core/dev.c:4203 [inline]
 __dev_queue_xmit+0x2404/0x4800 net/core/dev.c:4819
 dev_queue_xmit include/linux/netdevice.h:3385 [inline]
 tx+0xc4/0x130 drivers/block/aoe/aoenet.c:62
 kthread+0x1d8/0x3c0 drivers/block/aoe/aoecmd.c:1241
 kthread+0x370/0x450 kernel/kthread.c:436
 ret_from_fork+0x754/0xd80 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
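The report above shows the classic hung-task pattern: one task (nfsd_nl_threads_set_doit, pid 6660/7835) holds nfsd_mutex while doing slow work, and every umounting task (nfsd_umount → nfsd_shutdown_threads) blocks on that same mutex in uninterruptible sleep until the hung-task watchdog fires. The shape of the contention can be sketched as a minimal userspace Python analogy — this is NOT kernel code, and the function names below merely mirror the kernel symbols in the trace for readability:

```python
import threading
import time

# Userspace analogy of the hang: one thread holds a long-held mutex
# (standing in for nfsd_mutex) while several "umount" threads block
# waiting for it, just like the syz-executor tasks in the lock dump.

nfsd_mutex = threading.Lock()   # stands in for fs/nfsd's global nfsd_mutex
blocked = []                    # umount threads currently waiting

def threads_set_doit(hold_seconds):
    # Analogy for nfsd_nl_threads_set_doit(): takes nfsd_mutex and
    # keeps it for a long time (in the kernel, e.g. behind rcu_barrier()).
    with nfsd_mutex:
        time.sleep(hold_seconds)

def nfsd_umount(task_id):
    # Analogy for nfsd_umount() -> nfsd_shutdown_threads(): must take
    # nfsd_mutex before tearing down the server; blocks while it is held.
    blocked.append(task_id)
    with nfsd_mutex:
        blocked.remove(task_id)

holder = threading.Thread(target=threads_set_doit, args=(0.5,))
holder.start()
time.sleep(0.1)                 # let the holder acquire the mutex first

umounts = [threading.Thread(target=nfsd_umount, args=(i,)) for i in range(3)]
for t in umounts:
    t.start()
time.sleep(0.1)
waiting_now = len(blocked)      # all three umounts are stuck here, as in
                                # the "2 locks held by syz-executor/..." dump

holder.join()
for t in umounts:
    t.join()
print(waiting_now)              # 3
```

In the kernel the holder never (or only after minutes) releases nfsd_mutex, so the blocked tasks exceed hung_task_timeout_secs and khungtaskd emits the report; in this toy version the holder releases after 0.5 s and everyone proceeds.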

Crashes (3959):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2026/04/07 17:58 upstream bfe62a454542 628666c6 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/07 09:41 upstream bfe62a454542 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in nfsd_umount
2026/04/07 06:55 upstream bfe62a454542 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/07 02:50 upstream bfe62a454542 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/07 00:53 upstream bfe62a454542 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/06 14:16 upstream 591cd656a1bf 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/06 08:28 upstream 591cd656a1bf 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/06 06:01 upstream 1791c390149f 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/06 03:39 upstream 1791c390149f 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/06 01:55 upstream 1791c390149f 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/05 14:35 upstream 3aae9383f42f 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/04 20:15 upstream 7ca6d1cfec80 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/04 16:38 upstream 7ca6d1cfec80 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in nfsd_umount
2026/04/04 14:04 upstream 7ca6d1cfec80 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/04 13:00 upstream 631919fb12fe 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/04 05:34 upstream 631919fb12fe 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/03 18:05 upstream d8a9a4b11a13 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/03 16:37 upstream d8a9a4b11a13 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/03 15:25 upstream d8a9a4b11a13 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/03 10:14 upstream 5619b098e2fb 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/03 08:21 upstream 5619b098e2fb 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/03 02:17 upstream 5619b098e2fb 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/02 16:31 upstream 9147566d8016 91bc79b0 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/02 14:18 upstream 9147566d8016 8b15d4ae .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2026/04/02 12:53 upstream 9147566d8016 0cb124d5 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in nfsd_umount
2026/04/02 08:39 upstream 9147566d8016 0cb124d5 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/02 06:57 upstream 9147566d8016 0cb124d5 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/02 04:59 upstream 9147566d8016 0cb124d5 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/02 04:12 upstream 9147566d8016 0cb124d5 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/02 02:22 upstream 9147566d8016 0cb124d5 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/01 21:19 upstream 9147566d8016 0cb124d5 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/01 04:40 upstream d0c3bcd5b897 ef441708 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/28 12:52 upstream 7df48e363130 ef441708 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/28 10:31 upstream 7df48e363130 ef441708 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/28 06:13 upstream 7df48e363130 ef441708 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/28 02:23 upstream 7df48e363130 ef441708 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/27 22:42 upstream 7df48e363130 ef441708 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/27 10:27 upstream 46b513250491 4b3d9a38 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/27 08:31 upstream 46b513250491 4b3d9a38 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/27 00:49 upstream 0138af2472df 4b3d9a38 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/26 09:38 upstream d2a43e7f89da c6143aac .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/26 07:53 upstream d2a43e7f89da c6143aac .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/26 05:10 upstream d2a43e7f89da c6143aac .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/26 00:57 upstream d2a43e7f89da c6143aac .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/25 23:17 upstream bbeb83d3182a 4367a094 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/25 22:08 upstream bbeb83d3182a 4367a094 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/25 19:22 upstream bbeb83d3182a 4367a094 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/25 10:40 upstream 24f9515de877 b4723e5f .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/25 09:32 upstream 24f9515de877 b4723e5f .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/25 07:30 upstream 24f9515de877 b4723e5f .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/24 23:24 upstream e3c33bc767b5 74e70d19 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/24 21:52 upstream e3c33bc767b5 74e70d19 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/14 17:38 upstream 1c9982b49613 ee8d34d6 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in nfsd_umount
2026/03/06 23:37 upstream 651690480a96 5cb44a80 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/07/06 12:12 upstream 1dd28064d416 bc4ebbb5 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2024/07/03 04:33 upstream e9d22f7a6655 1ecfa2d8 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2024/06/29 05:25 upstream 6c0483dbfe72 757f06b1 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2026/03/29 20:31 linux-next 3b058d1aeeef 356bdfc9 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-rust-kasan-gce INFO: task hung in nfsd_umount
2026/03/29 05:26 linux-next 3b058d1aeeef 356bdfc9 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-rust-kasan-gce INFO: task hung in nfsd_umount
2026/03/28 00:20 linux-next e77a5a5cfe43 74a13a23 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in nfsd_umount
2026/03/24 14:27 linux-next 09c0f7f1bcdb 74e70d19 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in nfsd_umount
* Struck through repros no longer work on HEAD.