syzbot

INFO: task hung in nfsd_umount

Status: upstream: reported on 2024/07/07 04:37
Subsystems: nfs
Reported-by: syzbot+b568ba42c85a332a88ee@syzkaller.appspotmail.com
First crash: 635d, last: 1h44m
Discussions (3)
Title                                            Replies (including bot)  Last reply
[syzbot] Monthly nfs report (Jul 2025)           0 (1)                    2025/07/04 12:38
[syzbot] Monthly nfs report (Jun 2025)           0 (1)                    2025/06/03 09:38
[syzbot] [nfs?] INFO: task hung in nfsd_umount   3 (4)                    2024/09/21 07:58

Sample crash report:
INFO: task syz-executor:5820 blocked for more than 143 seconds.
      Tainted: G             L      syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor    state:D stack:23848 pid:5820  tgid:5820  ppid:1      task_flags:0x400140 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5295 [inline]
 __schedule+0xfee/0x6020 kernel/sched/core.c:6907
 __schedule_loop kernel/sched/core.c:6989 [inline]
 schedule+0xdd/0x390 kernel/sched/core.c:7004
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:7061
 __mutex_lock_common kernel/locking/mutex.c:692 [inline]
 __mutex_lock+0xc9a/0x1b90 kernel/locking/mutex.c:776
 nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:575
 nfsd_umount+0x3b/0x60 fs/nfsd/nfsctl.c:1354
 deactivate_locked_super+0xc1/0x1b0 fs/super.c:476
 deactivate_super fs/super.c:509 [inline]
 deactivate_super+0xe7/0x110 fs/super.c:505
 cleanup_mnt+0x21f/0x450 fs/namespace.c:1312
 task_work_run+0x150/0x240 kernel/task_work.c:233
 resume_user_mode_work include/linux/resume_user_mode.h:50 [inline]
 __exit_to_user_mode_loop kernel/entry/common.c:67 [inline]
 exit_to_user_mode_loop+0x100/0x4a0 kernel/entry/common.c:98
 __exit_to_user_mode_prepare include/linux/irq-entry-common.h:226 [inline]
 syscall_exit_to_user_mode_prepare include/linux/irq-entry-common.h:256 [inline]
 syscall_exit_to_user_mode include/linux/entry-common.h:325 [inline]
 do_syscall_64+0x668/0xf80 arch/x86/entry/syscall_64.c:100
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f785419d1d7
RSP: 002b:00007ffd74f638a8 EFLAGS: 00000246 ORIG_RAX: 00000000000000a6
RAX: 0000000000000000 RBX: 00007f7854231c3b RCX: 00007f785419d1d7
RDX: 0000000000000004 RSI: 0000000000000009 RDI: 00007ffd74f649f0
RBP: 00007ffd74f649dc R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00007ffd74f649f0
R13: 00007f7854231c3b R14: 0000000000030648 R15: 00007ffd74f64a30
 </TASK>
INFO: task syz-executor:5821 blocked for more than 143 seconds.
      Tainted: G             L      syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor    state:D stack:24184 pid:5821  tgid:5821  ppid:1      task_flags:0x400140 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5295 [inline]
 __schedule+0xfee/0x6020 kernel/sched/core.c:6907
 __schedule_loop kernel/sched/core.c:6989 [inline]
 schedule+0xdd/0x390 kernel/sched/core.c:7004
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:7061
 __mutex_lock_common kernel/locking/mutex.c:692 [inline]
 __mutex_lock+0xc9a/0x1b90 kernel/locking/mutex.c:776
 nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:575
 nfsd_umount+0x3b/0x60 fs/nfsd/nfsctl.c:1354
 deactivate_locked_super+0xc1/0x1b0 fs/super.c:476
 deactivate_super fs/super.c:509 [inline]
 deactivate_super+0xe7/0x110 fs/super.c:505
 cleanup_mnt+0x21f/0x450 fs/namespace.c:1312
 task_work_run+0x150/0x240 kernel/task_work.c:233
 resume_user_mode_work include/linux/resume_user_mode.h:50 [inline]
 __exit_to_user_mode_loop kernel/entry/common.c:67 [inline]
 exit_to_user_mode_loop+0x100/0x4a0 kernel/entry/common.c:98
 __exit_to_user_mode_prepare include/linux/irq-entry-common.h:226 [inline]
 syscall_exit_to_user_mode_prepare include/linux/irq-entry-common.h:256 [inline]
 syscall_exit_to_user_mode include/linux/entry-common.h:325 [inline]
 do_syscall_64+0x668/0xf80 arch/x86/entry/syscall_64.c:100
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f43c479d1d7
RSP: 002b:00007fff794ae6b8 EFLAGS: 00000246 ORIG_RAX: 00000000000000a6
RAX: 0000000000000000 RBX: 00007f43c4831c3b RCX: 00007f43c479d1d7
RDX: 0000000000000004 RSI: 0000000000000009 RDI: 00007fff794af800
RBP: 00007fff794af7ec R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 00007fff794af800
R13: 00007f43c4831c3b R14: 000000000002f322 R15: 00007fff794af840
 </TASK>

Showing all locks held in the system:
1 lock held by khungtaskd/31:
 #0: ffffffff8e7e7460 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:312 [inline]
 #0: ffffffff8e7e7460 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:850 [inline]
 #0: ffffffff8e7e7460 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x3d/0x184 kernel/locking/lockdep.c:6775
2 locks held by syz-executor/5820:
 #0: ffff8880217cc0e0 (&type->s_umount_key#54){+.+.}-{4:4}, at: __super_lock fs/super.c:58 [inline]
 #0: ffff8880217cc0e0 (&type->s_umount_key#54){+.+.}-{4:4}, at: __super_lock_excl fs/super.c:73 [inline]
 #0: ffff8880217cc0e0 (&type->s_umount_key#54){+.+.}-{4:4}, at: deactivate_super fs/super.c:508 [inline]
 #0: ffff8880217cc0e0 (&type->s_umount_key#54){+.+.}-{4:4}, at: deactivate_super+0xdf/0x110 fs/super.c:505
 #1: ffffffff8ec53188 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:575
2 locks held by syz-executor/5821:
 #0: ffff88807b2140e0 (&type->s_umount_key#54){+.+.}-{4:4}, at: __super_lock fs/super.c:58 [inline]
 #0: ffff88807b2140e0 (&type->s_umount_key#54){+.+.}-{4:4}, at: __super_lock_excl fs/super.c:73 [inline]
 #0: ffff88807b2140e0 (&type->s_umount_key#54){+.+.}-{4:4}, at: deactivate_super fs/super.c:508 [inline]
 #0: ffff88807b2140e0 (&type->s_umount_key#54){+.+.}-{4:4}, at: deactivate_super+0xdf/0x110 fs/super.c:505
 #1: ffffffff8ec53188 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:575
4 locks held by kworker/u8:21/7865:
 #0: ffff88801c6a6948 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x11ae/0x1840 kernel/workqueue.c:3250
 #1: ffffc9000475fd08 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x927/0x1840 kernel/workqueue.c:3251
 #2: ffffffff905f10f0 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0xb8/0x920 net/core/net_namespace.c:675
 #3: ffffffff8e7f2f40 (rcu_state.barrier_mutex){+.+.}-{4:4}, at: rcu_barrier+0x48/0x6d0 kernel/rcu/tree.c:3828
2 locks held by syz.0.437/7909:
 #0: ffffffff906b5b30 (cb_lock){++++}-{4:4}, at: genl_rcv+0x19/0x40 net/netlink/genetlink.c:1218
 #1: ffffffff8ec53188 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_nl_threads_set_doit+0x718/0xc60 fs/nfsd/nfsctl.c:1597
2 locks held by getty/8621:
 #0: ffff888035c2b0a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x24/0x80 drivers/tty/tty_ldisc.c:243
 #1: ffffc900049232f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x419/0x1500 drivers/tty/n_tty.c:2211
2 locks held by syz.6.599/8767:
 #0: ffffffff906b5b30 (cb_lock){++++}-{4:4}, at: genl_rcv+0x19/0x40 net/netlink/genetlink.c:1218
 #1: ffffffff8ec53188 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_nl_threads_set_doit+0x718/0xc60 fs/nfsd/nfsctl.c:1597
2 locks held by syz.1.838/9884:
 #0: ffffffff906b5b30 (cb_lock){++++}-{4:4}, at: genl_rcv+0x19/0x40 net/netlink/genetlink.c:1218
 #1: ffff88804dfc86f0 (nlk_cb_mutex-GENERIC){+.+.}-{4:4}, at: __netlink_dump_start+0x150/0x990 net/netlink/af_netlink.c:2404
3 locks held by syz.1.838/9888:
 #0: ffffffff906b5b30 (cb_lock){++++}-{4:4}, at: genl_rcv+0x19/0x40 net/netlink/genetlink.c:1218
 #1: ffff88804dfc86f0 (nlk_cb_mutex-GENERIC){+.+.}-{4:4}, at: __netlink_dump_start+0x150/0x990 net/netlink/af_netlink.c:2404
 #2: ffffffff8ec53188 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_nl_rpc_status_get_dumpit+0xb7/0x1200 fs/nfsd/nfsctl.c:1484
2 locks held by syz.8.1101/11035:
 #0: ffffffff905f10f0 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x451/0x7c0 net/core/net_namespace.c:577
 #1: ffffffff8e7f3078 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock+0x19e/0x3c0 kernel/rcu/tree_exp.h:343
4 locks held by syz.5.1102/11034:
2 locks held by syz.4.1103/11040:
 #0: ffffffff905f10f0 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x451/0x7c0 net/core/net_namespace.c:577
 #1: ffffffff906099e8 (rtnl_mutex){+.+.}-{4:4}, at: ops_exit_rtnl_list net/core/net_namespace.c:173 [inline]
 #1: ffffffff906099e8 (rtnl_mutex){+.+.}-{4:4}, at: ops_undo_list+0x7ec/0xab0 net/core/net_namespace.c:248
2 locks held by syz.4.1103/11054:
 #0: ffffffff905f10f0 (pernet_ops_rwsem){++++}-{4:4}, at: copy_net_ns+0x451/0x7c0 net/core/net_namespace.c:577
 #1: ffffffff906099e8 (rtnl_mutex){+.+.}-{4:4}, at: wg_netns_pre_exit+0x1b/0x250 drivers/net/wireguard/device.c:419
2 locks held by syz.4.1103/11045:
2 locks held by syz.4.1103/11063:
1 lock held by syz.5.1108/11064:
 #0: ffffffff8e7f2f40 (rcu_state.barrier_mutex){+.+.}-{4:4}, at: rcu_barrier+0x48/0x6d0 kernel/rcu/tree.c:3828
1 lock held by syz.5.1108/11065:
 #0: ffffffff8e7f2f40 (rcu_state.barrier_mutex){+.+.}-{4:4}, at: rcu_barrier+0x48/0x6d0 kernel/rcu/tree.c:3828

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 31 Comm: khungtaskd Tainted: G             L      syzkaller #0 PREEMPT(full) 
Tainted: [L]=SOFTLOCKUP
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/24/2026
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x100/0x190 lib/dump_stack.c:120
 nmi_cpu_backtrace.cold+0x12d/0x151 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x1d7/0x230 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:161 [inline]
 __sys_info lib/sys_info.c:157 [inline]
 sys_info+0x141/0x190 lib/sys_info.c:165
 check_hung_uninterruptible_tasks kernel/hung_task.c:346 [inline]
 watchdog+0xcc3/0xfe0 kernel/hung_task.c:515
 kthread+0x370/0x450 kernel/kthread.c:467
 ret_from_fork+0x754/0xd80 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 5928 Comm: kworker/0:6 Tainted: G             L      syzkaller #0 PREEMPT(full) 
Tainted: [L]=SOFTLOCKUP
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 01/24/2026
Workqueue: events_power_efficient wg_ratelimiter_gc_entries
RIP: 0010:_raw_spin_lock+0x5/0x40 kernel/locking/spinlock.c:153
Code: 89 df 31 f6 5b 5d e9 aa ad e3 f9 cc cc cc cc cc cc cc cc cc cc 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 f3 0f 1e fa 53 <48> 89 fb bf 01 00 00 00 e8 6e 32 53 f6 ff 74 24 08 48 8d 7b 18 45
RSP: 0018:ffffc9000490fbf8 EFLAGS: 00000293
RAX: 0000000000000000 RBX: 0000000000002000 RCX: ffffffff8690f004
RDX: ffff88802eaf8000 RSI: ffffffff8690ed06 RDI: ffffffff8f9453e0
RBP: dffffc0000000000 R08: 0000000000000004 R09: 0000000000002000
R10: 0000000000001a8c R11: 0000000000000000 R12: 00000051271658cf
R13: ffffffff8f945240 R14: 0000000000000000 R15: 0000000000001a8c
FS:  0000000000000000(0000) GS:ffff888124392000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007fa8b9c0cf88 CR3: 0000000029de2000 CR4: 00000000003526f0
Call Trace:
 <TASK>
 spin_lock include/linux/spinlock.h:341 [inline]
 wg_ratelimiter_gc_entries+0x62/0x510 drivers/net/wireguard/ratelimiter.c:63
 process_one_work+0x9c2/0x1840 kernel/workqueue.c:3275
 process_scheduled_works kernel/workqueue.c:3358 [inline]
 worker_thread+0x5da/0xe40 kernel/workqueue.c:3439
 kthread+0x370/0x450 kernel/kthread.c:467
 ret_from_fork+0x754/0xd80 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
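
The traces above show the shape of the hang directly: the unmount side takes the superblock's s_umount rwsem in deactivate_super(), then blocks on nfsd_mutex in nfsd_shutdown_threads(), while other tasks in the lock dump (e.g. nfsd_nl_threads_set_doit, nfsd_nl_rpc_status_get_dumpit) hold or are acquiring nfsd_mutex. As a rough illustration only -- syzbot has no reproducer for this bug, and all names below are invented for the analogy -- this userspace C sketch recreates the pattern with two pthread mutexes standing in for s_umount and nfsd_mutex. It hangs by design, which is exactly the condition the hung task watchdog flags:

/*
 * Userspace analog of the blocking pattern in the traces above.
 * lock_a stands in for s_umount, lock_b for nfsd_mutex; the thread
 * names mirror the kernel paths, but nothing here is kernel code.
 * Build with: cc -pthread hang_sketch.c
 */
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER; /* s_umount analog */
static pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER; /* nfsd_mutex analog */

/* Stands in for the netlink path (nfsd_nl_threads_set_doit):
 * grabs the nfsd_mutex analog and then stalls while holding it. */
static void *netlink_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&lock_b);
	pause();                     /* stuck while holding the mutex */
	return NULL;
}

/* Stands in for deactivate_super() -> nfsd_umount() -> nfsd_shutdown_threads(). */
static void *umount_path(void *arg)
{
	(void)arg;
	pthread_mutex_lock(&lock_a); /* s_umount analog taken first */
	fprintf(stderr, "umount path now blocking on nfsd_mutex analog\n");
	pthread_mutex_lock(&lock_b); /* blocks forever: state:D in the kernel case */
	return NULL;
}

int main(void)
{
	pthread_t nl, um;
	pthread_create(&nl, NULL, netlink_path, NULL);
	sleep(1);                    /* let the netlink path win the race for lock_b */
	pthread_create(&um, NULL, umount_path, NULL);
	pthread_join(um, NULL);      /* never returns: the "hung task" */
	return 0;
}

In the real report the nfsd_mutex holder is itself presumably waiting on something else (the kworker stuck in rcu_barrier under pernet_ops_rwsem is a plausible suspect), but the task the watchdog flags is the one traced above: D state, s_umount held, parked in __mutex_lock on nfsd_mutex.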

Crashes (3600):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2026/02/13 12:48 upstream 7449f86bafcd 6a673c50 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/12 10:13 upstream 1e83ccd5921a 76a109e2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/12 06:35 upstream 1e83ccd5921a 76a109e2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/11 20:50 upstream 192c0159402e 75707236 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/11 06:45 upstream dc855b77719f 441e25b7 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/10 12:32 upstream 72c395024dac a076df6f .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/10 10:05 upstream 8a5203c630c6 4ab09a02 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/09 20:45 upstream 05f7e89ab973 df949cd9 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/09 18:47 upstream 05f7e89ab973 df949cd9 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/09 17:32 upstream 05f7e89ab973 df949cd9 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/09 15:43 upstream 05f7e89ab973 df949cd9 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/09 06:00 upstream e98f34af6116 4c131dc4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/08 23:52 upstream e98f34af6116 4c131dc4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/08 21:52 upstream e98f34af6116 4c131dc4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/08 19:30 upstream e98f34af6116 4c131dc4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/08 17:41 upstream e7aa57247700 4c131dc4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/08 16:40 upstream e7aa57247700 4c131dc4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/08 11:56 upstream e7aa57247700 4c131dc4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/08 09:24 upstream e7aa57247700 4c131dc4 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in nfsd_umount
2026/02/08 02:18 upstream e7aa57247700 4c131dc4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/08 00:56 upstream e7aa57247700 4c131dc4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/07 19:50 upstream e7aa57247700 4c131dc4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/07 17:52 upstream 2687c848e578 4c131dc4 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/07 15:17 upstream 2687c848e578 f20fc9f9 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/07 13:50 upstream 2687c848e578 f20fc9f9 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/07 01:39 upstream 2687c848e578 f20fc9f9 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2026/02/06 22:26 upstream b7ff7151e653 97745f52 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/06 20:39 upstream b7ff7151e653 97745f52 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/06 18:50 upstream b7ff7151e653 97745f52 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/06 16:59 upstream b7ff7151e653 97745f52 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/06 06:59 upstream 8fdb05de0e2d f03c4191 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/06 05:13 upstream 8fdb05de0e2d f03c4191 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/05 22:23 upstream 8fdb05de0e2d f03c4191 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/05 20:39 upstream f14faaf3a1fb 4936e85c .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/05 14:19 upstream f14faaf3a1fb 4936e85c .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/05 12:59 upstream f14faaf3a1fb 4936e85c .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/04 22:49 upstream 5fd0a1df5d05 ea10c935 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/04 10:32 upstream de0674d9bc69 42b01fab .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/04 06:49 upstream de0674d9bc69 42b01fab .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/03 20:24 upstream 6bd9ed02871f 6df4c87a .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/03 05:05 upstream dee65f79364c d78927dd .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/03 03:30 upstream dee65f79364c d78927dd .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/03 03:09 upstream dee65f79364c d78927dd .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/02 17:57 upstream 18f7fcd5e69a 018ebef2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/02 15:51 upstream 18f7fcd5e69a 018ebef2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/02 14:01 upstream 18f7fcd5e69a 018ebef2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/02 06:18 upstream 9f2693489ef8 6b8752f2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/01 16:37 upstream 162b42445b58 6b8752f2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/01 11:24 upstream 162b42445b58 6b8752f2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/02/01 09:57 upstream 162b42445b58 6b8752f2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/01/06 15:39 upstream 7f98ab9da046 d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in nfsd_umount
2024/07/06 12:12 upstream 1dd28064d416 bc4ebbb5 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2024/07/03 04:33 upstream e9d22f7a6655 1ecfa2d8 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2024/06/29 05:25 upstream 6c0483dbfe72 757f06b1 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2026/02/13 17:07 linux-next af98e93c5c39 1e62d198 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in nfsd_umount
2026/01/23 14:15 linux-next a0c666c25aee 3181850c .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-rust-kasan-gce INFO: task hung in nfsd_umount
2026/01/23 12:09 linux-next a0c666c25aee 3181850c .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in nfsd_umount