syzbot


INFO: task hung in nfsd_umount

Status: upstream: reported on 2024/07/07 04:37
Subsystems: nfs
Reported-by: syzbot+b568ba42c85a332a88ee@syzkaller.appspotmail.com
First crash: 687d, last: 2h25m
AI Jobs (1)
ID Workflow Result Correct Bug Created Started Finished Revision Error
a607d1e4-f56a-479f-bd5d-819025c7ef3e repro INFO: task hung in nfsd_umount 2026/03/07 03:10 2026/03/07 03:11 2026/03/07 03:20 31e9c887f7dc24e04b3ca70d0d54fc34141844b0
Discussions (3)
Title Replies (including bot) Last reply
[syzbot] Monthly nfs report (Jul 2025) 0 (1) 2025/07/04 12:38
[syzbot] Monthly nfs report (Jun 2025) 0 (1) 2025/06/03 09:38
[syzbot] [nfs?] INFO: task hung in nfsd_umount 3 (4) 2024/09/21 07:58

Sample crash report:
INFO: task syz-executor:28260 blocked for more than 143 seconds.
      Tainted: G     U       L      syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor    state:D stack:24392 pid:28260 tgid:28260 ppid:1      task_flags:0x400140 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5298 [inline]
 __schedule+0xfee/0x6120 kernel/sched/core.c:6911
 __schedule_loop kernel/sched/core.c:6993 [inline]
 schedule+0xdd/0x390 kernel/sched/core.c:7008
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:7065
 __mutex_lock_common kernel/locking/mutex.c:692 [inline]
 __mutex_lock+0xc9a/0x1b90 kernel/locking/mutex.c:776
 nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:576
 nfsd_umount+0x3b/0x60 fs/nfsd/nfsctl.c:1364
 deactivate_locked_super+0xc1/0x1b0 fs/super.c:476
 deactivate_super fs/super.c:509 [inline]
 deactivate_super+0xe7/0x110 fs/super.c:505
 cleanup_mnt+0x21f/0x450 fs/namespace.c:1312
 task_work_run+0x150/0x240 kernel/task_work.c:233
 resume_user_mode_work include/linux/resume_user_mode.h:50 [inline]
 __exit_to_user_mode_loop kernel/entry/common.c:67 [inline]
 exit_to_user_mode_loop+0x100/0x4a0 kernel/entry/common.c:98
 __exit_to_user_mode_prepare include/linux/irq-entry-common.h:226 [inline]
 syscall_exit_to_user_mode_prepare include/linux/irq-entry-common.h:256 [inline]
 syscall_exit_to_user_mode include/linux/entry-common.h:325 [inline]
 do_syscall_64+0x668/0xf80 arch/x86/entry/syscall_64.c:100
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f816479da57
RSP: 002b:00007ffc1bb4d5a8 EFLAGS: 00000246 ORIG_RAX: 00000000000000a6
RAX: 0000000000000000 RBX: 00007f8164832048 RCX: 00007f816479da57
RDX: 0000000000000000 RSI: 0000000000000009 RDI: 00007ffc1bb4d660
RBP: 00007ffc1bb4d660 R08: 00007ffc1bb4e660 R09: 00000000ffffffff
R10: 0000000000000000 R11: 0000000000000246 R12: 00007ffc1bb4e6f0
R13: 00007f8164832048 R14: 0000000000192ef7 R15: 00007ffc1bb4e730
 </TASK>
INFO: task syz.3.7302:2146 blocked for more than 143 seconds.
      Tainted: G     U       L      syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.3.7302      state:D stack:27464 pid:2146  tgid:2145  ppid:24941  task_flags:0x400140 flags:0x00080002
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5298 [inline]
 __schedule+0xfee/0x6120 kernel/sched/core.c:6911
 __schedule_loop kernel/sched/core.c:6993 [inline]
 schedule+0xdd/0x390 kernel/sched/core.c:7008
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:7065
 __mutex_lock_common kernel/locking/mutex.c:692 [inline]
 __mutex_lock+0xc9a/0x1b90 kernel/locking/mutex.c:776
 nfsd_nl_threads_set_doit+0x6c1/0xc00 fs/nfsd/nfsctl.c:1607
 genl_family_rcv_msg_doit+0x214/0x300 net/netlink/genetlink.c:1114
 genl_family_rcv_msg net/netlink/genetlink.c:1194 [inline]
 genl_rcv_msg+0x560/0x800 net/netlink/genetlink.c:1209
 netlink_rcv_skb+0x159/0x420 net/netlink/af_netlink.c:2550
 genl_rcv+0x28/0x40 net/netlink/genetlink.c:1218
 netlink_unicast_kernel net/netlink/af_netlink.c:1318 [inline]
 netlink_unicast+0x5aa/0x870 net/netlink/af_netlink.c:1344
 netlink_sendmsg+0x8b0/0xda0 net/netlink/af_netlink.c:1894
 sock_sendmsg_nosec net/socket.c:727 [inline]
 __sock_sendmsg net/socket.c:742 [inline]
 ____sys_sendmsg+0x9e1/0xb70 net/socket.c:2592
 ___sys_sendmsg+0x190/0x1e0 net/socket.c:2646
 __sys_sendmsg+0x170/0x220 net/socket.c:2678
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0x106/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7feadef9c819
RSP: 002b:00007feadd1f6028 EFLAGS: 00000246 ORIG_RAX: 000000000000002e
RAX: ffffffffffffffda RBX: 00007feadf215fa0 RCX: 00007feadef9c819
RDX: 000000000000c840 RSI: 0000200000000480 RDI: 0000000000000003
RBP: 00007feadf032c91 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007feadf216038 R14: 00007feadf215fa0 R15: 00007ffe575d5bd8
 </TASK>

Showing all locks held in the system:
1 lock held by khungtaskd/30:
 #0: ffffffff8e7e7760 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:312 [inline]
 #0: ffffffff8e7e7760 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:850 [inline]
 #0: ffffffff8e7e7760 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x3d/0x184 kernel/locking/lockdep.c:6775
1 lock held by udevd/5195:
 #0: ffff88801c6b5988 (&root->kernfs_rwsem){++++}-{4:4}, at: kernfs_dop_revalidate+0xa5/0x740 fs/kernfs/dir.c:1185
1 lock held by syz.0.971/10213:
4 locks held by kworker/u10:20/27641:
 #0: ffff88801c6b6948 ((wq_completion)netns){+.+.}-{0:0}, at: process_one_work+0x1310/0x19a0 kernel/workqueue.c:3251
 #1: ffffc9000489fd08 (net_cleanup_work){+.+.}-{0:0}, at: process_one_work+0x988/0x19a0 kernel/workqueue.c:3252
 #2: ffffffff905fe850 (pernet_ops_rwsem){++++}-{4:4}, at: cleanup_net+0xb8/0x920 net/core/net_namespace.c:675
 #3: ffff88801c6b5988 (&root->kernfs_rwsem){++++}-{4:4}, at: kernfs_remove_by_name_ns+0x3d/0xf0 fs/kernfs/dir.c:1717
2 locks held by kworker/u10:22/27643:
 #0: ffff88801e757148 ((wq_completion)iou_exit){+.+.}-{0:0}, at: process_one_work+0x1310/0x19a0 kernel/workqueue.c:3251
 #1: ffffc900048efd08 ((work_completion)(&ctx->exit_work)){+.+.}-{0:0}, at: process_one_work+0x988/0x19a0 kernel/workqueue.c:3252
2 locks held by kworker/u10:27/27648:
 #0: ffff88801e757148 ((wq_completion)iou_exit){+.+.}-{0:0}, at: process_one_work+0x1310/0x19a0 kernel/workqueue.c:3251
 #1: ffffc900049dfd08 ((work_completion)(&ctx->exit_work)){+.+.}-{0:0}, at: process_one_work+0x988/0x19a0 kernel/workqueue.c:3252
2 locks held by syz-executor/28260:
 #0: ffff888025aa40e0 (&type->s_umount_key#56){++++}-{4:4}, at: __super_lock fs/super.c:58 [inline]
 #0: ffff888025aa40e0 (&type->s_umount_key#56){++++}-{4:4}, at: __super_lock_excl fs/super.c:73 [inline]
 #0: ffff888025aa40e0 (&type->s_umount_key#56){++++}-{4:4}, at: deactivate_super fs/super.c:508 [inline]
 #0: ffff888025aa40e0 (&type->s_umount_key#56){++++}-{4:4}, at: deactivate_super+0xdf/0x110 fs/super.c:505
 #1: ffffffff8ec58e28 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:576
2 locks held by getty/32641:
 #0: ffff8880331bd0a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x24/0x80 drivers/tty/tty_ldisc.c:243
 #1: ffffc900045732f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x419/0x1500 drivers/tty/n_tty.c:2211
2 locks held by syz.9.7278/2038:
 #0: ffffffff906c33f0 (cb_lock){++++}-{4:4}, at: genl_rcv+0x19/0x40 net/netlink/genetlink.c:1217
 #1: ffffffff8ec58e28 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_nl_listener_set_doit+0xd5/0x1a80 fs/nfsd/nfsctl.c:1903
2 locks held by syz.3.7302/2146:
 #0: ffffffff906c33f0 (cb_lock){++++}-{4:4}, at: genl_rcv+0x19/0x40 net/netlink/genetlink.c:1217
 #1: ffffffff8ec58e28 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_nl_threads_set_doit+0x6c1/0xc00 fs/nfsd/nfsctl.c:1607
2 locks held by syz-executor/2190:
 #0: ffff88804dbde0e0 (&type->s_umount_key#56){++++}-{4:4}, at: __super_lock fs/super.c:58 [inline]
 #0: ffff88804dbde0e0 (&type->s_umount_key#56){++++}-{4:4}, at: __super_lock_excl fs/super.c:73 [inline]
 #0: ffff88804dbde0e0 (&type->s_umount_key#56){++++}-{4:4}, at: deactivate_super fs/super.c:508 [inline]
 #0: ffff88804dbde0e0 (&type->s_umount_key#56){++++}-{4:4}, at: deactivate_super+0xdf/0x110 fs/super.c:505
 #1: ffffffff8ec58e28 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:576
2 locks held by syz-executor/2303:
 #0: ffff888025cc60e0 (&type->s_umount_key#56){++++}-{4:4}, at: __super_lock fs/super.c:58 [inline]
 #0: ffff888025cc60e0 (&type->s_umount_key#56){++++}-{4:4}, at: __super_lock_excl fs/super.c:73 [inline]
 #0: ffff888025cc60e0 (&type->s_umount_key#56){++++}-{4:4}, at: deactivate_super fs/super.c:508 [inline]
 #0: ffff888025cc60e0 (&type->s_umount_key#56){++++}-{4:4}, at: deactivate_super+0xdf/0x110 fs/super.c:505
 #1: ffffffff8ec58e28 (nfsd_mutex){+.+.}-{4:4}, at: nfsd_shutdown_threads+0x5b/0xf0 fs/nfsd/nfssvc.c:576
2 locks held by syz.8.7357/2642:
2 locks held by syz.5.7387/3075:
 #0: ffffffff8f4f17a8 (tty_mutex){+.+.}-{4:4}, at: tty_open_by_driver drivers/tty/tty_io.c:2037 [inline]
 #0: ffffffff8f4f17a8 (tty_mutex){+.+.}-{4:4}, at: tty_open+0x539/0xfa0 drivers/tty/tty_io.c:2120
 #1: ffff88802993c1c0 (&tty->legacy_mutex){+.+.}-{4:4}, at: tty_init_dev.part.0+0x39/0x470 drivers/tty/tty_io.c:1406
2 locks held by syz.4.7397/3152:
 #0: ffff8880a08dce80 (&p->lock){+.+.}-{4:4}, at: seq_read_iter+0xe1/0x1270 fs/seq_file.c:183
 #1: ffffffff8e7d4248 (console_mutex){+.+.}-{4:4}, at: c_start+0x17/0x100 fs/proc/consoles.c:82
1 lock held by syz.7.7398/3164:
 #0: ffffffff8f4f17a8 (tty_mutex){+.+.}-{4:4}, at: tty_open_by_driver drivers/tty/tty_io.c:2037 [inline]
 #0: ffffffff8f4f17a8 (tty_mutex){+.+.}-{4:4}, at: tty_open+0x539/0xfa0 drivers/tty/tty_io.c:2120
1 lock held by syz.1.7399/3203:
 #0: ffffffff8f4f17a8 (tty_mutex){+.+.}-{4:4}, at: class_mutex_constructor include/linux/mutex.h:253 [inline]
 #0: ffffffff8f4f17a8 (tty_mutex){+.+.}-{4:4}, at: ptmx_open drivers/tty/pty.c:798 [inline]
 #0: ffffffff8f4f17a8 (tty_mutex){+.+.}-{4:4}, at: ptmx_open+0x150/0x3c0 drivers/tty/pty.c:765
1 lock held by syz.9.7404/3382:
 #0: ffffffff8f4f17a8 (tty_mutex){+.+.}-{4:4}, at: tty_open_by_driver drivers/tty/tty_io.c:2037 [inline]
 #0: ffffffff8f4f17a8 (tty_mutex){+.+.}-{4:4}, at: tty_open+0x539/0xfa0 drivers/tty/tty_io.c:2120
1 lock held by syz.3.7402/3510:
 #0: ffffffff8f4f17a8 (tty_mutex){+.+.}-{4:4}, at: tty_open_by_driver drivers/tty/tty_io.c:2037 [inline]
 #0: ffffffff8f4f17a8 (tty_mutex){+.+.}-{4:4}, at: tty_open+0x539/0xfa0 drivers/tty/tty_io.c:2120
1 lock held by syz.3.7402/3512:
 #0: ffffffff8f4f17a8 (tty_mutex){+.+.}-{4:4}, at: tty_open_by_driver drivers/tty/tty_io.c:2037 [inline]
 #0: ffffffff8f4f17a8 (tty_mutex){+.+.}-{4:4}, at: tty_open+0x539/0xfa0 drivers/tty/tty_io.c:2120
1 lock held by syz.3.7402/3515:
 #0: ffffffff8f4f17a8 (tty_mutex){+.+.}-{4:4}, at: tty_open_by_driver drivers/tty/tty_io.c:2037 [inline]
 #0: ffffffff8f4f17a8 (tty_mutex){+.+.}-{4:4}, at: tty_open+0x539/0xfa0 drivers/tty/tty_io.c:2120
1 lock held by syz.6.7410/3607:
 #0: ffffffff8f4f17a8 (tty_mutex){+.+.}-{4:4}, at: tty_open_by_driver drivers/tty/tty_io.c:2037 [inline]
 #0: ffffffff8f4f17a8 (tty_mutex){+.+.}-{4:4}, at: tty_open+0x539/0xfa0 drivers/tty/tty_io.c:2120
1 lock held by syz.2.7435/3821:
 #0: ffffffff8f4f17a8 (tty_mutex){+.+.}-{4:4}, at: tty_open_by_driver drivers/tty/tty_io.c:2037 [inline]
 #0: ffffffff8f4f17a8 (tty_mutex){+.+.}-{4:4}, at: tty_open+0x539/0xfa0 drivers/tty/tty_io.c:2120
1 lock held by syz.5.7421/3828:
 #0: ffffffff8f4f17a8 (tty_mutex){+.+.}-{4:4}, at: tty_open_by_driver drivers/tty/tty_io.c:2037 [inline]
 #0: ffffffff8f4f17a8 (tty_mutex){+.+.}-{4:4}, at: tty_open+0x539/0xfa0 drivers/tty/tty_io.c:2120
1 lock held by syz.5.7421/3829:
 #0: ffffffff8f4f17a8 (tty_mutex){+.+.}-{4:4}, at: tty_open_by_driver drivers/tty/tty_io.c:2037 [inline]
 #0: ffffffff8f4f17a8 (tty_mutex){+.+.}-{4:4}, at: tty_open+0x539/0xfa0 drivers/tty/tty_io.c:2120
1 lock held by syz.0.7438/3843:
1 lock held by syz.0.7446/3877:
 #0: ffffffff8f4f17a8 (tty_mutex){+.+.}-{4:4}, at: tty_open_by_driver drivers/tty/tty_io.c:2037 [inline]
 #0: ffffffff8f4f17a8 (tty_mutex){+.+.}-{4:4}, at: tty_open+0x539/0xfa0 drivers/tty/tty_io.c:2120
1 lock held by syz.8.7454/3914:
4 locks held by syz-executor/3919:
2 locks held by syz-executor/3923:

=============================================

NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 30 Comm: khungtaskd Tainted: G     U       L      syzkaller #0 PREEMPT(full) 
Tainted: [U]=USER, [L]=SOFTLOCKUP
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 03/18/2026
Call Trace:
 <TASK>
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x100/0x190 lib/dump_stack.c:120
 nmi_cpu_backtrace.cold+0x12d/0x151 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x1d7/0x230 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:161 [inline]
 __sys_info lib/sys_info.c:157 [inline]
 sys_info+0x141/0x190 lib/sys_info.c:165
 check_hung_uninterruptible_tasks kernel/hung_task.c:346 [inline]
 watchdog+0xd25/0x1050 kernel/hung_task.c:515
 kthread+0x370/0x450 kernel/kthread.c:436
 ret_from_fork+0x754/0xd80 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
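The "blocked for more than 143 seconds" threshold and the "echo 0 > ..." hint in the log come from the kernel's hung-task watchdog, which is tunable via standard sysctls. A brief config sketch (values shown are illustrative; requires root to change):

```shell
# Inspect the detection threshold the watchdog used for this report.
sysctl kernel.hung_task_timeout_secs

# Disable hung-task reporting entirely, as the log message suggests.
echo 0 > /proc/sys/kernel/hung_task_timeout_secs

# Or make a hung task fatal, which syzkaller-style fuzzing setups
# sometimes prefer so the hang produces a crash artifact.
sysctl -w kernel.hung_task_panic=1
```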

Crashes (3952):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets (help?) Manager Title
2026/04/06 14:16 upstream 591cd656a1bf 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/06 08:28 upstream 591cd656a1bf 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/06 06:01 upstream 1791c390149f 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/06 03:39 upstream 1791c390149f 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/06 01:55 upstream 1791c390149f 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/05 14:35 upstream 3aae9383f42f 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/04 20:15 upstream 7ca6d1cfec80 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/04 16:38 upstream 7ca6d1cfec80 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in nfsd_umount
2026/04/04 14:04 upstream 7ca6d1cfec80 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/04 13:00 upstream 631919fb12fe 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/04 05:34 upstream 631919fb12fe 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/03 18:05 upstream d8a9a4b11a13 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/03 16:37 upstream d8a9a4b11a13 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/03 15:25 upstream d8a9a4b11a13 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/03 10:14 upstream 5619b098e2fb 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/03 08:21 upstream 5619b098e2fb 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/03 02:17 upstream 5619b098e2fb 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/02 16:31 upstream 9147566d8016 91bc79b0 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/02 14:18 upstream 9147566d8016 8b15d4ae .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2026/04/02 12:53 upstream 9147566d8016 0cb124d5 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-selinux-root INFO: task hung in nfsd_umount
2026/04/02 08:39 upstream 9147566d8016 0cb124d5 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/02 06:57 upstream 9147566d8016 0cb124d5 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/02 04:59 upstream 9147566d8016 0cb124d5 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/02 04:12 upstream 9147566d8016 0cb124d5 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/02 02:22 upstream 9147566d8016 0cb124d5 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/01 21:19 upstream 9147566d8016 0cb124d5 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/04/01 04:40 upstream d0c3bcd5b897 ef441708 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/28 12:52 upstream 7df48e363130 ef441708 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/28 10:31 upstream 7df48e363130 ef441708 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/28 06:13 upstream 7df48e363130 ef441708 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/28 02:23 upstream 7df48e363130 ef441708 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/27 22:42 upstream 7df48e363130 ef441708 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/27 10:27 upstream 46b513250491 4b3d9a38 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/27 08:31 upstream 46b513250491 4b3d9a38 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/27 00:49 upstream 0138af2472df 4b3d9a38 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/26 09:38 upstream d2a43e7f89da c6143aac .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/26 07:53 upstream d2a43e7f89da c6143aac .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/26 05:10 upstream d2a43e7f89da c6143aac .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/26 00:57 upstream d2a43e7f89da c6143aac .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/25 23:17 upstream bbeb83d3182a 4367a094 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/25 22:08 upstream bbeb83d3182a 4367a094 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/25 19:22 upstream bbeb83d3182a 4367a094 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/25 10:40 upstream 24f9515de877 b4723e5f .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/25 09:32 upstream 24f9515de877 b4723e5f .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/25 07:30 upstream 24f9515de877 b4723e5f .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/24 23:24 upstream e3c33bc767b5 74e70d19 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/24 21:52 upstream e3c33bc767b5 74e70d19 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2026/03/14 17:38 upstream 1c9982b49613 ee8d34d6 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in nfsd_umount
2026/03/06 23:37 upstream 651690480a96 5cb44a80 .config console log report info [disk image] [vmlinux] [kernel image] ci-qemu-gce-upstream-auto INFO: task hung in nfsd_umount
2024/07/06 12:12 upstream 1dd28064d416 bc4ebbb5 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2024/07/03 04:33 upstream e9d22f7a6655 1ecfa2d8 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2024/06/29 05:25 upstream 6c0483dbfe72 757f06b1 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-smack-root INFO: task hung in nfsd_umount
2026/03/29 20:31 linux-next 3b058d1aeeef 356bdfc9 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-rust-kasan-gce INFO: task hung in nfsd_umount
2026/03/29 05:26 linux-next 3b058d1aeeef 356bdfc9 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-rust-kasan-gce INFO: task hung in nfsd_umount
2026/03/28 00:20 linux-next e77a5a5cfe43 74a13a23 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in nfsd_umount
2026/03/24 14:27 linux-next 09c0f7f1bcdb 74e70d19 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in nfsd_umount