INFO: task syz.9.611:12724 blocked for more than 143 seconds.
      Not tainted 6.14.0-syzkaller-01103-g2df0c02dab82 #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz.9.611 state:D stack:24776 pid:12724 tgid:12723 ppid:11935 task_flags:0x400140 flags:0x00004004
Call Trace:
 context_switch kernel/sched/core.c:5367 [inline]
 __schedule+0x1ac3/0x5090 kernel/sched/core.c:6748
 __schedule_loop kernel/sched/core.c:6825 [inline]
 schedule+0x163/0x360 kernel/sched/core.c:6840
 schedule_preempt_disabled+0x13/0x30 kernel/sched/core.c:6897
 __mutex_lock_common kernel/locking/mutex.c:664 [inline]
 __mutex_lock+0x7fa/0x1000 kernel/locking/mutex.c:732
 open_dummy_log fs/jfs/jfs_logmgr.c:1191 [inline]
 lmLogOpen+0x1b2/0x1040 fs/jfs/jfs_logmgr.c:1066
 jfs_mount_rw+0xef/0x680 fs/jfs/jfs_mount.c:257
 jfs_fill_super+0x775/0xd90 fs/jfs/super.c:532
 get_tree_bdev_flags+0x490/0x5c0 fs/super.c:1636
 vfs_get_tree+0x90/0x2b0 fs/super.c:1759
 do_new_mount+0x2cf/0xb70 fs/namespace.c:3878
 do_mount fs/namespace.c:4218 [inline]
 __do_sys_mount fs/namespace.c:4429 [inline]
 __se_sys_mount+0x38c/0x400 fs/namespace.c:4406
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0xf3/0x230 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f87d098e90a
RSP: 002b:00007f87d17b2e68 EFLAGS: 00000246 ORIG_RAX: 00000000000000a5
RAX: ffffffffffffffda RBX: 00007f87d17b2ef0 RCX: 00007f87d098e90a
RDX: 0000200000000040 RSI: 0000200000000080 RDI: 00007f87d17b2eb0
RBP: 0000200000000040 R08: 00007f87d17b2ef0 R09: 0000000000210004
R10: 0000000000210004 R11: 0000000000000246 R12: 0000200000000080
R13: 00007f87d17b2eb0 R14: 00000000000062f7 R15: 00002000000003c0

Showing all locks held in the system:
2 locks held by kworker/u8:0/12:
1 lock held by ksoftirqd/1/23:
1 lock held by khungtaskd/31:
 #0: ffffffff8eb3a760 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:331 [inline]
 #0: ffffffff8eb3a760 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:841 [inline]
 #0: ffffffff8eb3a760 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x30/0x180 kernel/locking/lockdep.c:6761
2 locks held by kworker/u8:2/36:
 #0: ffff88801ac89148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88801ac89148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x990/0x18e0 kernel/workqueue.c:3319
 #1: ffffc90000ac7c60 ((reaper_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc90000ac7c60 ((reaper_work).work){+.+.}-{0:0}, at: process_scheduled_works+0x9cb/0x18e0 kernel/workqueue.c:3319
2 locks held by kworker/u8:3/55:
 #0: ffff88801ac89148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3213 [inline]
 #0: ffff88801ac89148 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0x990/0x18e0 kernel/workqueue.c:3319
 #1: ffffc90000bf7c60 ((work_completion)(&sub_info->work)){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3214 [inline]
 #1: ffffc90000bf7c60 ((work_completion)(&sub_info->work)){+.+.}-{0:0}, at: process_scheduled_works+0x9cb/0x18e0 kernel/workqueue.c:3319
2 locks held by kworker/u8:6/2957:
2 locks held by getty/5578:
 #0: ffff8880313fb0a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc90002fd62f0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x53d/0x16b0 drivers/tty/n_tty.c:2211
3 locks held by syz-executor/5810:
 #0: ffff8880b8739958 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x2a/0x140 kernel/sched/core.c:595
 #1: ffff8880b8723b08 (&per_cpu_ptr(group->pcpu, cpu)->seq){-.-.}-{0:0}, at: psi_task_switch+0x41f/0x7a0 kernel/sched/psi.c:987
 #2: ffffffff9a6a6230 (&obj_hash[i].lock){-.-.}-{2:2}, at: debug_object_activate+0x190/0x5c0 lib/debugobjects.c:818
1 lock held by syz.8.103/7117:
 #0: ffff88805cffa0e0 (&type->s_umount_key#71){++++}-{4:4}, at: __super_lock fs/super.c:58 [inline]
 #0: ffff88805cffa0e0 (&type->s_umount_key#71){++++}-{4:4}, at: super_lock+0x27c/0x400 fs/super.c:120
2 locks held by syz-executor/9844:
 #0: ffff88805cffa0e0 (&type->s_umount_key#71){++++}-{4:4}, at: __super_lock fs/super.c:56 [inline]
 #0: ffff88805cffa0e0 (&type->s_umount_key#71){++++}-{4:4}, at: __super_lock_excl fs/super.c:71 [inline]
 #0: ffff88805cffa0e0 (&type->s_umount_key#71){++++}-{4:4}, at: deactivate_super+0xb5/0xf0 fs/super.c:505
 #1: ffffffff8ef5b7a8 (jfs_log_mutex){+.+.}-{4:4}, at: lmLogClose+0xb2/0x530 fs/jfs/jfs_logmgr.c:1444
2 locks held by syz.9.611/12724:
 #0: ffff888064a6e0e0 (&type->s_umount_key#70/1){+.+.}-{4:4}, at: alloc_super+0x221/0x9d0 fs/super.c:344
 #1: ffffffff8ef5b7a8 (jfs_log_mutex){+.+.}-{4:4}, at: open_dummy_log fs/jfs/jfs_logmgr.c:1191 [inline]
 #1: ffffffff8ef5b7a8 (jfs_log_mutex){+.+.}-{4:4}, at: lmLogOpen+0x1b2/0x1040 fs/jfs/jfs_logmgr.c:1066
2 locks held by syz-executor/12975:
 #0: ffff88807e8000e0 (&type->s_umount_key#71){++++}-{4:4}, at: __super_lock fs/super.c:56 [inline]
 #0: ffff88807e8000e0 (&type->s_umount_key#71){++++}-{4:4}, at: __super_lock_excl fs/super.c:71 [inline]
 #0: ffff88807e8000e0 (&type->s_umount_key#71){++++}-{4:4}, at: deactivate_super+0xb5/0xf0 fs/super.c:505
 #1: ffffffff8ef5b7a8 (jfs_log_mutex){+.+.}-{4:4}, at: lmLogClose+0xb2/0x530 fs/jfs/jfs_logmgr.c:1444
1 lock held by syz.3.730/13844:
 #0: ffff88805cffa0e0 (&type->s_umount_key#71){++++}-{4:4}, at: __super_lock fs/super.c:58 [inline]
 #0: ffff88805cffa0e0 (&type->s_umount_key#71){++++}-{4:4}, at: super_lock+0x27c/0x400 fs/super.c:120
2 locks held by syz-executor/13853:
 #0: ffff888050f9a0e0 (&type->s_umount_key#71){++++}-{4:4}, at: __super_lock fs/super.c:56 [inline]
 #0: ffff888050f9a0e0 (&type->s_umount_key#71){++++}-{4:4}, at: __super_lock_excl fs/super.c:71 [inline]
 #0: ffff888050f9a0e0 (&type->s_umount_key#71){++++}-{4:4}, at: deactivate_super+0xb5/0xf0 fs/super.c:505
 #1: ffffffff8ef5b7a8 (jfs_log_mutex){+.+.}-{4:4}, at: lmLogClose+0xb2/0x530 fs/jfs/jfs_logmgr.c:1444
1 lock held by syz-executor/14500:
 #0: ffffffff8eb3fc78 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock kernel/rcu/tree_exp.h:336 [inline]
 #0: ffffffff8eb3fc78 (rcu_state.exp_mutex){+.+.}-{4:4}, at: synchronize_rcu_expedited+0x454/0x830 kernel/rcu/tree_exp.h:998
2 locks held by syz-executor/15063:
 #0: ffff88803319a0e0 (&type->s_umount_key#71){++++}-{4:4}, at: __super_lock fs/super.c:56 [inline]
 #0: ffff88803319a0e0 (&type->s_umount_key#71){++++}-{4:4}, at: __super_lock_excl fs/super.c:71 [inline]
 #0: ffff88803319a0e0 (&type->s_umount_key#71){++++}-{4:4}, at: deactivate_super+0xb5/0xf0 fs/super.c:505
 #1: ffffffff8ef5b7a8 (jfs_log_mutex){+.+.}-{4:4}, at: lmLogClose+0xb2/0x530 fs/jfs/jfs_logmgr.c:1444
1 lock held by syz-executor/15342:
3 locks held by syz.7.864/15498:
 #0: ffff88808b880920 (&c->sb_lock){+.+.}-{4:4}, at: bch2_fs_alloc fs/bcachefs/super.c:833 [inline]
 #0: ffff88808b880920 (&c->sb_lock){+.+.}-{4:4}, at: bch2_fs_open+0x1612/0x31f0 fs/bcachefs/super.c:2065
 #1: ffff88808b8849b0 (&c->mark_lock){++++}-{0:0}, at: bch2_sb_replicas_to_cpu_replicas+0x1e2/0x2f0 fs/bcachefs/replicas.c:600
 #2: ffffffff8eb3fc78 (rcu_state.exp_mutex){+.+.}-{4:4}, at: exp_funnel_lock kernel/rcu/tree_exp.h:336 [inline]
 #2: ffffffff8eb3fc78 (rcu_state.exp_mutex){+.+.}-{4:4}, at: synchronize_rcu_expedited+0x454/0x830 kernel/rcu/tree_exp.h:998
2 locks held by syz.3.865/15501:
1 lock held by sed/15531:
 #0: ffff8880b8639958 (&rq->__lock){-.-.}-{2:2}, at: raw_spin_rq_lock_nested+0x2a/0x140 kernel/sched/core.c:595

=============================================

NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 31 Comm: khungtaskd Not tainted 6.14.0-syzkaller-01103-g2df0c02dab82 #0 PREEMPT(full)
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2025
Call Trace:
 __dump_stack lib/dump_stack.c:94 [inline]
 dump_stack_lvl+0x241/0x360 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x4ab/0x4e0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x198/0x320 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:158 [inline]
 check_hung_uninterruptible_tasks kernel/hung_task.c:236 [inline]
 watchdog+0x1058/0x10a0 kernel/hung_task.c:399
 kthread+0x7a9/0x920 kernel/kthread.c:464
 ret_from_fork+0x4b/0x80 arch/x86/kernel/process.c:153
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
Sending NMI from CPU 0 to CPUs 1:
NMI backtrace for cpu 1 skipped: idling at native_safe_halt arch/x86/include/asm/irqflags.h:48 [inline]
NMI backtrace for cpu 1 skipped: idling at arch_safe_halt arch/x86/include/asm/irqflags.h:106 [inline]
NMI backtrace for cpu 1 skipped: idling at acpi_safe_halt+0x21/0x30 drivers/acpi/processor_idle.c:111