syzbot


INFO: task hung in jfs_flush_journal (5)

Status: upstream: reported on 2026/05/01 20:24
Subsystems: jfs
Reported-by: syzbot+139671f0b42887d37af9@syzkaller.appspotmail.com
First crash: 214d, last: 7h27m
Discussions (1)
Title Replies (including bot) Last reply
[syzbot] [jfs?] INFO: task hung in jfs_flush_journal (5) 0 (1) 2026/05/01 20:24
Similar bugs (8)
Kernel Title Rank Repro Cause bisect Fix bisect Count Last Reported Patched Status
upstream INFO: task hung in jfs_flush_journal jfs 1 1 1296d 1296d 0/29 auto-obsoleted due to no activity on 2023/01/17 08:27
upstream INFO: task hung in jfs_flush_journal (3) jfs 1 4 1062d 1107d 0/29 auto-obsoleted due to no activity on 2023/09/08 02:21
upstream INFO: task hung in jfs_flush_journal (2) jfs 1 1 1197d 1197d 0/29 auto-obsoleted due to no activity on 2023/04/25 22:54
linux-6.1 INFO: task hung in jfs_flush_journal 1 1 630d 630d 0/3 auto-obsoleted due to no activity on 2024/11/22 04:48
upstream INFO: task hung in jfs_flush_journal (4) jfs 1 45 319d 594d 0/29 auto-obsoleted due to no activity on 2025/09/29 12:32
linux-4.19 INFO: task hung in jfs_flush_journal jfs 1 1 1208d 1208d 0/1 upstream: reported on 2023/01/14 13:39
linux-6.1 INFO: task hung in jfs_flush_journal (2) 1 1 329d 329d 0/3 auto-obsoleted due to no activity on 2025/09/19 18:53
linux-5.15 INFO: task hung in jfs_flush_journal 1 1 1076d 1076d 0/3 auto-obsoleted due to no activity on 2023/09/03 18:59

Sample crash report:
INFO: task jfsCommit:127 blocked in I/O wait for more than 143 seconds.
      Not tainted syzkaller #0
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:jfsCommit       state:D stack:26248 pid:127   tgid:127   ppid:2      task_flags:0x200040 flags:0x00080000
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5388 [inline]
 __schedule+0x1681/0x54c0 kernel/sched/core.c:7189
 __schedule_loop kernel/sched/core.c:7268 [inline]
 schedule+0x164/0x360 kernel/sched/core.c:7283
 io_schedule+0x80/0xe0 kernel/sched/core.c:8110
 __lock_metapage fs/jfs/jfs_metapage.c:52 [inline]
 lock_metapage+0x1ff/0x400 fs/jfs/jfs_metapage.c:66
 __get_metapage+0x49a/0xe20 fs/jfs/jfs_metapage.c:749
 diIAGRead+0xce/0x140 fs/jfs/jfs_imap.c:2672
 diFree+0x9dd/0x2ca0 fs/jfs/jfs_imap.c:959
 jfs_evict_inode+0x331/0x440 fs/jfs/inode.c:162
 evict+0x61e/0xb10 fs/inode.c:841
 txLazyCommit fs/jfs/jfs_txnmgr.c:2666 [inline]
 jfs_lazycommit+0x3ef/0xa10 fs/jfs/jfs_txnmgr.c:2735
 kthread+0x388/0x470 kernel/kthread.c:436
 ret_from_fork+0x514/0xb70 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
INFO: task syz-executor:7427 blocked for more than 143 seconds.
      Not tainted syzkaller #0
      Blocked by coredump.
"echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
task:syz-executor    state:D stack:22496 pid:7427  tgid:7427  ppid:1      task_flags:0x40014c flags:0x00080001
Call Trace:
 <TASK>
 context_switch kernel/sched/core.c:5388 [inline]
 __schedule+0x1681/0x54c0 kernel/sched/core.c:7189
 __schedule_loop kernel/sched/core.c:7268 [inline]
 schedule+0x164/0x360 kernel/sched/core.c:7283
 jfs_flush_journal+0x721/0xf50 fs/jfs/jfs_logmgr.c:1561
 jfs_sync_fs+0x7d/0xa0 fs/jfs/super.c:649
 sync_filesystem+0x1ce/0x250 fs/sync.c:66
 generic_shutdown_super+0x77/0x2d0 fs/super.c:625
 kill_block_super+0x44/0x90 fs/super.c:1725
 deactivate_locked_super+0xbc/0x130 fs/super.c:476
 cleanup_mnt+0x437/0x4d0 fs/namespace.c:1312
 task_work_run+0x1d9/0x270 kernel/task_work.c:233
 exit_task_work include/linux/task_work.h:40 [inline]
 do_exit+0x70f/0x22c0 kernel/exit.c:975
 do_group_exit+0x21b/0x2d0 kernel/exit.c:1117
 __do_sys_exit_group kernel/exit.c:1128 [inline]
 __se_sys_exit_group kernel/exit.c:1126 [inline]
 __x64_sys_exit_group+0x3f/0x40 kernel/exit.c:1126
 x64_sys_call+0x221a/0x2240 arch/x86/include/generated/asm/syscalls_64.h:232
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0x15f/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f04a0f8cdd9
RSP: 002b:00007ffef839b2e8 EFLAGS: 00000246 ORIG_RAX: 00000000000000e7
RAX: ffffffffffffffda RBX: 00007f04a1022145 RCX: 00007f04a0f8cdd9
RDX: 0000000000000000 RSI: 0000000000000000 RDI: 0000000000000001
RBP: 0000000000000002 R08: 0000000000000000 R09: 00007f04a1022120
R10: 0000000000000000 R11: 0000000000000246 R12: 00007ffef839c5a0
R13: 00007f04a1022120 R14: 000000000005756b R15: 00007ffef839e760
 </TASK>

Showing all locks held in the system:
4 locks held by pr/legacy/17:
1 lock held by khungtaskd/37:
 #0: ffffffff8dfc8140 (rcu_read_lock){....}-{1:3}, at: rcu_lock_acquire include/linux/rcupdate.h:300 [inline]
 #0: ffffffff8dfc8140 (rcu_read_lock){....}-{1:3}, at: rcu_read_lock include/linux/rcupdate.h:838 [inline]
 #0: ffffffff8dfc8140 (rcu_read_lock){....}-{1:3}, at: debug_show_all_locks+0x2e/0x180 kernel/locking/lockdep.c:6775
3 locks held by kworker/u8:2/42:
 #0: ffff88801a074138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3277 [inline]
 #0: ffff88801a074138 ((wq_completion)events_unbound){+.+.}-{0:0}, at: process_scheduled_works+0xa35/0x1860 kernel/workqueue.c:3385
 #1: ffffc90000b47c40 ((linkwatch_work).work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3278 [inline]
 #1: ffffc90000b47c40 ((linkwatch_work).work){+.+.}-{0:0}, at: process_scheduled_works+0xa70/0x1860 kernel/workqueue.c:3385
 #2: ffffffff8f356af8 (rtnl_mutex){+.+.}-{4:4}, at: linkwatch_event+0xe/0x60 net/core/link_watch.c:313
2 locks held by jfsCommit/127:
 #0: ffff88802bb00900 (&(imap->im_aglock[index])){+.+.}-{4:4}, at: diFree+0x2e8/0x2ca0 fs/jfs/jfs_imap.c:889
 #1: ffff88803d1ceba0 (&jfs_ip->rdwrlock/1){.+.+}-{4:4}, at: diFree+0x306/0x2ca0 fs/jfs/jfs_imap.c:894
5 locks held by kworker/u8:8/169:
4 locks held by kworker/u8:12/1186:
3 locks held by kworker/u8:16/3432:
1 lock held by udevd/4964:
1 lock held by dhcpcd/5259:
 #0: ffffffff8f356af8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff8f356af8 (rtnl_mutex){+.+.}-{4:4}, at: devinet_ioctl+0x32b/0x1b30 net/ipv4/devinet.c:1120
2 locks held by getty/5347:
 #0: ffff8880370a30a0 (&tty->ldisc_sem){++++}-{0:0}, at: tty_ldisc_ref_wait+0x25/0x70 drivers/tty/tty_ldisc.c:243
 #1: ffffc90003cbe2e0 (&ldata->atomic_read_lock){+.+.}-{4:4}, at: n_tty_read+0x462/0x13a0 drivers/tty/n_tty.c:2211
3 locks held by kworker/1:7/6761:
 #0: ffff88801a037938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3277 [inline]
 #0: ffff88801a037938 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0xa35/0x1860 kernel/workqueue.c:3385
 #1: ffffc90003c07c40 (rx_mode_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3278 [inline]
 #1: ffffc90003c07c40 (rx_mode_work){+.+.}-{0:0}, at: process_scheduled_works+0xa70/0x1860 kernel/workqueue.c:3385
 #2: ffffffff8f356af8 (rtnl_mutex){+.+.}-{4:4}, at: netdev_rx_mode_work+0x1c/0x450 net/core/dev_addr_lists.c:1312
1 lock held by syz-executor/7427:
 #0: ffff88802de020d0 (&type->s_umount_key#77){+.+.}-{4:4}, at: __super_lock fs/super.c:58 [inline]
 #0: ffff88802de020d0 (&type->s_umount_key#77){+.+.}-{4:4}, at: __super_lock_excl fs/super.c:73 [inline]
 #0: ffff88802de020d0 (&type->s_umount_key#77){+.+.}-{4:4}, at: deactivate_super+0xa9/0xe0 fs/super.c:508
3 locks held by kworker/0:12/8298:
 #0: ffff88801a037938 ((wq_completion)events){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3277 [inline]
 #0: ffff88801a037938 ((wq_completion)events){+.+.}-{0:0}, at: process_scheduled_works+0xa35/0x1860 kernel/workqueue.c:3385
 #1: ffffc900048b7c40 (deferred_process_work){+.+.}-{0:0}, at: process_one_work kernel/workqueue.c:3278 [inline]
 #1: ffffc900048b7c40 (deferred_process_work){+.+.}-{0:0}, at: process_scheduled_works+0xa70/0x1860 kernel/workqueue.c:3385
 #2: ffffffff8f356af8 (rtnl_mutex){+.+.}-{4:4}, at: switchdev_deferred_process_work+0xe/0x20 net/switchdev/switchdev.c:104
2 locks held by syz-executor/9655:
1 lock held by syz-executor/10494:
 #0: ffffffff8f356af8 (rtnl_mutex){+.+.}-{4:4}, at: rtnl_net_lock include/linux/rtnetlink.h:130 [inline]
 #0: ffffffff8f356af8 (rtnl_mutex){+.+.}-{4:4}, at: inet_rtm_newaddr+0x404/0x1ad0 net/ipv4/devinet.c:978
4 locks held by syz.6.605/10505:

=============================================

NMI backtrace for cpu 1
CPU: 1 UID: 0 PID: 37 Comm: khungtaskd Not tainted syzkaller #0 PREEMPT_{RT,(full)} 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/18/2026
Call Trace:
 <TASK>
 dump_stack_lvl+0xe8/0x150 lib/dump_stack.c:120
 nmi_cpu_backtrace+0x274/0x2d0 lib/nmi_backtrace.c:113
 nmi_trigger_cpumask_backtrace+0x17a/0x300 lib/nmi_backtrace.c:62
 trigger_all_cpu_backtrace include/linux/nmi.h:162 [inline]
 __sys_info lib/sys_info.c:157 [inline]
 sys_info+0x135/0x170 lib/sys_info.c:165
 check_hung_uninterruptible_tasks kernel/hung_task.c:353 [inline]
 watchdog+0xfd3/0x1030 kernel/hung_task.c:561
 kthread+0x388/0x470 kernel/kthread.c:436
 ret_from_fork+0x514/0xb70 arch/x86/kernel/process.c:158
 ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245
 </TASK>
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0
CPU: 0 UID: 0 PID: 10505 Comm: syz.6.605 Not tainted syzkaller #0 PREEMPT_{RT,(full)} 
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/18/2026
RIP: 0010:__this_cpu_preempt_check+0xe/0x20 lib/smp_processor_id.c:64
Code: aa 66 66 2e 0f 1f 84 00 00 00 00 00 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 90 f3 0f 1e fa 48 89 fe 48 c7 c7 e0 4d a7 8b <e9> fd fe ff ff cc cc cc cc cc cc cc cc cc cc cc cc cc 90 90 90 90
RSP: 0000:ffffc900056b7738 EFLAGS: 00000046
RAX: 0000000000000003 RBX: 0000000000000202 RCX: 0000000000000000
RDX: 00000000c1880467 RSI: ffffffff8d86225e RDI: ffffffff8ba74de0
RBP: ffff8880311f49c8 R08: ffffffff8b1eb760 R09: ffffffff8dfc8140
R10: 0000000000000000 R11: fffffbfff1f11c3f R12: 0000000000000003
R13: 0000000000000003 R14: ffff88803a2b04c0 R15: ffff8880311f3d80
FS:  00007f1e4131e6c0(0000) GS:ffff888126179000(0000) knlGS:0000000000000000
CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
CR2: 00007ff08fff3000 CR3: 000000008642a000 CR4: 00000000003526f0
Call Trace:
 <TASK>
 lockdep_recursion_finish kernel/locking/lockdep.c:470 [inline]
 lock_release+0x259/0x3c0 kernel/locking/lockdep.c:5891
 rt_spin_unlock+0x29/0x200 kernel/locking/spinlock_rt.c:80
 spin_unlock_irq include/linux/spinlock_rt.h:122 [inline]
 shmem_add_to_page_cache+0x926/0xbf0 mm/shmem.c:928
 shmem_alloc_and_add_folio mm/shmem.c:2001 [inline]
 shmem_get_folio_gfp+0x7e9/0x1a80 mm/shmem.c:2564
 shmem_get_folio mm/shmem.c:2670 [inline]
 shmem_write_begin+0x166/0x320 mm/shmem.c:3303
 generic_perform_write+0x2af/0x8b0 mm/filemap.c:4325
 shmem_file_write_iter+0xfb/0x120 mm/shmem.c:3478
 new_sync_write fs/read_write.c:595 [inline]
 vfs_write+0x629/0xba0 fs/read_write.c:688
 ksys_write+0x156/0x270 fs/read_write.c:740
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0x15f/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7f1e4308d60e
Code: 08 0f 85 a5 a8 ff ff 49 89 fb 48 89 f0 48 89 d7 48 89 ce 4c 89 c2 4d 89 ca 4c 8b 44 24 08 4c 8b 4c 24 10 4c 89 5c 24 08 0f 05 <c3> 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 80 00 00 00 00 48 83 ec 08
RSP: 002b:00007f1e4131dda8 EFLAGS: 00000246 ORIG_RAX: 0000000000000001
RAX: ffffffffffffffda RBX: 00007f1e4131e6c0 RCX: 00007f1e4308d60e
RDX: 0000000001000000 RSI: 00007f1e38efe000 RDI: 0000000000000004
RBP: 0000000000000000 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000004
R13: 00007f1e4131dee0 R14: 00007f1e4131dea0 R15: 00007f1e38efe000
 </TASK>

Crashes (33):
Time Kernel Commit Syzkaller Config Log Report Syz repro C repro VM info Assets Manager Title
2026/05/06 19:51 upstream 74fe02ce122a 1dddfd3d .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in jfs_flush_journal
2026/05/02 05:09 upstream 6fe0be6dc7fa 753c55b9 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in jfs_flush_journal
2026/04/28 19:07 upstream dca922e019dd ce741359 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in jfs_flush_journal
2026/04/27 20:20 upstream 254f49634ee1 0f700595 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in jfs_flush_journal
2026/04/25 16:35 upstream 27d128c1cff6 9c2d0995 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in jfs_flush_journal
2026/04/25 08:57 upstream 27d128c1cff6 9c2d0995 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in jfs_flush_journal
2026/04/24 22:11 upstream dd6c438c3e64 1c2b9291 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in jfs_flush_journal
2026/04/24 09:50 upstream 45dcf5e28813 9cfb3ca7 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in jfs_flush_journal
2026/04/23 09:45 upstream 2a4c0c11c019 b10da5ec .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in jfs_flush_journal
2026/04/23 09:44 upstream 2a4c0c11c019 b10da5ec .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in jfs_flush_journal
2026/04/21 01:59 upstream a5d1079c28a5 e65da4ee .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in jfs_flush_journal
2026/04/20 09:14 upstream c1f49dea2b8f 303e2802 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in jfs_flush_journal
2026/04/17 18:53 upstream 43cfbdda5af6 24ecfc1e .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in jfs_flush_journal
2026/03/25 21:55 upstream bbeb83d3182a c6143aac .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in jfs_flush_journal
2026/03/07 15:53 upstream 4ae12d8bd9a8 5cb44a80 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in jfs_flush_journal
2026/03/06 11:21 upstream 5ee8dbf54602 31e9c887 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in jfs_flush_journal
2026/02/03 03:23 upstream dee65f79364c d78927dd .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in jfs_flush_journal
2025/11/15 13:27 upstream 7a0892d2836e f7988ea4 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in jfs_flush_journal
2025/11/05 22:32 upstream 1c353dc8d962 a6c9c731 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-kasan-gce-root INFO: task hung in jfs_flush_journal
2025/10/18 13:36 upstream f406055cb18c 1c8c8cd8 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in jfs_flush_journal
2025/10/04 07:39 upstream 9b0d551bcc05 49379ee0 .config console log report info [disk image] [vmlinux] [kernel image] ci2-upstream-fs INFO: task hung in jfs_flush_journal
2026/04/22 17:40 linux-next 70c8a7ec6715 4595e353 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in jfs_flush_journal
2026/04/18 02:32 linux-next c7275b05bc42 24ecfc1e .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in jfs_flush_journal
2026/04/18 02:09 linux-next c7275b05bc42 24ecfc1e .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in jfs_flush_journal
2026/04/18 02:08 linux-next c7275b05bc42 24ecfc1e .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in jfs_flush_journal
2026/04/18 01:58 linux-next c7275b05bc42 24ecfc1e .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in jfs_flush_journal
2026/04/18 01:55 linux-next c7275b05bc42 24ecfc1e .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in jfs_flush_journal
2026/04/18 01:55 linux-next c7275b05bc42 24ecfc1e .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in jfs_flush_journal
2026/04/18 00:55 linux-next c7275b05bc42 24ecfc1e .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in jfs_flush_journal
2026/04/06 07:37 linux-next cc13002a9f98 4440e7c2 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in jfs_flush_journal
2026/03/23 18:19 linux-next 785f0eb2f85d 5e3db351 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in jfs_flush_journal
2025/12/22 06:06 linux-next cc3aa43b44bd d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-linux-next-kasan-gce-root INFO: task hung in jfs_flush_journal
2026/01/11 12:07 git://git.kernel.org/pub/scm/linux/kernel/git/arm64/linux.git for-kernelci 59e4d31a0470 d6526ea3 .config console log report info [disk image] [vmlinux] [kernel image] ci-upstream-gce-arm64 INFO: task hung in jfs_flush_journal
* Struck through repros no longer work on HEAD.