rcu: INFO: rcu_preempt detected stalls on CPUs/tasks:
rcu: 	Tasks blocked on level-0 rcu_node (CPUs 0-1): P4277/1:b..l P9/1:b..l
	(detected by 1, t=10502 jiffies, g=16445, q=160 ncpus=2)
task:kworker/u4:0    state:R  running task     stack:22136 pid:9     ppid:2      flags:0x00004000
Workqueue: writeback wb_workfn (flush-8:0)
Call Trace:
 context_switch kernel/sched/core.c:5245 [inline]
 __schedule+0x11d1/0x40e0 kernel/sched/core.c:6562
 preempt_schedule_notrace+0xd9/0x120 kernel/sched/core.c:6824
 preempt_schedule_notrace_thunk+0x16/0x18 arch/x86/entry/thunk_64.S:35
 rcu_is_watching+0x76/0xa0 kernel/rcu/tree.c:722
 trace_lock_acquire include/trace/events/lock.h:24 [inline]
 lock_acquire+0xe3/0x4a0 kernel/locking/lockdep.c:5633
 rcu_lock_acquire include/linux/rcupdate.h:350 [inline]
 rcu_read_lock include/linux/rcupdate.h:791 [inline]
 percpu_ref_put_many include/linux/percpu-refcount.h:330 [inline]
 percpu_ref_put include/linux/percpu-refcount.h:351 [inline]
 blk_mq_sched_insert_requests+0x5e7/0x890 block/blk-mq-sched.c:494
 blk_mq_dispatch_plug_list block/blk-mq.c:2808 [inline]
 blk_mq_flush_plug_list+0xb3a/0xc50 block/blk-mq.c:2857
 __blk_flush_plug+0x3e0/0x460 block/blk-core.c:1179
 blk_flush_plug include/linux/blkdev.h:1025 [inline]
 io_schedule_prepare kernel/sched/core.c:8767 [inline]
 io_schedule+0x74/0xd0 kernel/sched/core.c:8797
 rq_qos_wait+0x225/0x2f0 block/blk-rq-qos.c:282
 __wbt_wait block/blk-wbt.c:522 [inline]
 wbt_wait+0x377/0x660 block/blk-wbt.c:586
 __rq_qos_throttle+0x61/0xa0 block/blk-rq-qos.c:66
 rq_qos_throttle block/blk-rq-qos.h:207 [inline]
 blk_mq_get_new_requests block/blk-mq.c:2924 [inline]
 blk_mq_submit_bio+0x8cb/0x2010 block/blk-mq.c:3023
 __submit_bio+0x1a7/0x290 block/blk-core.c:591
 __submit_bio_noacct_mq block/blk-core.c:668 [inline]
 submit_bio_noacct_nocheck+0x7a2/0xaa0 block/blk-core.c:697
 ext4_io_submit fs/ext4/page-io.c:378 [inline]
 io_submit_add_bh fs/ext4/page-io.c:421 [inline]
 ext4_bio_write_page+0x1449/0x2ae0 fs/ext4/page-io.c:559
 mpage_submit_page+0x17a/0x210 fs/ext4/inode.c:2142
 mpage_process_page_bufs+0x6d8/0x8b0 fs/ext4/inode.c:2256
 mpage_prepare_extent_to_map+0xb34/0x1630 fs/ext4/inode.c:2681
 ext4_writepages+0xab3/0x2f40 fs/ext4/inode.c:2809
 do_writepages+0x3ba/0x640 mm/page-writeback.c:2491
 __writeback_single_inode+0x156/0x1160 fs/fs-writeback.c:1622
 writeback_sb_inodes+0xb30/0x1850 fs/fs-writeback.c:1913
 __writeback_inodes_wb+0x12a/0x3f0 fs/fs-writeback.c:1984
 wb_writeback+0x494/0xd50 fs/fs-writeback.c:2089
 wb_check_old_data_flush fs/fs-writeback.c:2189 [inline]
 wb_do_writeback fs/fs-writeback.c:2242 [inline]
 wb_workfn+0xb68/0xee0 fs/fs-writeback.c:2270
 process_one_work+0x8a2/0x1160 kernel/workqueue.c:2292
 worker_thread+0xaa2/0x1270 kernel/workqueue.c:2439
 kthread+0x29d/0x330 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
task:syz-executor    state:R  running task     stack:21520 pid:4277  ppid:4265   flags:0x00004000
Call Trace:
 context_switch kernel/sched/core.c:5245 [inline]
 __schedule+0x11d1/0x40e0 kernel/sched/core.c:6562
 preempt_schedule_irq+0xbb/0x160 kernel/sched/core.c:6874
 irqentry_exit+0x63/0x70 kernel/entry/common.c:439
 asm_sysvec_apic_timer_interrupt+0x16/0x20 arch/x86/include/asm/idtentry.h:691
RIP: 0010:lock_acquire+0x225/0x4a0 kernel/locking/lockdep.c:5666
Code: f7 84 24 80 00 00 00 00 02 00 00 43 c6 44 3d 04 f8 0f 85 f0 00 00 00 41 f7 c6 00 02 00 00 74 01 fb 48 c7 44 24 60 0e 36 e0 45 <4b> c7 44 3d 00 00 00 00 00 43 c7 44 3d 08 00 00 00 00 65 48 8b 04
RSP: 0018:ffffc90004347520 EFLAGS: 00000206
RAX: 0000000000000001 RBX: 0000000000000000 RCX: 1c77d381d01ae300
RDX: 0000000000000000 RSI: ffffffff8a8c23a0 RDI: ffffffff8adf0c20
RBP: ffffc90004347640 R08: dffffc0000000000 R09: 1ffffffff215e648
R10: dffffc0000000000 R11: fffffbfff215e649 R12: 0000000000000000
R13: 1ffff92000868eb0 R14: 0000000000000246 R15: dffffc0000000000
 rcu_lock_acquire include/linux/rcupdate.h:350 [inline]
 rcu_read_lock include/linux/rcupdate.h:791 [inline]
 page_ext_get+0x3a/0x2a0 mm/page_ext.c:157
 __reset_page_owner+0x31/0x1a0 mm/page_owner.c:144
 reset_page_owner include/linux/page_owner.h:24 [inline]
 free_pages_prepare mm/page_alloc.c:1459 [inline]
 free_pcp_prepare mm/page_alloc.c:1509 [inline]
 free_unref_page_prepare+0x8b4/0x9a0 mm/page_alloc.c:3384
 free_unref_page_list+0xbb/0x8e0 mm/page_alloc.c:3525
 release_pages+0x1fa6/0x2220 mm/swap.c:1035
 __pagevec_release+0x6d/0xe0 mm/swap.c:1055
 pagevec_release include/linux/pagevec.h:71 [inline]
 folio_batch_release include/linux/pagevec.h:135 [inline]
 shmem_undo_range+0x7c2/0x20c0 mm/shmem.c:946
 shmem_truncate_range mm/shmem.c:1062 [inline]
 shmem_evict_inode+0x25b/0xa80 mm/shmem.c:1171
 evict+0x4c9/0x8d0 fs/inode.c:705
 do_unlinkat+0x388/0x580 fs/namei.c:4405
 __do_sys_unlink fs/namei.c:4446 [inline]
 __se_sys_unlink fs/namei.c:4444 [inline]
 __x64_sys_unlink+0x45/0x50 fs/namei.c:4444
 do_syscall_x64 arch/x86/entry/common.c:46 [inline]
 do_syscall_64+0x4c/0xa0 arch/x86/entry/common.c:76
 entry_SYSCALL_64_after_hwframe+0x68/0xd2
RIP: 0033:0x7ff9c7b99fa7
RSP: 002b:00007ffe364bea98 EFLAGS: 00000206 ORIG_RAX: 0000000000000057
RAX: ffffffffffffffda RBX: 0000000000000000 RCX: 00007ff9c7b99fa7
RDX: 00007ffe364beac0 RSI: 00007ffe364beb50 RDI: 00007ffe364beb50
RBP: 00007ffe364beb50 R08: 00007ffe364bfb50 R09: 00000000ffffffff
R10: 0000000000000100 R11: 0000000000000206 R12: 00007ffe364bfbe0
R13: 00007ff9c7c0471f R14: 000000000003279e R15: 00007ffe364bfc20
rcu: rcu_preempt kthread starved for 9771 jiffies! g16445 f0x0 RCU_GP_WAIT_FQS(5) ->state=0x0 ->cpu=0
rcu: 	Unless rcu_preempt kthread gets sufficient CPU time, OOM is now expected behavior.
rcu: RCU grace-period kthread stack dump:
task:rcu_preempt     state:R  running task     stack:27736 pid:16    ppid:2      flags:0x00004000
Call Trace:
 context_switch kernel/sched/core.c:5245 [inline]
 __schedule+0x11d1/0x40e0 kernel/sched/core.c:6562
 schedule+0xb9/0x180 kernel/sched/core.c:6638
 schedule_timeout+0x184/0x2d0 kernel/time/timer.c:2168
 rcu_gp_fqs_loop+0x303/0x1340 kernel/rcu/tree.c:1706
 rcu_gp_kthread+0x99/0x3b0 kernel/rcu/tree.c:1905
 kthread+0x29d/0x330 kernel/kthread.c:376
 ret_from_fork+0x1f/0x30 arch/x86/entry/entry_64.S:295
rcu: Stack dump where RCU GP kthread last ran:
Sending NMI from CPU 1 to CPUs 0:
NMI backtrace for cpu 0 skipped: idling at native_safe_halt arch/x86/include/asm/irqflags.h:51 [inline]
NMI backtrace for cpu 0 skipped: idling at arch_safe_halt arch/x86/include/asm/irqflags.h:89 [inline]
NMI backtrace for cpu 0 skipped: idling at default_idle+0xb/0x10 arch/x86/kernel/process.c:741