======================================================
WARNING: possible circular locking dependency detected
syzkaller #0 Not tainted
------------------------------------------------------
syz.0.4726/20005 is trying to acquire lock:
ffffffff8decac98 (pcpu_alloc_mutex){+.+.}-{4:4}, at: pcpu_alloc_noprof+0x202/0x1950 mm/percpu.c:1788

but task is already holding lock:
ffff88802616d120 (&q->q_usage_counter(io)#49){++++}-{0:0}, at: nbd_start_device+0x17f/0xb20 drivers/block/nbd.c:1489

which lock already depends on the new lock.

the existing dependency chain (in reverse order) is:

-> #2 (&q->q_usage_counter(io)#49){++++}-{0:0}:
       blk_alloc_queue+0x54e/0x690 block/blk-core.c:461
       blk_mq_alloc_queue block/blk-mq.c:4429 [inline]
       __blk_mq_alloc_disk+0x197/0x390 block/blk-mq.c:4476
       nbd_dev_add+0x499/0xb50 drivers/block/nbd.c:1954
       nbd_init+0x168/0x1f0 drivers/block/nbd.c:2692
       do_one_initcall+0x250/0x8d0 init/main.c:1382
       do_initcall_level+0x104/0x190 init/main.c:1444
       do_initcalls+0x59/0xa0 init/main.c:1460
       kernel_init_freeable+0x2a6/0x3e0 init/main.c:1692
       kernel_init+0x1d/0x1d0 init/main.c:1582
       ret_from_fork+0x51e/0xb90 arch/x86/kernel/process.c:158
       ret_from_fork_asm+0x1a/0x30 arch/x86/entry/entry_64.S:245

-> #1 (fs_reclaim){+.+.}-{0:0}:
       __fs_reclaim_acquire mm/page_alloc.c:4348 [inline]
       fs_reclaim_acquire+0x71/0x100 mm/page_alloc.c:4362
       might_alloc include/linux/sched/mm.h:317 [inline]
       prepare_alloc_pages+0x152/0x6b0 mm/page_alloc.c:5018
       __alloc_frozen_pages_noprof+0x12f/0x380 mm/page_alloc.c:5239
       __alloc_pages_noprof+0xa/0x30 mm/page_alloc.c:5284
       __alloc_pages_node_noprof include/linux/gfp.h:289 [inline]
       alloc_pages_node_noprof include/linux/gfp.h:316 [inline]
       pcpu_alloc_pages mm/percpu-vm.c:95 [inline]
       pcpu_populate_chunk+0x182/0xb30 mm/percpu-vm.c:285
       pcpu_alloc_noprof+0xc0f/0x1950 mm/percpu.c:1876
       bpf_map_alloc_percpu+0x72/0x1f0 kernel/bpf/syscall.c:583
       prealloc_init+0x217/0x640 kernel/bpf/hashtab.c:329
       htab_map_alloc+0x69e/0xc90 kernel/bpf/hashtab.c:554
       map_create+0xafd/0x16b0 kernel/bpf/syscall.c:1507
       __sys_bpf+0x6e1/0x950 kernel/bpf/syscall.c:6210
       __do_sys_bpf kernel/bpf/syscall.c:6341 [inline]
       __se_sys_bpf kernel/bpf/syscall.c:6339 [inline]
       __x64_sys_bpf+0x7c/0x90 kernel/bpf/syscall.c:6339
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

-> #0 (pcpu_alloc_mutex){+.+.}-{4:4}:
       check_prev_add kernel/locking/lockdep.c:3165 [inline]
       check_prevs_add kernel/locking/lockdep.c:3284 [inline]
       validate_chain kernel/locking/lockdep.c:3908 [inline]
       __lock_acquire+0x15a5/0x2cf0 kernel/locking/lockdep.c:5237
       lock_acquire+0xf0/0x2e0 kernel/locking/lockdep.c:5868
       __mutex_lock_common kernel/locking/rtmutex_api.c:533 [inline]
       _mutex_lock_killable+0x63/0x1d0 kernel/locking/rtmutex_api.c:573
       pcpu_alloc_noprof+0x202/0x1950 mm/percpu.c:1788
       init_alloc_hint lib/sbitmap.c:16 [inline]
       sbitmap_init_node+0x1e1/0x640 lib/sbitmap.c:126
       sbitmap_queue_init_node+0x3e/0x4d0 lib/sbitmap.c:454
       bt_alloc block/blk-mq-tag.c:546 [inline]
       blk_mq_init_tags+0x164/0x2d0 block/blk-mq-tag.c:571
       blk_mq_alloc_rq_map block/blk-mq.c:3556 [inline]
       blk_mq_alloc_map_and_rqs+0xbb/0x9c0 block/blk-mq.c:4124
       __blk_mq_alloc_map_and_rqs block/blk-mq.c:4146 [inline]
       blk_mq_realloc_tag_set_tags block/blk-mq.c:4817 [inline]
       __blk_mq_update_nr_hw_queues block/blk-mq.c:5153 [inline]
       blk_mq_update_nr_hw_queues+0xa88/0x1b10 block/blk-mq.c:5205
       nbd_start_device+0x17f/0xb20 drivers/block/nbd.c:1489
       nbd_start_device_ioctl drivers/block/nbd.c:1548 [inline]
       __nbd_ioctl drivers/block/nbd.c:1623 [inline]
       nbd_ioctl+0x57b/0xe40 drivers/block/nbd.c:1663
       blkdev_ioctl+0x5e6/0x750 block/ioctl.c:804
       vfs_ioctl fs/ioctl.c:51 [inline]
       __do_sys_ioctl fs/ioctl.c:597 [inline]
       __se_sys_ioctl+0xff/0x170 fs/ioctl.c:583
       do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
       do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
       entry_SYSCALL_64_after_hwframe+0x77/0x7f

other info that might help us debug this:

Chain exists of:
  pcpu_alloc_mutex --> fs_reclaim --> &q->q_usage_counter(io)#49

 Possible unsafe locking scenario:

       CPU0                    CPU1
       ----                    ----
  lock(&q->q_usage_counter(io)#49);
                               lock(fs_reclaim);
                               lock(&q->q_usage_counter(io)#49);
  lock(pcpu_alloc_mutex);

 *** DEADLOCK ***

4 locks held by syz.0.4726/20005:
 #0: ffff8880261679b0 (&set->update_nr_hwq_lock){++++}-{4:4}, at: blk_mq_update_nr_hw_queues+0xbf/0x1b10 block/blk-mq.c:5203
 #1: ffff8880261678c8 (&set->tag_list_lock){+.+.}-{4:4}, at: blk_mq_update_nr_hw_queues+0xd2/0x1b10 block/blk-mq.c:5204
 #2: ffff88802616d120 (&q->q_usage_counter(io)#49){++++}-{0:0}, at: nbd_start_device+0x17f/0xb20 drivers/block/nbd.c:1489
 #3: ffff88802616d158 (&q->q_usage_counter(queue)#33){+.+.}-{0:0}, at: nbd_start_device+0x17f/0xb20 drivers/block/nbd.c:1489

stack backtrace:
CPU: 0 UID: 0 PID: 20005 Comm: syz.0.4726 Not tainted syzkaller #0 PREEMPT_{RT,(full)}
Hardware name: Google Google Compute Engine/Google Compute Engine, BIOS Google 02/12/2026
Call Trace:
 dump_stack_lvl+0xe8/0x150 lib/dump_stack.c:120
 print_circular_bug+0x2e1/0x300 kernel/locking/lockdep.c:2043
 check_noncircular+0x12e/0x150 kernel/locking/lockdep.c:2175
 check_prev_add kernel/locking/lockdep.c:3165 [inline]
 check_prevs_add kernel/locking/lockdep.c:3284 [inline]
 validate_chain kernel/locking/lockdep.c:3908 [inline]
 __lock_acquire+0x15a5/0x2cf0 kernel/locking/lockdep.c:5237
 lock_acquire+0xf0/0x2e0 kernel/locking/lockdep.c:5868
 __mutex_lock_common kernel/locking/rtmutex_api.c:533 [inline]
 _mutex_lock_killable+0x63/0x1d0 kernel/locking/rtmutex_api.c:573
 pcpu_alloc_noprof+0x202/0x1950 mm/percpu.c:1788
 init_alloc_hint lib/sbitmap.c:16 [inline]
 sbitmap_init_node+0x1e1/0x640 lib/sbitmap.c:126
 sbitmap_queue_init_node+0x3e/0x4d0 lib/sbitmap.c:454
 bt_alloc block/blk-mq-tag.c:546 [inline]
 blk_mq_init_tags+0x164/0x2d0 block/blk-mq-tag.c:571
 blk_mq_alloc_rq_map block/blk-mq.c:3556 [inline]
 blk_mq_alloc_map_and_rqs+0xbb/0x9c0 block/blk-mq.c:4124
 __blk_mq_alloc_map_and_rqs block/blk-mq.c:4146 [inline]
 blk_mq_realloc_tag_set_tags block/blk-mq.c:4817 [inline]
 __blk_mq_update_nr_hw_queues block/blk-mq.c:5153 [inline]
 blk_mq_update_nr_hw_queues+0xa88/0x1b10 block/blk-mq.c:5205
 nbd_start_device+0x17f/0xb20 drivers/block/nbd.c:1489
 nbd_start_device_ioctl drivers/block/nbd.c:1548 [inline]
 __nbd_ioctl drivers/block/nbd.c:1623 [inline]
 nbd_ioctl+0x57b/0xe40 drivers/block/nbd.c:1663
 blkdev_ioctl+0x5e6/0x750 block/ioctl.c:804
 vfs_ioctl fs/ioctl.c:51 [inline]
 __do_sys_ioctl fs/ioctl.c:597 [inline]
 __se_sys_ioctl+0xff/0x170 fs/ioctl.c:583
 do_syscall_x64 arch/x86/entry/syscall_64.c:63 [inline]
 do_syscall_64+0x14d/0xf80 arch/x86/entry/syscall_64.c:94
 entry_SYSCALL_64_after_hwframe+0x77/0x7f
RIP: 0033:0x7fdae94cc629
Code: ff c3 66 2e 0f 1f 84 00 00 00 00 00 0f 1f 44 00 00 48 89 f8 48 89 f7 48 89 d6 48 89 ca 4d 89 c2 4d 89 c8 4c 8b 4c 24 08 0f 05 <48> 3d 01 f0 ff ff 73 01 c3 48 c7 c1 e8 ff ff ff f7 d8 64 89 01 48
RSP: 002b:00007fdae76fd028 EFLAGS: 00000246 ORIG_RAX: 0000000000000010
RAX: ffffffffffffffda RBX: 00007fdae9746090 RCX: 00007fdae94cc629
RDX: 0000000000000000 RSI: 000000000000ab03 RDI: 0000000000000004
RBP: 00007fdae9562b39 R08: 0000000000000000 R09: 0000000000000000
R10: 0000000000000000 R11: 0000000000000246 R12: 0000000000000000
R13: 00007fdae9746128 R14: 00007fdae9746090 R15: 00007ffcbe2611a8
block nbd0: shutting down sockets
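As a rough illustration of what lockdep's check is reporting (this is a toy model, not kernel code), the sketch below treats each recorded dependency "lock B taken while lock A was held" as a directed edge A -> B, using the lock names from the "Chain exists of:" line, and shows that the new pcpu_alloc_mutex acquisition closes a cycle:

```python
# Toy model of a circular-lock-dependency check (illustration only;
# lockdep's real data structures and algorithm differ). An edge A -> B
# means "B was acquired while A was held"; a cycle implies a possible
# deadlock between tasks taking the locks in different orders.

def find_cycle(edges, start):
    """Depth-first search from `start`; return the cycle path if the
    graph reaches `start` again, else None."""
    path = [start]

    def dfs(node):
        for nxt in edges.get(node, ()):
            if nxt == start:
                return path + [nxt]      # closed the loop
            if nxt not in path:
                path.append(nxt)
                found = dfs(nxt)
                if found:
                    return found
                path.pop()
        return None

    return dfs(start)

# Edges already recorded (the #1 and #2 traces), plus the new edge that
# triggered the warning: nbd_start_device() holds q->q_usage_counter(io)
# while pcpu_alloc_noprof() tries to take pcpu_alloc_mutex.
deps = {
    "pcpu_alloc_mutex": ["fs_reclaim"],
    "fs_reclaim": ["&q->q_usage_counter(io)#49"],
    "&q->q_usage_counter(io)#49": ["pcpu_alloc_mutex"],  # new edge (#0)
}

print(find_cycle(deps, "pcpu_alloc_mutex"))
```

Removing any one edge (e.g. by not allocating under the frozen queue, or by using a reclaim-safe allocation) breaks the cycle, which is why such reports are usually fixed by reordering or pre-allocating rather than by changing the locks themselves.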