The biggest performance problem with the single-queue design comes from the one big lock protecting the request queue: with many CPUs the contention gets fierce, and it is especially bad on machines with multiple NUMA nodes, because the contended lock makes its cache line bounce between nodes, which hurts performance badly. The paper also discusses issues such as hardware interrupts, which we won't go into here. The core of blk-mq's answer to these problems is to split the original request queue into two layers: per-CPU software staging queues (ctx), and hardware dispatch queues (hctx) that map onto the queues the device actually provides.
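To make the two-level structure concrete, below is a minimal sketch of how a blk-mq driver advertises its hardware queues through a blk_mq_tag_set. The names demo_queue_rq, demo_mq_ops, demo_tag_set, demo_register and the queue depth are hypothetical placeholders, and the exact helpers vary by kernel version (older kernels build the queue with blk_mq_init_queue(), newer ones with blk_mq_alloc_disk()); this assumes roughly a v5.x API.

```c
/* Hypothetical sketch of blk-mq registration (kernel API around v5.x). */
#include <linux/blk-mq.h>
#include <linux/cpumask.h>
#include <linux/module.h>

static blk_status_t demo_queue_rq(struct blk_mq_hw_ctx *hctx,
                                  const struct blk_mq_queue_data *bd)
{
        /* A real driver would hand bd->rq to the hardware queue backing hctx. */
        blk_mq_start_request(bd->rq);
        blk_mq_end_request(bd->rq, BLK_STS_OK); /* complete immediately for the sketch */
        return BLK_STS_OK;
}

static const struct blk_mq_ops demo_mq_ops = {
        .queue_rq = demo_queue_rq,
};

static struct blk_mq_tag_set demo_tag_set;

static int demo_register(void)
{
        demo_tag_set.ops          = &demo_mq_ops;
        demo_tag_set.nr_hw_queues = num_online_cpus(); /* one hardware dispatch queue per CPU */
        demo_tag_set.queue_depth  = 128;
        demo_tag_set.numa_node    = NUMA_NO_NODE;
        demo_tag_set.flags        = BLK_MQ_F_SHOULD_MERGE;

        /* A real driver would then create the request queue / gendisk on top of this tag set. */
        return blk_mq_alloc_tag_set(&demo_tag_set);
}
```

With nr_hw_queues greater than one, each CPU's software staging queue maps to a nearby hardware dispatch queue, so submissions no longer serialize on a single queue lock.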
```
-blockdev node-name=drive_src1,driver=virtio-blk-vhost-vdpa,path=/dev/vhost-vdpa-0,cache.direct=on \
-device virtio-blk-pci,id=src1,bootindex=2,drive=drive_src1 \
```

In any case, to get the full performance out of a vDPA hardware device, we can always use the generic vhost-vdpa-device-pci device provided by QEMU, which supports any...
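As a hedged illustration (assuming a QEMU build recent enough, roughly 7.2 or later, to ship the generic vhost-vdpa device), attaching the same /dev/vhost-vdpa-0 through vhost-vdpa-device-pci needs no -blockdev at all, because the virtio device is passed through instead of going via QEMU's block layer:

```
-device vhost-vdpa-device-pci,vhostdev=/dev/vhost-vdpa-0
```

The trade-off is that features implemented in QEMU's block layer are not available on this path.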
```c
	return writeback;
}

static void virtblk_update_cache_mode(struct virtio_device *vdev)
{
	u8 writeback = virtblk_get_cache_mode(vdev);
	struct virtio_blk *vblk = vdev->priv;

	/* Propagate the device's write-cache setting to the block layer. */
	blk_queue_write_cache(vblk->disk->queue, writeback, false);
	revalidate_disk(vblk->disk);
}
...
```
```c
	/*
	 * Together with queue_head for cacheline sharing
	 */
	struct list_head	queue_head;
	struct request		*last_merge;
	struct elevator_queue	*elevator;

	struct blk_queue_stats	*stats;
	struct rq_qos		*rq_qos;

	make_request_fn		*make_request_fn;
	dma_drain_needed_fn	*dma_drain_needed;

	const struct blk_mq_ops	*mq_ops;

	/* sw qu...
```
When the guest OS inside the VM issues an I/O operation, it first places the request into the virtio queue and then performs a kick. The kick is simply a write to a PCIe register:

Since there is no real PCIe hardware behind the virtual device, however, this write causes a VM_EXIT into KVM's code; KVM then notifies QEMU through an eventfd, and the request is handled in QEMU's main event loop: ...
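To illustrate the eventfd path described above, here is a minimal user-space sketch of how a VMM can ask KVM to turn a guest write to a doorbell address into an eventfd signal via the KVM_IOEVENTFD ioctl. vm_fd, DOORBELL_GPA and the poll loop are hypothetical stand-ins for what QEMU sets up internally for a virtio device's notify region.

```c
/* Hedged sketch: wiring a guest MMIO "kick" to an eventfd with KVM_IOEVENTFD. */
#include <linux/kvm.h>
#include <sys/eventfd.h>
#include <sys/ioctl.h>
#include <poll.h>
#include <stdint.h>
#include <unistd.h>

#define DOORBELL_GPA 0xfe003000ULL  /* guest-physical address of the notify register (example) */

static int register_kick_eventfd(int vm_fd)
{
        int efd = eventfd(0, EFD_NONBLOCK);
        if (efd < 0)
                return -1;

        struct kvm_ioeventfd ioev = {
                .addr  = DOORBELL_GPA, /* a guest write here ...            */
                .len   = 4,            /* ... of 4 bytes ...                */
                .fd    = efd,          /* ... signals this eventfd in-kernel */
                .flags = 0,
        };
        if (ioctl(vm_fd, KVM_IOEVENTFD, &ioev) < 0) {
                close(efd);
                return -1;
        }
        return efd;
}

/* The VMM's event loop then polls the eventfd and drains the virtqueue. */
static void handle_kicks(int efd)
{
        struct pollfd pfd = { .fd = efd, .events = POLLIN };
        while (poll(&pfd, 1, -1) > 0) {
                uint64_t count;
                if (read(efd, &count, sizeof(count)) < 0)
                        continue; /* clear the counter, ignore errors in the sketch */
                /* ... pop requests from the virtqueue and submit them ... */
        }
}
```

The point of the ioeventfd is that the kick is resolved inside KVM and only wakes QEMU's event loop, rather than forcing a full exit to user space for every notification.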
Then, we set the 64-bit physical addresses of each array in the queue, and set the length of the queue in the QueueNum register. Finally, we write 1 into QueueReady, telling the device that it can start using the queue. Since a device may have more than one virtqueue, their indices start from 0.
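To make that register sequence concrete, here is a minimal sketch of programming one virtqueue against the virtio-mmio register layout (MMIO transport version 2). mmio_base, the physical addresses and the write32 helper are hypothetical; a real driver would first negotiate features, check QueueNumMax, and use proper ioremap()/writel() accessors.

```c
#include <stdint.h>

/* virtio-mmio register offsets (virtio spec, MMIO transport version 2) */
#define VIRTIO_MMIO_QUEUE_SEL        0x030
#define VIRTIO_MMIO_QUEUE_NUM_MAX    0x034
#define VIRTIO_MMIO_QUEUE_NUM        0x038
#define VIRTIO_MMIO_QUEUE_READY      0x044
#define VIRTIO_MMIO_QUEUE_DESC_LOW   0x080
#define VIRTIO_MMIO_QUEUE_DESC_HIGH  0x084
#define VIRTIO_MMIO_QUEUE_AVAIL_LOW  0x090
#define VIRTIO_MMIO_QUEUE_AVAIL_HIGH 0x094
#define VIRTIO_MMIO_QUEUE_USED_LOW   0x0a0
#define VIRTIO_MMIO_QUEUE_USED_HIGH  0x0a4

/* Hypothetical MMIO write helper for the sketch. */
static void write32(volatile uint8_t *base, uint32_t off, uint32_t val)
{
        *(volatile uint32_t *)(base + off) = val;
}

/* Program virtqueue `index`: descriptor table, available ring, used ring. */
static void setup_virtqueue(volatile uint8_t *mmio_base, uint32_t index,
                            uint32_t queue_size, uint64_t desc_pa,
                            uint64_t avail_pa, uint64_t used_pa)
{
        write32(mmio_base, VIRTIO_MMIO_QUEUE_SEL, index);      /* select the queue (indices start at 0) */
        write32(mmio_base, VIRTIO_MMIO_QUEUE_NUM, queue_size); /* queue length, must not exceed QueueNumMax */

        /* 64-bit physical addresses of the three rings, split into low/high halves. */
        write32(mmio_base, VIRTIO_MMIO_QUEUE_DESC_LOW,   (uint32_t)desc_pa);
        write32(mmio_base, VIRTIO_MMIO_QUEUE_DESC_HIGH,  (uint32_t)(desc_pa >> 32));
        write32(mmio_base, VIRTIO_MMIO_QUEUE_AVAIL_LOW,  (uint32_t)avail_pa);
        write32(mmio_base, VIRTIO_MMIO_QUEUE_AVAIL_HIGH, (uint32_t)(avail_pa >> 32));
        write32(mmio_base, VIRTIO_MMIO_QUEUE_USED_LOW,   (uint32_t)used_pa);
        write32(mmio_base, VIRTIO_MMIO_QUEUE_USED_HIGH,  (uint32_t)(used_pa >> 32));

        write32(mmio_base, VIRTIO_MMIO_QUEUE_READY, 1);        /* tell the device the queue is live */
}
```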
blk_queue_virt_boundary, blk_queue_write_cache, blk_register_region, blk_rq_map_kern, blk_rq_map_sg, blk_rq_map_user, blk_rq_map_user_iov, blk_rq_unmap_user, blk_set_queue_dying, blk_set_stacking_limits, blk_start_plug, blk_status_to_errno, blk_sync_queue, blk_unregister_region...
```
	blk_queue_exit(q);

	WRITE_ONCE(rq->state, MQ_RQ_IDLE);
	if (refcount_dec_and_test(&rq->ref))
		__blk_mq_free_request(rq);
}
EXPORT_SYMBOL_GPL(blk_mq_free_request);

@@ -547,8 +558,9 @@ static void __blk_mq_complete_request(struct request *rq)
	bool shared = false;
	int cpu...
```
```c
blk_init_queue_node(request_fn_proc *rfn, spinlock_t *lock, int node_id)
{
	struct request_queue *uninit_q, *q;

	uninit_q = blk_alloc_queue_node(GFP_KERNEL, node_id);
	if (!uninit_q)
		return NULL;

	q = blk_init_allocated_queue(uninit_q, rfn, lock);
	...
```