As shown below, call dev_alloc_pages to allocate a page, then call dma_map_page_attrs to map that page to a DMA address, and finally call dma_sync_single_range_for_device to hand the page's contents over to the device:

struct page *page;
dma_addr_t dma;
struct device *dev;    /* device for DMA mapping */
int page_size = PAGE_SIZE;

/* alloc new page...
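A minimal sketch of that sequence, assuming an already-probed struct device *dev and a one-page, CPU-to-device transfer (the function name, error codes and DMA_TO_DEVICE direction are assumptions; the three calls are the ones named above):

#include <linux/dma-mapping.h>
#include <linux/skbuff.h>    /* dev_alloc_pages() */

static int example_map_one_page(struct device *dev)
{
	struct page *page;
	dma_addr_t dma;

	/* allocate a single page (order 0) */
	page = dev_alloc_pages(0);
	if (!page)
		return -ENOMEM;

	/* map the whole page for device access */
	dma = dma_map_page_attrs(dev, page, 0, PAGE_SIZE, DMA_TO_DEVICE, 0);
	if (dma_mapping_error(dev, dma)) {
		__free_pages(page, 0);
		return -ENOMEM;
	}

	/* ... CPU fills the page via page_address(page) here ... */

	/* flush the CPU's dirty cache lines so the device sees the data */
	dma_sync_single_range_for_device(dev, dma, 0, PAGE_SIZE, DMA_TO_DEVICE);
	return 0;
}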
dma_sync_single_for_cpu, dma_sync_single_for_device, dma_sync_sg_for_cpu and dma_sync_sg_for_device:

void dma_sync_single_for_cpu(struct device *dev, dma_addr_t dma_handle, size_t size, enum dma_data_direction direction)
void dma_sync_single_for_device(struct device *dev, dma_addr_t dma_handle, size_t size, enum dma_data_direction direction)
void dma_sync_sg_for_cpu(struct device *dev, struct scatterlist *sg, int nelems, enum dma_data_direction direction)
void dma_sync_sg_for_device(struct device *dev, struct scatterlist *sg, int nelems, enum dma_data_direction direction)
dma_unmap_single(dev, dma_handle, size, direction);
dma_sync_sg_for_cpu()
dma_sync_sg_for_device()

Streaming DMA mappings place strict requirements on when the CPU may touch the DMA buffer: unless the sync calls above are used, the CPU may only access the buffer after dma_unmap_single. The reason is that a streaming DMA buffer is cached; the cache is flushed when the buffer is mapped, and flushed again at unmap once the device has finished its DMA...
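A hedged sketch of that ownership hand-off for a single receive buffer (buf, len, dev and the function name are assumptions; only the map/sync/unmap calls come from the text above). The sync pair is what lets the CPU look at the buffer between map and unmap without violating the rule:

#include <linux/dma-mapping.h>

static void example_rx_once(struct device *dev, void *buf, size_t len)
{
	dma_addr_t dma;

	dma = dma_map_single(dev, buf, len, DMA_FROM_DEVICE);
	if (dma_mapping_error(dev, dma))
		return;

	/* ... start the device DMA into 'dma' and wait for it to finish ... */

	/* take ownership back: invalidate stale cache lines so the CPU
	 * sees what the device wrote */
	dma_sync_single_for_cpu(dev, dma, len, DMA_FROM_DEVICE);
	/* ... CPU inspects buf here ... */

	/* hand ownership back to the device for the next transfer */
	dma_sync_single_for_device(dev, dma, len, DMA_FROM_DEVICE);

	/* ... eventually ... */
	dma_unmap_single(dev, dma, len, DMA_FROM_DEVICE);
}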
	.sync_single_for_cpu	= iommu_dma_sync_single_for_cpu,
	.sync_single_for_device	= iommu_dma_sync_single_for_device,
	.sync_sg_for_cpu	= iommu_dma_sync_sg_for_cpu,
	.sync_sg_for_device	= iommu_dma_sync_sg_for_device,
	.map_resource		= iommu_dma_map_resource,
	...
	dma_sync_single_for_cpu(xskb->pool->dev, xskb->dma,
				xskb->pool->frame_len, DMA_BIDIRECTIONAL);
}

void xp_dma_sync_for_device_slow(struct xsk_buff_pool *pool, dma_addr_t dma, size_t size);
static inline void xp_dma_sync_for_device(struct xsk_buff_pool *pool, dma_addr_t dm...
 * @tlb_range_add: Add a given iova range to the flush queue for this domain
 * @tlb_sync: Flush all queued ranges from the hardware TLBs and empty flush
 *            queue
 * @iova_to_phys: translate iova to physical address
 * @add_device: add device to iommu grouping
 ...
3.4.3 dma_unmap_single

Overview
Because the processor has caches, the data in the cache and in memory may become inconsistent, so a driver that uses DMA to move data between memory and a device needs the CPU to synchronize the cache with memory before and after the transfer. Some DMA engines also have limited addressing ability, for example they can only reach the low 128 MB of memory; if the data sits at the 1 GB mark, it first has to be copied (bounced) into a region the engine can address.
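A short sketch of how a driver can declare such an addressing limit so the DMA core bounces out-of-range buffers for it (dma_set_mask_and_coherent is the standard API for this; the 27-bit mask standing for a "128 MB-capable" engine and the probe function are assumptions used only for illustration):

#include <linux/dma-mapping.h>

static int example_probe(struct device *dev)
{
	/* 2^27 bytes = 128 MB: tell the core the device can only address
	 * the low 128 MB, so it will bounce (e.g. via swiotlb) any buffer
	 * that lies above that limit. */
	if (dma_set_mask_and_coherent(dev, DMA_BIT_MASK(27)))
		return -ENODEV;
	return 0;
}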
Returns %true if dma_sync_single_for_{device,cpu} calls are required to transfer memory ownership. Returns %false if those calls can be skipped.
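This is the return-value contract documented for dma_need_sync(). A hedged sketch of how a driver might cache that answer once and skip the per-buffer syncs in its hot path (the ring structure and function names are assumptions):

#include <linux/dma-mapping.h>

struct example_ring {
	struct device *dev;
	dma_addr_t dma;
	size_t len;
	bool need_sync;
};

static void example_ring_init_sync(struct example_ring *ring)
{
	/* ask once whether ownership transfers need explicit syncs */
	ring->need_sync = dma_need_sync(ring->dev, ring->dma);
}

static void example_ring_cpu_access(struct example_ring *ring)
{
	if (ring->need_sync)
		dma_sync_single_for_cpu(ring->dev, ring->dma, ring->len,
					DMA_FROM_DEVICE);
	/* ... CPU reads the buffer ... */
}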
void pci_dac_dma_sync_single_for_device(struct pci_dev *pdev, dma64_addr_t dma_addr, size_t len, int direction);

DMA for ISA devices
The ISA bus allows two kinds of DMA transfer: native DMA and ISA bus-master DMA. Native DMA uses the standard DMA controller circuitry on the motherboard to drive the signal lines on the ISA bus ...