Paper "Addition is Most You Need: Efficient Floating-Point SRAM Compute-in-Memory by Harnessing Mantissa Addition": Compute-in-memory holds great promise for efficiently accelerating machine-learning workloads. Among candidate memory devices, SRAM stands out for its excellent reliability in the digital domain and its good scalability. In recent years, SRAM CIM for accelerating floating-point DNNs (deep neural networks) has attracted growing ...
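The idea of trading a floating-point multiply for an addition can be illustrated with the classical Mitchell-style logarithmic approximation, in which adding the IEEE-754 bit patterns of two positive floats (and subtracting the exponent bias once) approximates their product, replacing the mantissa multiply with a single integer addition. This is a generic sketch of that family of techniques, not necessarily the cited paper's exact scheme:

```python
import struct

def float_to_bits(x: float) -> int:
    """Reinterpret a float32 value as its 32-bit IEEE-754 pattern."""
    return struct.unpack('<I', struct.pack('<f', x))[0]

def bits_to_float(b: int) -> float:
    """Reinterpret a 32-bit pattern as a float32 value."""
    return struct.unpack('<f', struct.pack('<I', b & 0xFFFFFFFF))[0]

def approx_mul(a: float, b: float) -> float:
    # For positive floats, the bit pattern is roughly a fixed-point
    # log2 of the value, so adding two patterns (minus one bias term)
    # approximates multiplication: the exponents add exactly, and the
    # mantissa product (1+fa)(1+fb) is approximated by 1+fa+fb.
    BIAS = 0x3F800000  # bit pattern of 1.0
    return bits_to_float(float_to_bits(a) + float_to_bits(b) - BIAS)
```

The result is exact when the mantissa fractions sum to less than one without losing the cross term (e.g. `approx_mul(2.0, 3.0)` returns exactly 6.0), and the worst-case relative error of this approximation is about 11%.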
Realizing increasingly complex artificial intelligence (AI) functionalities directly on edge devices calls for unprecedented energy efficiency of edge hardware. Compute-in-memory (CIM) based on resistive random-access memory (RRAM) promises to meet such demand by storing AI model weights in dense, anal...
In certain aspects, the computation circuit comprises a counter, an NMOS transistor coupled to the memory cell, and a PMOS transistor coupled to the memory cell, the drains of the NMOS and PMOS transistors being coupled to the counter. XIA LI...
BUAA-CI-LAB / Literatures-on-SRAM-based-CIM: a reading list for SRAM-based Compute-In-Memory (CIM) research. Topics: accelerator, in-memory, sram, reading-list, pim, literature, cim, paper-list, process-in-memory, compute-in-memory. Updated Apr 25, 2024.
Non-volatile computing-in-memory (nvCIM) architecture can reduce the latency and energy consumption of artificial intelligence computation by minimizing the movement of data between the processor and memory. However, artificial intelligence edge devices with high inference accuracy require large-capacity nv...
Operator fusion is a common remedy for memory-bandwidth-bound workloads and one of the most important optimizations in deep-learning compilers (NVFuser, XLA, etc.). The core idea is to fuse the computation of several operators into a single round trip of the data between DRAM and SRAM. For example, below are two ways of computing the cos function twice on x: ...
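The two ways of computing cos twice might be sketched as follows (an illustrative pure-Python stand-in for the original post's elided example; function names are hypothetical):

```python
import math

def cos_twice_unfused(xs):
    # Two separate operator passes: the intermediate list `ys` is
    # fully materialized, so every element round-trips through
    # memory between the two cos operators.
    ys = [math.cos(x) for x in xs]     # pass 1
    return [math.cos(y) for y in ys]   # pass 2

def cos_twice_fused(xs):
    # What a fusing compiler conceptually emits: one pass that
    # applies both cos ops while each element is still in registers,
    # so the intermediate value never travels back to DRAM.
    return [math.cos(math.cos(x)) for x in xs]
```

Both versions compute the same result; the fused form halves the number of full-array memory traversals, which is exactly the saving compilers like NVFuser and XLA automate by generating one kernel for the fused operator chain.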
COMPUTE-IN-MEMORY (CIM) BINARY MULTIPLIER. Similar work: "A 1-16b Reconfigurable 80Kb 7T SRAM-Based Digital Near-Memory Computing Macro for Processing Neural Networks". This work introduces a digital SRAM-based near-memory compute macro for DNN inference, improving on-chip weight memory capacity and...
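Digital SRAM CIM/near-memory macros of this kind typically realize multi-bit multiply-accumulate bit-serially: each bit plane of the weights is ANDed with each bit plane of the activations, the result is popcounted, and the partial sum is shift-accumulated. A minimal functional sketch of that scheme (illustrative only, not the cited macro's exact design):

```python
def bitserial_dot(weights, acts, w_bits=4, a_bits=4):
    # Bit-serial dot product: for each (weight-bit, activation-bit)
    # plane, AND the bits of every weight/activation pair (the
    # in-array "binary multiplier"), popcount the plane, and
    # shift-accumulate by the plane's binary weight 2^(i+j).
    acc = 0
    for i in range(w_bits):
        for j in range(a_bits):
            plane = sum(((w >> i) & 1) & ((a >> j) & 1)
                        for w, a in zip(weights, acts))
            acc += plane << (i + j)
    return acc
```

For unsigned operands that fit the configured bit widths this reproduces the exact dot product, e.g. `bitserial_dot([3, 5], [2, 4])` equals 3*2 + 5*4 = 26; a reconfigurable 1-16b macro corresponds to varying `w_bits`/`a_bits` over the same AND/popcount hardware.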
Compute-in-memory (CIM) accelerators based on emerging memory devices are of potential use in edge artificial intelligence and machine learning applications due to their power and performance capabilities. However, the privacy and security of CIM accelerators need to be ensured before their widespread...
The highly sparse, spike-based computations in such spatio-temporal data can be leveraged for energy efficiency. However, the membrane potential incurs additional memory-access bottlenecks in current SNN hardware. To that end, we propose a 10T-SRAM compute-in-memory (CIM) macro, specifically ...
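The membrane-potential bottleneck comes from the per-timestep read-modify-write that spiking neurons require. A minimal leaky integrate-and-fire (LIF) sketch makes this visible; parameter names and values here are illustrative, not taken from the cited macro:

```python
def lif_step(v, in_current, leak=0.9, v_th=1.0):
    # One LIF timestep. The membrane potential `v` is stateful:
    # it must be read, updated, and written back every timestep,
    # which is the memory-access bottleneck a CIM macro avoids by
    # keeping this state inside the array.
    v = leak * v + in_current        # leaky integration
    spike = 1 if v >= v_th else 0    # threshold comparison
    if spike:
        v = 0.0                      # reset after firing
    return v, spike
```

With sparse inputs most timesteps produce no spike, so the compute is cheap, yet the `v` read-modify-write traffic remains; keeping `v` resident in the SRAM macro removes that traffic from the datapath.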