What is it about?

Computation-in-Memory (CIM) is an emerging computing paradigm that addresses the memory-bottleneck challenge in computer architecture. A CIM unit cannot fully replace a general-purpose processor, but it significantly reduces the amount of data transferred between a traditional memory unit and the processor by enriching the transferred information. Data transactions between the processor and memory consist of memory access addresses and values. While the main focus of in-memory computing has been applying computations to the contents of the memory (the values), the CPU-CIM address transactions, and the calculations that generate the sequence of access addresses in data-dominated applications, are generally overlooked. Yet the bits spent on addresses can easily exceed half of the total bits transferred in many applications. In this paper, we propose a circuit, the in-memory Address Calculation Accelerator (ACA), that performs these calculations inside the memory. Our simulation results show that calculating address sequences inside the memory (instead of the CPU) significantly reduces CPU-CIM address transactions and therefore yields considerable savings in energy, latency, and bus traffic. For the chosen application of guided image filtering, in-memory address calculation reduces address transactions over the memory bus by almost two orders of magnitude.
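To see why addresses dominate the bus traffic, consider the access pattern of a sliding-window image filter. The sketch below is illustrative only (it is not the paper's circuit): it assumes a simple affine addressing scheme, and the function and parameter names are hypothetical. It contrasts sending every address from the CPU with sending a compact descriptor that the memory side expands itself.

```python
# Illustrative sketch, not the paper's ACA circuit: an affine address
# generator for a sliding-window access pattern, as in image filtering.
# All names and parameters here are hypothetical.

def affine_addresses(base, rows, cols, row_stride, window):
    """Return the address of every element read while sliding a
    window x window filter over a rows x cols image stored row-major."""
    addrs = []
    for r in range(rows - window + 1):          # window's top-left row
        for c in range(cols - window + 1):      # window's top-left column
            for wr in range(window):            # offset within the window
                for wc in range(window):
                    addrs.append(base + (r + wr) * row_stride + (c + wc))
    return addrs

# Conventional access: the CPU sends one address per element read.
addrs = affine_addresses(base=0, rows=8, cols=8, row_stride=8, window=3)
cpu_address_transactions = len(addrs)   # 36 window positions x 9 reads = 324

# In-memory address calculation: the CPU sends only the pattern
# descriptor; the memory-side generator expands it into addresses.
descriptor_words = 5                    # base, rows, cols, row_stride, window

print(cpu_address_transactions, descriptor_words)  # 324 vs. 5 transactions
```

Even for this tiny 8x8 example, replacing per-element addresses with a five-word descriptor cuts address transactions by roughly 65x, which is consistent in spirit with the almost two-orders-of-magnitude reduction reported for guided image filtering.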

Why is it important?

The Address Calculation Accelerator (ACA) not only reduces the total energy consumption of the address bus but also improves performance by accelerating an important part of the address calculations inside the CIM unit. Additionally, ACA enables more parallel data access than conventional memory access solutions, because it can read or write multiple non-consecutive data segments in a single cycle.

Read the Original

This page is a summary of: Energy-efficient In-Memory Address Calculation, ACM Transactions on Architecture and Code Optimization, December 2022, ACM (Association for Computing Machinery), DOI: 10.1145/3546071.
