Investigating Energy Bounds of Analog Compute-in-Memory with Local Normalization
Feb 8, 2026 by Brian Rojkov, Shubham Ranjan, Derek Wright, Manoj Sachdev
Modern edge AI workloads demand maximum energy efficiency, motivating the pursuit of analog Compute-in-Memory (CIM) architectures. At the same time, the popularity of Large Language Models (LLMs) is driving the adoption of low-bit floating-point formats that prioritize dynamic range. However, conventional direct-accumulation CIM accommodates floating-point operands by normalizing them to a shared, widened fixed-point scale.
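To make the normalization step concrete, here is a minimal sketch of the idea: low-bit floating-point operands are decomposed into mantissa/exponent pairs, shifted onto one shared (widened) fixed-point scale, and then accumulated as plain integers. The helper names, the 3-bit mantissa width, and the choice of the smallest exponent as the shared scale are illustrative assumptions, not the paper's actual circuit or format.

```python
import math

def to_fp(value, mant_bits=3):
    """Decompose a float into (signed integer mantissa, exponent) with
    mant_bits of precision -- a stand-in for a low-bit FP format (hypothetical)."""
    if value == 0.0:
        return 0, 0
    m, e = math.frexp(value)            # value = m * 2**e, with 0.5 <= |m| < 1
    mant = round(m * (1 << mant_bits))  # quantize mantissa to an integer
    return mant, e - mant_bits          # so that value ~= mant * 2**exp

def fixed_point_accumulate(values, mant_bits=3):
    """Normalize low-bit FP operands onto one shared fixed-point scale
    (the smallest exponent here), then accumulate them as integers --
    a sketch of how direct-accumulation CIM handles floating point."""
    decomposed = [to_fp(v, mant_bits) for v in values]
    shared_exp = min(e for _, e in decomposed)   # shared, widened scale
    acc = 0
    for mant, e in decomposed:
        acc += mant << (e - shared_exp)          # shift onto the shared scale
    return acc * 2.0 ** shared_exp               # rescale back to a real value

print(fixed_point_accumulate([1.5, 0.25, -0.75]))  # 1.0 (exact for these inputs)
```

The shifts widen every operand to the precision of the smallest exponent, which is exactly the cost the post alludes to: the accumulator must be wide enough to hold the fully aligned fixed-point values.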