Papernews

Mixed-Precision Training and Compilation for RRAM-based Computing-in-Memory Accelerators

Jan 29, 2026 by Rebecca Pelke, J. Klein, José Cubero-Cascante, Nils Bosbach, Jan Moritz Joseph, R. Leupers (arXiv.org)

DOI 10.48550/arXiv.2601.21737



We built a mixed-precision training + compilation flow for RRAM computing-in-memory (CIM) that uses reinforcement learning to learn per-layer quantization configs, squeezing sub-8-bit weight mappings into single cells and slashing MVM cycles. It's fun because the compiler actually trades accuracy for latency, and in some cases nets a 2.48x speedup for a hair under 0.1% accuracy loss.
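To see why sub-8-bit weights cut MVM cycles on a bit-serial RRAM crossbar, here is a minimal cost-model sketch. It is not the paper's actual compiler or cost model; the cycle formula, the 4-bit cell capacity, and the helper names (`quantize`, `mvm_cycles`) are illustrative assumptions: cycles scale with input bits times the number of cell slices a weight must be split across.

```python
import numpy as np

def quantize(w, bits):
    # Uniform symmetric quantization of a weight tensor to `bits` bits
    # (illustrative stand-in for a learned quantization config).
    qmax = 2 ** (bits - 1) - 1
    scale = np.max(np.abs(w)) / qmax
    return np.round(w / scale) * scale

def mvm_cycles(in_bits, w_bits, cell_bits=4):
    # Toy bit-serial cost model: one cycle per input bit per weight slice.
    # Weights wider than one cell are split across ceil(w_bits / cell_bits)
    # crossbar columns, multiplying the cycle count.
    slices = -(-w_bits // cell_bits)  # ceiling division
    return in_bits * slices

# 8-bit weights need two 4-bit cells per weight; a 4-bit weight fits a
# single cell, halving the cycle count for the same 8-bit inputs.
baseline = mvm_cycles(in_bits=8, w_bits=8)  # 2 slices -> 16 cycles
reduced  = mvm_cycles(in_bits=8, w_bits=4)  # 1 slice  ->  8 cycles
print(baseline / reduced)                   # 2.0
```

Under this toy model, dropping a layer from 8-bit to 4-bit weights doubles its MVM throughput, which is the kind of latency win the RL search trades against quantization-induced accuracy loss.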

source S2, openalex



dgfl, 2026