Emerging Memory and Device Technologies for Hardware‐Accelerated Model Training and Inference
Jan 29, 2026 by Yoonho Cho, Dae‐won Kim, Yujin Kim, Jung‐Heum Na, Y. Jeong, Dong-Yeop Lee, Shinhyun Choi (Advanced Electronic Materials)
DOI 10.1002/aelm.202500796
We survey device-to-system progress on compute‑in‑memory using resistive, phase‑change, ferroelectric, electrochemical, and charge‑based memories, distilling what actually moves the needle for training versus inference: endurance and linear multilevel conductance updates for learning; retention and low power for inference; and the circuit‑ and architecture‑level techniques that make these devices usable. If you care about breaking the von Neumann memory wall for energy‑efficient edge and on‑device AI, this survey maps the materials, device physics, and system‑integration routes that look most promising.
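To make the two headline ideas concrete, here is a minimal NumPy sketch (not from the paper; all device parameters are assumed values): a crossbar of conductances performs a matrix-vector multiply in one analog step via Ohm's and Kirchhoff's laws, and a simple nonlinear-potentiation model shows why non-linear multilevel updates are a problem for training.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed device conductance range, in siemens.
g_min, g_max = 1e-6, 1e-4
G = rng.uniform(g_min, g_max, (4, 3))  # 4x3 crossbar of device conductances

# Inference: apply input voltages to the columns; each row current is a
# dot product computed in place by the physics (I = G V), i.e. one analog
# matrix-vector multiply without moving weights through a memory bus.
v = np.array([0.1, 0.2, 0.05])
i_out = G @ v

# Training: a toy potentiation model in which each programming pulse moves
# the conductance by a fixed fraction of the remaining headroom, so the
# step size shrinks as the device approaches g_max.
def potentiate(g, pulses, alpha=0.05):
    for _ in range(pulses):
        g = g + alpha * (g_max - g)
    return g

first_step = potentiate(g_min, 1) - g_min
late_step = potentiate(g_min, 50) - potentiate(g_min, 49)
# Early pulses change the weight far more than late ones: this update
# non-linearity is what degrades gradient-descent accuracy on analog devices.
```

The contrast between `first_step` and `late_step` is the update asymmetry the survey flags: a learning rule assumes equal-sized steps, but the device delivers geometrically shrinking ones, which is why linear multilevel programming matters for training while inference mostly needs the stored `G` to be retained at low power.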
source S2, crossref