Optimizing neuromorphic hardware with co-design for power and performance gains
Dec 1, 2025 by Lakshmi Varshika Mirtinti, Anup Das
DOI 10.17918/00011262
We built a full hardware-software co-design flow for SNN accelerators, marrying SDFG-based mapping, heterogeneous many-core tiles, and on-chip STDP/Hebbian learning so spiking networks can be compact, low-power, and adaptive for real-world edge and generative workloads. It's a pragmatic push beyond toy neuromorphics: smarter partitioning, mixed-capacity cores, and native recurrent/on-chip learning, so you actually get usable latency, energy, and accuracy trade-offs on real accelerators.
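The on-chip learning the flow relies on is spike-timing-dependent plasticity. The thesis's exact rule isn't given here, so as a rough illustration, here is a minimal pair-based STDP update for a single synapse (all parameter names and values are illustrative defaults, not the authors' configuration):

```python
import math

def stdp_step(w, pre_trace, post_trace, pre_spike, post_spike,
              a_plus=0.01, a_minus=0.012, tau_ms=20.0, dt_ms=1.0):
    """One simulation step of pair-based STDP for a single synapse.

    Traces are exponentially decaying memories of recent spikes:
    a post spike shortly after a pre spike potentiates the weight,
    the reverse ordering depresses it.
    """
    decay = math.exp(-dt_ms / tau_ms)
    pre_trace = pre_trace * decay + (1.0 if pre_spike else 0.0)
    post_trace = post_trace * decay + (1.0 if post_spike else 0.0)
    dw = 0.0
    if post_spike:          # post after recent pre -> potentiate
        dw += a_plus * pre_trace
    if pre_spike:           # pre after recent post -> depress
        dw -= a_minus * post_trace
    w = min(1.0, max(0.0, w + dw))   # keep weight in [0, 1]
    return w, pre_trace, post_trace

# Pre spike at t=0, post spike at t=1 ms: causal pairing, weight grows.
w, pre, post = 0.5, 0.0, 0.0
w, pre, post = stdp_step(w, pre, post, pre_spike=True, post_spike=False)
w, pre, post = stdp_step(w, pre, post, pre_spike=False, post_spike=True)
print(w)  # slightly above 0.5
```

Because the update depends only on local traces, it maps naturally onto per-tile learning hardware, which is the point of doing plasticity on-chip rather than off-device.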