
Stanford University engineers have created a more efficient and flexible AI chip well suited to power AI in tiny edge devices.
The engineers’ chip, called NeuRRAM, is a novel resistive random-access memory (RRAM) chip that innovates on the way current chips process and store data.
AI functionality on tiny edge devices today is limited by the energy a battery can provide. Modern AI chips process and store data in separate locations, a compute unit and a memory unit, and the frequent shuttling of data between the two consumes a lot of energy.
NeuRRAM does the AI processing within the memory itself, thereby eliminating the separation between the compute and memory units.
It is about the size of a fingertip and does more work on limited battery power than current chips can.
“Having those calculations done on the chip instead of sending information to and from the cloud could enable faster, more secure, cheaper, and more scalable AI going into the future, and give more people access to AI power,” said HS Philip Wong, the Willard R and Inez Kerr Bell Professor at Stanford’s School of Engineering.
“The data movement issue is similar to spending eight hours in commute for a two-hour workday,” added Weier Wan, a recent graduate at Stanford leading this project. “With our chip, we are showing a technology to tackle this challenge.”
The Stanford engineers presented NeuRRAM in a recent article in the journal Nature.
Even though the concept of compute-in-memory (CIM) chips is well established, and the idea of implementing AI computing in RRAM isn’t new, “this is one of the first instances to integrate a lot of memory right onto the neural network chip and present all benchmark results through hardware measurements,” said Wong, who is a co-senior author of the Nature paper.
Right now, NeuRRAM is a physical proof-of-concept and needs more development before it can be translated into actual edge devices.
However, its combined efficiency, accuracy, and ability to handle different tasks showcase the chip’s potential.
“Maybe today it is used to do simple AI tasks such as keyword spotting or human detection, but tomorrow it could enable a whole different user experience. Imagine real-time video analytics combined with speech recognition all within a tiny device,” said Wan. “To realise this, we need to continue improving the design and scaling RRAM to more advanced technology nodes.”
