Straintronics: Manipulating nanomagnets with strain for causal intelligence
October 14 @ 7:00 pm - 8:00 pm
Artificial intelligence (AI) is ubiquitous, from self-driving cars to smart appliances to health monitoring. Estimates by OpenAI predict explosive growth in the computational requirements of AI, by a factor of 100× every two years, a rate 50× faster than the Moore's-law scaling that has governed the evolution of the chip industry. As AI becomes increasingly reliant on deep-learning neural networks (DNNs), energy-efficient hardware assumes a position of paramount importance. Present-day AI dissipates an enormous amount of energy for training and inference: 300 Google searches consume enough energy to boil one liter of room-temperature water. Against this backdrop, there is a serious desire to identify a technology that can dramatically reduce the energy consumption of DNNs. A promising candidate is "straintronics," which manipulates the magnetic states of magnetostrictive nanomagnets with electrically generated strain to elicit a myriad of non-Boolean computing activities, such as those in DNNs. The energy-delay product associated with switching a nanomagnet's magnetic state with strain is ~10^-27 J·s at room temperature, which is one order of magnitude lower than that of switching a modern-day FinFET, and more than three orders of magnitude lower than that of switching magnetization with spin-orbit or spin-transfer torques in STT-RAM. We and our collaborators have developed many constructs for processing and communicating information with straintronics for the purpose of AI.
These include neurons and synapses that dissipate minuscule amounts of energy, compact restricted Boltzmann machines for image classification, ternary content-addressable memories with drastically reduced footprints, hardware accelerators for image processing, Bayesian inference engines, correlators/anti-correlators for probabilistic bits, bit comparators for cyber-security applications, analog computing elements, and (non-volatile) matrix multipliers for machine learning. This talk will describe some of these advances.

Boise, Idaho, United States, Virtual: https://events.vtools.ieee.org/m/284139