RIT’s Cory Merkel joins peers in definitive document on advances in high tech processing
Scott Hamilton/RIT
Cory Merkel was among the researchers who contributed to a new article in the journal Nature on advances in neuromorphic computing, one of his areas of expertise.
A human’s way of processing information can be used as a model to train next-generation artificial intelligence (AI) systems, according to research published Jan. 22 in Nature.
Cory Merkel, an associate professor of computer engineering at Rochester Institute of Technology, was one of more than a dozen researchers from around the world who contributed to the findings.
Merkel is an expert at using brain-inspired processes called neuromorphic computing to develop solutions that will improve processing power and energy consumption for AI applications.
“The ability to have efficient AI on constrained devices will also open the door to many new application domains in areas like brain-computer interfacing, space exploration, health monitoring technologies, and autonomous surveillance systems, for example,” said Merkel.
His work in neuromorphic computing addresses the growing market for AI systems in size-, weight-, and power-constrained applications, such as wearable technology, mobile phones, robots, unmanned aerial vehicles, and satellites, where it can significantly reduce processing and storage demands.
Lead author Dhireesha Kudithipudi, professor and founding director of the Neuromorphic Artificial Intelligence Lab at the University of Texas at San Antonio, brought together researchers from academia, national laboratories, and industry to provide a comprehensive review of neuromorphic computing technology.
According to the article’s authors, neuromorphic designers apply principles of biological intelligence discovered by neuroscientists to design efficient computational systems. These applications demand more robust computing intelligence, and the human brain provides a model of how complex networks can work faster and better.
Merkel and Suma George Cardwell, principal member of the technical staff at Sandia National Laboratories’ Center for Computing Research, detailed in the new paper the role of emerging memory technologies, such as RRAM and spintronic devices, in providing the mass storage these systems need. Both emerging technologies are appealing for neuromorphic computing systems at scale, and the authors give examples of using the devices for learning and of ways to exploit or offset their variability.
With the electricity consumption of AI projected to double by 2026, neuromorphic computing emerges as a promising solution. The authors also say that neuromorphic systems are reaching a “critical juncture,” with scale being a key metric for tracking the field’s progress.
Kudithipudi and Merkel have been longtime collaborators since her time as a professor in RIT’s Kate Gleason College of Engineering. As director of the Brain Lab in the Kate Gleason College, Merkel continues his focus on scaling up deep learning models through neuromorphic computing.
Authored by: Michelle Cometa