Robots, artificial by nature, have long been limited to simple movements and command-following. But what if they could learn the way humans do? It sounds far-fetched, yet we may finally be getting close. Intel Labs, in collaboration with the Italian Institute of Technology and the Technical University of Munich, has demonstrated a new approach to neural network-based object learning on Intel's Loihi neuromorphic chip, one of the most notable architectures in the field.

Learning, for robots as for humans, is a never-ending process. We have seen real success in neural network-based object detection; the bigger challenge is making machines learn continuously, the way humans do, while performing complex tasks without fatigue.

Imagine a world where robots help doctors detect tumors on MRI scans or assist firefighters in finding people trapped inside burning buildings. Such robots would adapt to new situations and work side by side with people.

The Loihi neuromorphic chip is a step in that direction. By combining ideas from biological and artificial intelligence, it could bring the next generation of intelligent systems closer to reality: AI that is more powerful and never stops learning.

Neural network-based object learning

Object detection is a core computer vision task: identifying instances of visual objects of certain classes (such as humans, animals, cars, or buildings) in digital images such as photos or video frames. Neural networks, in turn, are algorithms that learn the underlying relationships in a set of data through a process loosely modeled on how the human brain functions.
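To make the brain analogy concrete, a single artificial neuron simply computes a weighted sum of its inputs and passes the result through an activation function. A minimal sketch in plain Python follows; the weights, bias, and inputs are illustrative, not taken from any trained model:

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum of inputs plus a bias,
    squashed through a sigmoid activation into the range (0, 1)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

# Example with two inputs and made-up weights
output = neuron([0.5, 0.8], weights=[0.4, -0.2], bias=0.1)
print(round(output, 3))  # → 0.535
```

A full network stacks many such neurons in layers; training adjusts the weights and biases so the network's outputs match the desired labels.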

The brain makes some judgments very quickly, as when recognizing handwriting or faces. In facial recognition, for instance, it might start with a coarse judgment, such as whether a face is male or female.

Neural networks are the foundation of deep learning algorithms. Given input visuals such as images or video frames, an object detection model returns a labeled version of the input, with a bounding box drawn around each detected object.
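As an illustration of what that output looks like in practice, a detector's result is commonly represented as a list of labeled boxes with confidence scores. The field names and values below are hypothetical, chosen only to show the shape of the data, not the output of any particular model:

```python
# Hypothetical object-detection output: each detection has a class label,
# a confidence score, and a bounding box as (x_min, y_min, x_max, y_max)
# pixel coordinates.
detections = [
    {"label": "person", "score": 0.97, "box": (34, 20, 180, 410)},
    {"label": "car",    "score": 0.88, "box": (200, 150, 520, 360)},
]

def filter_detections(dets, threshold=0.9):
    """Keep only detections at or above a confidence threshold."""
    return [d for d in dets if d["score"] >= threshold]

print([d["label"] for d in filter_detections(detections)])  # → ['person']
```

Thresholding like this is a typical last step: low-confidence boxes are discarded before the result is drawn on the image.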

Deep learning models draw on several families of algorithms. No single network is flawless, though some are better suited to particular tasks, so it pays to understand the fundamentals of each, such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), and generative adversarial networks (GANs), in order to make the best choice.

First developed in the late 1980s by Yann LeCun, CNNs, also known as ConvNets, consist of multiple layers and are mainly used for image processing and object detection.
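The layer that gives CNNs their name slides a small kernel over the image and computes a product-sum at each position. A minimal sketch in plain Python follows; the tiny image and the vertical-edge kernel are made up for illustration:

```python
def conv2d(image, kernel):
    """Valid 2-D convolution (strictly, cross-correlation, as in most
    deep-learning frameworks): slide the kernel over the image and take
    the elementwise product-sum at each position."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for i in range(ih - kh + 1):
        row = []
        for j in range(iw - kw + 1):
            row.append(sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw)
            ))
        out.append(row)
    return out

# A vertical-edge kernel applied to a tiny image whose right half is bright
image = [
    [0, 0, 9, 9],
    [0, 0, 9, 9],
    [0, 0, 9, 9],
]
kernel = [[-1, 1],
          [-1, 1]]
print(conv2d(image, kernel))  # → [[0, 18, 0], [0, 18, 0]]
```

The strong responses in the middle column mark the left-to-right brightness edge; a real CNN learns many such kernels automatically during training.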

What Intel and its collaborators have come up with, however, is a genuinely new approach to neural network-based object learning.

The new Loihi neuromorphic chip

Artificial neural networks are composed of layers upon layers of connected input and output units known as neurons. Intel's Loihi neuromorphic chip comprises around 130,000 artificial neurons, which pass information to one another across a "spiking" neural network (SNN).
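The "spiking" behavior can be sketched with the classic leaky integrate-and-fire model, a common textbook simplification of how neuromorphic neurons operate. The parameters below are illustrative, not Loihi's actual dynamics:

```python
def lif_spikes(inputs, threshold=1.0, decay=0.9):
    """Leaky integrate-and-fire neuron, the basic unit of a spiking
    neural network: the membrane potential leaks over time, integrates
    incoming current, and emits a spike (then resets) when it crosses
    the threshold."""
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * decay + current  # leak, then integrate
        if potential >= threshold:
            spikes.append(1)   # fire a spike
            potential = 0.0    # reset after firing
        else:
            spikes.append(0)
    return spikes

# A steady sub-threshold input eventually accumulates enough to fire
print(lif_spikes([0.4, 0.4, 0.4, 0.4, 0.4]))  # → [0, 0, 1, 0, 0]
```

Because such neurons communicate only via sparse spike events rather than dense activations, spiking hardware can be far more power-efficient than conventional accelerators.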

Artificial neurons, also known as nodes, are organized and designed to work in a manner loosely modeled on the human brain. Loihi chips are particularly good at rapidly spotting sensory input such as gestures, sounds, and even smells.

Using these new models, Intel and its collaborators successfully demonstrated continual interactive learning on Intel’s neuromorphic research chip.

Intel believes that neuromorphic computing offers a way to provide exascale performance in a construct inspired by how the brain works. The goal of this research is to apply similar capabilities to future robots that work in interactive settings, enabling them to adapt to the unforeseen and work more naturally alongside humans.

Intel’s Loihi neuromorphic research chip offers a preview of a future in which real-life robots learn the way humans do, bringing them as close to us as possible.


The achievements in AI and robotics over the past few years have been hailed as a ‘new industrial revolution’. AI is certainly generating a lot of buzz, and its scope is increasing at an exponential rate. A week earlier, on August 31, Meta, the parent company of Facebook, announced that research scientists in its AI lab had developed AI that can “hear” what someone is hearing by studying their brainwaves. We seem destined for a world of all things artificial, and who knows whether humans were created artificially in the first place?

By Britney Foster