AI Chip to Help Robots Learn Like Humans?

Artificial by nature, robots have until now been limited to simple movements and command-following. But what if robots could learn the way humans do? It sounds far-fetched, but we may finally have cracked it. Intel Labs, in collaboration with the Italian Institute of Technology and the Technical University of Munich, has demonstrated one of the most notable advances in the field on the Loihi neuromorphic chip: a new approach to neural network-based object learning.

Learning for robots, much like learning for humans, is clearly a never-ending process. We have achieved real success in neural network-based object detection; the biggest remaining challenge, however, is figuring out how to make machines learn more like humans do. Their ability to perform complex tasks like us without getting fatigued, meanwhile, is not going anywhere.

Imagine a world where robots help doctors detect tumors on MRI scans or assist firefighters in finding people trapped inside burning buildings. Robots would be able to adapt to new situations and work side by side with people.

The Loihi neuromorphic chip is a step in the right direction. By combining biological and artificial intelligence, this new chip could bring the next generation of intelligent systems closer to reality and make artificial intelligence more powerful and ever-learning.

Neural network-based object learning

Object detection is an important computer vision task used to identify instances of visual objects of certain classes (such as humans, animals, cars, or buildings) in digital images such as photos or video frames. Neural networks, in turn, are a set of algorithms that aim to recognize underlying relationships in a set of data through a process that mimics how the human brain functions.
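As a toy illustration (not the architecture Intel uses), the basic unit of such a network can be sketched in a few lines of Python: a single artificial neuron computes a weighted sum of its inputs and squashes the result through an activation function, loosely mimicking how a biological neuron fires more strongly for some input patterns. The weights and bias below are made-up values for demonstration.

```python
import math

def neuron(inputs, weights, bias):
    """A single artificial neuron: a weighted sum of inputs passed
    through a sigmoid activation that squashes the result into (0, 1)."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))  # sigmoid activation

# Toy example: this neuron responds strongly only when both inputs are high.
print(neuron([1.0, 1.0], [2.0, 2.0], -3.0))  # well above 0.5
print(neuron([0.0, 0.0], [2.0, 2.0], -3.0))  # well below 0.5
```

Training a network means adjusting those weights and biases, layer by layer, until the outputs line up with the relationships hidden in the data.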

The brain makes some judgments quite fast when recognizing handwriting or facial features. In the case of facial recognition, the brain might start by saying, “It is female or male,” for instance.

Neural networks are the foundation of deep learning algorithms. When given input visuals (such as images or videos), object detection models provide a labeled version of the visuals, with a bounding box around each detected object.
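To make the bounding-box idea concrete, here is a small Python sketch of intersection-over-union (IoU), the standard measure detection pipelines use to compare a predicted box against a ground-truth box or to prune overlapping predictions. The boxes and the example detection tuple are invented for illustration.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2):
    overlap area divided by the area of the combined footprint."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A labeled detection typically pairs a class name, a confidence score,
# and a box, e.g. ("car", 0.92, (10, 20, 110, 80)).
print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # partial overlap
```

A score of 1.0 means the predicted and true boxes coincide exactly; 0.0 means they do not overlap at all.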

Several algorithms are used by deep learning models. No network is flawless, although some algorithms are better suited to particular tasks. It is beneficial to develop a thorough understanding of all the fundamental algorithms, such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), and generative adversarial networks (GANs), in order to make the best choices.

First developed by Yann LeCun in the late 1980s, CNNs, also known as ConvNets, consist of multiple layers and are mainly used for image processing and object detection.
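The core operation of a CNN layer can be sketched in plain Python as a small 2D convolution (strictly, cross-correlation, the form most deep learning libraries implement): a filter slides over the image and takes a weighted sum at each position. The tiny image and edge filter below are toy values for illustration only.

```python
def conv2d(image, kernel):
    """Valid-mode 2D convolution: slide the kernel over the image,
    computing a weighted sum of the pixels under it at each position.
    Stacks of such learned filters form a CNN's feature-extraction layers."""
    ih, iw = len(image), len(image[0])
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for y in range(ih - kh + 1):
        row = []
        for x in range(iw - kw + 1):
            row.append(sum(image[y + j][x + i] * kernel[j][i]
                           for j in range(kh) for i in range(kw)))
        out.append(row)
    return out

# A vertical-edge filter lights up where intensity changes left to right.
image = [[0, 0, 1, 1],
         [0, 0, 1, 1],
         [0, 0, 1, 1]]
edge = [[-1, 1],
        [-1, 1]]
print(conv2d(image, edge))  # -> [[0, 2, 0], [0, 2, 0]]
```

During training, a CNN learns the kernel values itself; early layers tend to pick up edges like this one, while deeper layers respond to textures and whole object parts.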

What Intel has come up with is a new and distinctive approach to neural network-based object learning.

The new Loihi neuromorphic chip

Artificial neural networks are composed of layers upon layers of connected input and output units known as neurons. Intel’s Loihi neuromorphic chip comprises around 130,000 artificial neurons, which send information to each other across a “spiking” neural network (SNN).
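The spiking units in an SNN are often introduced through the leaky integrate-and-fire model. The sketch below is a simplified toy version, not Loihi’s actual neuron model or programming interface: the membrane potential leaks a little each time step, accumulates incoming current, and the neuron emits a spike and resets once it crosses a threshold.

```python
def lif_neuron(input_current, threshold=1.0, leak=0.9):
    """Leaky integrate-and-fire neuron: the membrane potential decays
    by a leak factor each step, integrates the input current, and the
    neuron fires a spike (1) and resets when it reaches the threshold."""
    potential = 0.0
    spikes = []
    for current in input_current:
        potential = potential * leak + current  # leak, then integrate
        if potential >= threshold:
            spikes.append(1)   # fire a spike...
            potential = 0.0    # ...and reset the membrane potential
        else:
            spikes.append(0)
    return spikes

# A steady weak input takes a few steps to build up to each spike.
print(lif_neuron([0.4] * 8))  # -> [0, 0, 1, 0, 0, 1, 0, 0]
```

Unlike a conventional artificial neuron, information here is carried in the timing of discrete spikes rather than in continuous activations, which is part of what makes neuromorphic chips efficient at processing event-like sensory input.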

Artificial neurons, also known as nodes, are organized in a manner similar to the human brain and are designed to work much as it does. Loihi chips are particularly good at rapidly spotting sensory input such as gestures, sounds, and even smells.

Using these new models, Intel and its collaborators successfully demonstrated continual interactive learning on Intel’s neuromorphic research chip.

Intel believes that neuromorphic computing offers a way to provide exascale performance in a construct inspired by how the brain works. The goal of this research is to apply similar capabilities to future robots that work in interactive settings, enabling them to adapt to the unforeseen and work more naturally alongside humans.


Intel’s Loihi neuromorphic research chip is a trailer for a future in which real-life robots learn like humans do, helping them get as close to us as possible.

The achievements in the field of AI and robotics in the past few years have been hailed as a ‘new industrial revolution’. AI is certainly generating a lot of buzz, and its scope is increasing at an exponential rate.

A week earlier, on August 31, Meta, the parent company of Facebook, announced that research scientists in its AI lab had developed AI that can “hear” what someone is hearing by studying their brainwaves.

We seem destined for a world of all things ‘artificial’. And who knows: perhaps humans were created artificially in the first place?
