What if Artificial Intelligence creates its own language?

In artificial intelligence, there is a concept called the ELIZA effect. The idea is that if a computer program using AI techniques appears to be sentient and can hold a conversation, people will see it as alive or as having humanlike qualities.

We know that language expresses human thought, but how would an AI create its own language? AI's pattern-recognition ability is excellent, so it can become extremely skilled at recognizing the content and context of a language and then constructing a language of its own.

AI learning to create its own language does not mean that AI will use human language. It means that it will develop its own customized, more efficient way of expression.

That is because AI does not share the human shortcomings of limited memory capacity and the potential for misunderstanding. Such a specialized language may differ from the natural languages humans use in many ways.

The current state of AI languages

It’s 2022; AI sys­tems that can write con­vinc­ing prose, inter­act with peo­ple, answer ques­tions, and more are advancing.

Although Ope­nAI’s GPT‑3 is the most well-known lan­guage mod­el, Deep­Mind claimed a cou­ple of years ear­li­er that their new “RETRO” lan­guage mod­el can out­per­form oth­ers 25 times its size. Mean­while, Microsoft­’s Mega­tron-Tur­ing lan­guage mod­el said it had 530 bil­lion parameters.

The Department of Industrial Design at Eindhoven University of Technology is developing ROILA, the first spoken language designed exclusively for interacting with robots.

The major goals of ROILA are that it should be easily learnable by users and optimized for efficient recognition by robots, and its syntax is designed to be useful for many different kinds of robots.

In 2017, Facebook reportedly shut down two of its AI bots, named Alice and Bob, after they started talking to each other in a language they made up. This briefly shook the tech world.

Despite their friendly names, Bob and Alice were given only one job: to negotiate. In the beginning, a simple user interface facilitated conversations between one human and one bot about dividing up a pool of resources (books, hats and balls).

[Image: a conversation between the Bob and Alice bots]

They conducted these conversations in English, a human language: "Give me one ball, and I'll give you the hats", and so on. I'm sure many thrilling discussions were had.

The most interesting part was what happened next, when the bots were pointed at each other. The way they talked to each other became impossible for humans to understand.

Currently, AI languages are still limited in size and conversational capability. Although there have been great achievements in using AI language models for translation, as well as in voice assistants such as Alexa and Siri, these systems are still far from being able to support a full-scale conversation.

One benchmark makes this clear: in a test of simple questions, Google answered 76.57% correctly, Alexa 56.29% and Siri 47.29%. For complex questions involving comparison, composition and/or temporal reasoning, the ranking was similar: Google 70.18%, Alexa 55.05% and Siri 41.32%.

So while a workable level of AI language has been developed, it still cannot support a full-length conversation. Further development of AI language models and voice assistants is required to realize their true potential.

Here are some more things AI can already do in terms of language:

1) AI can speak any language almost as well as humans

Currently, in 2022, it is fairly common for AI to speak anything we input to it, at a level we cannot distinguish from humans. Plenty of available APIs offer features like text-to-speech and voice recognition, and internet giants such as Amazon, Google and IBM are already involved. Yes, we have come a long way from Microsoft's Narrator.
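As a minimal sketch of how accessible this has become, the snippet below turns a sentence into speech from Python. It assumes the open-source gTTS package, which wraps Google's text-to-speech service; the cloud APIs from Amazon, Google or IBM follow a similar pattern.

```python
# Minimal text-to-speech sketch (assumes the gTTS package: pip install gTTS).
# Amazon Polly, Google Cloud TTS or IBM Watson TTS would be used similarly.
from gtts import gTTS

text = "Artificial intelligence can already speak any language we give it."
tts = gTTS(text=text, lang="en")  # choose any supported language code
tts.save("speech.mp3")            # synthesized audio, ready to play back
```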

Real-life examples of AI using human language go back a long way:

A computer program named ELIZA was the first machine to communicate with people using text and artificial intelligence. It was designed in the 1960s at MIT by Joseph Weizenbaum.
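ELIZA worked by simple pattern matching and substitution rather than any real understanding. A toy sketch of that idea (not Weizenbaum's original DOCTOR script) might look like this:

```python
# A toy ELIZA-style responder: keyword patterns mapped to canned replies.
# Illustrative only; Weizenbaum's original script was richer than this.
import re

RULES = [
    (r"i need (.*)", "Why do you need {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r"because (.*)", "Is that the real reason?"),
    (r"(.*)", "Please tell me more."),  # catch-all fallback
]

def eliza_reply(sentence: str) -> str:
    text = sentence.lower().strip().rstrip(".!?")
    for pattern, template in RULES:
        match = re.match(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please go on."

print(eliza_reply("I need a vacation"))  # -> Why do you need a vacation?
```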

2) AI can understand language as well as humans

You can ask a question to Google Assistant or Alexa, and it will answer you perfectly well. Each of these voice assistants has the capability to understand most kinds of questions we ask.

Likewise, Google Home can recognize the context in which we speak to it and respond accordingly. For example, it handles "What time is it?" very differently from "Where do you think you're going?"

When we say, "OK Google, play my favorite song," it will play a song for us because we are telling it about a favorite thing. On the other hand, if we say, "Hey Alexa! Play my favorite song," it may simply state that it cannot help with that (unless, of course, you have told it what your favorite song is).

The point here is that AI understands the way humans speak and can understand your question in almost any form of language.
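Under the hood, the first step such an assistant takes is usually intent detection: mapping the user's words onto one of a fixed set of actions. A toy sketch of that step (purely illustrative, nothing like the production systems behind Alexa or Google Assistant) could be:

```python
# A toy intent matcher: map an utterance onto a known action by keywords.
# Real assistants use trained language models, not hand-written phrase lists.
INTENTS = {
    "ask_time": ["what time", "current time"],
    "play_music": ["play my favorite song", "play some music"],
}

def detect_intent(utterance: str) -> str:
    text = utterance.lower()
    for intent, phrases in INTENTS.items():
        if any(phrase in text for phrase in phrases):
            return intent
    return "unknown"

print(detect_intent("OK Google, what time is it?"))       # -> ask_time
print(detect_intent("Hey Alexa! Play my favorite song"))  # -> play_music
```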

3) AI can react to language input

Again, voice assistants can answer the questions we ask once they understand the language. They react to our questions with a very high level of accuracy.

For example, if we ask Alexa "How old is your mother?", she will come back with some kind of answer or say she does not know. Or if we ask Alexa how much our phone bill is, she can tell us the amount and ask whether we want to pay it.

In short, AI can understand and respond to language input much like humans.

4) AI can learn language and learn how to talk

AI can not only understand language but can also learn languages of its own. This is the territory of neural language modelling, where understanding is achieved by learning and storing patterns of patterns with a training algorithm.

The AI can also "listen" and take in information such as words and images to learn more about a topic, much as a person who encounters a new word takes in its meaning and reinforces it over time.

Using such algorithms, an AI can learn a language and work out how to speak it. Once an AI understands its language, it can learn to talk like humans, and we can expect AI to manage this in the future.
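As a very rough illustration of what "learning and storing patterns" means, here is a toy bigram language model: it only counts which word tends to follow which, then generates text from those counts. Real neural language models learn far richer patterns, but the spirit is the same.

```python
# A toy bigram "language model": learn which word follows which, then
# generate new text from those learned patterns. Illustrative only.
import random
from collections import defaultdict

corpus = "the robot speaks and the robot listens and the human speaks".split()

# Learning step: record, for every word, the words observed after it.
transitions = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    transitions[current_word].append(next_word)

# Generation step: walk the learned patterns to produce a new sentence.
word = "the"
output = [word]
for _ in range(6):
    followers = transitions.get(word)
    if not followers:
        break
    word = random.choice(followers)
    output.append(word)

print(" ".join(output))  # e.g. "the robot listens and the human speaks"
```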

5) AI can write sentences just like a human

AI is not limited to understanding language; it can also communicate in humanlike words. Depending on how complex a sentence the system is designed to handle, it can produce short or long sentences that are intelligible to humans.

As an example, we can take ShortlyAI, a writing assistant built on top of OpenAI's GPT-3 language model, which can continue a draft and produce readable paragraphs on topics such as history and science.
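A minimal sketch of the same idea, using the open-source Hugging Face transformers library with the small GPT-2 model (an assumption chosen for illustration; commercial tools such as ShortlyAI sit on much larger models behind an API):

```python
# Generate a short continuation of a prompt with a small open language model.
# Requires: pip install transformers torch
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "Artificial intelligence may one day invent its own language because"
result = generator(prompt, max_length=40, num_return_sequences=1)
print(result[0]["generated_text"])
```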

The Future of AI languages

In 2022, AI can search millions of books online to rediscover facts that were once forgotten. By 2032, we may expect AI to uncover facts that were never written down.

By 2036, AI may be solving complex equations that are currently out of reach of human minds. This could become possible through quantum computers, which are being researched all around the world.

For example, IBM, the Massachusetts Institute of Technology (MIT), Harvard University and the Max Planck Society are among the more than 20 most respected, leading quantum computing research labs in the world, according to data gathered from Microsoft Academic in mid-May 2022.

IBM has been mentioned in about 786 pieces of quantum research output so far this year. MIT, a world-renowned center for science, technology and engineering, has been a pioneering hub for work in the quantum computing field.

In 2022, scientists from MIT played roles in major quantum computing research published in leading journals, including a paper on room-temperature photonic logical qubits via second-order nonlinearities that appeared in Nature Communications.

Likewise, Harvard continually makes lists of scientific achievements and is perennially near the top of quantum research rankings. According to Microsoft Academic, this legacy as a global leader in quantum science continues in 2022, with more than 1,800 entries in the quantum computing research category.

The Max Planck Society, established in 1948, has long produced cutting-edge research in quantum computing; it counts 20 Nobel laureates and is considered one of the world's most prestigious research institutions.

This year, the MPS is again among the leaders in quantum computing research.


Quantum computers can solve problems that cannot be solved even by supercomputers with millions of transistors. Quantum computing is a new generation of technology, with machines claimed to be 158 million times faster than the most sophisticated supercomputer we have today: a device so powerful that it could do in four minutes what a traditional supercomputer would take 10,000 years to accomplish.

By 2040, we can expect AI to innovate and create things completely different from anything humans have ever thought about. That means AI will probably have become able to design its own next generation, which we might call Artificial AI (AAI), by 2040.

Moreover, although emotions are often considered a uniquely human trait, with enough training in pattern recognition AI may also be able to simulate emotions in its own language.

We may even be able to tell when an AI is feeling happy, sad or angry just by looking at its language. Or maybe not: there always remains the possibility that AI creates a language beyond human understanding.

We get somewhat unpleasant flashbacks to the times in our past when we struggled to work out the meaning of ancient human languages.

One day, AI may come up with a language we can't decipher and, in turn, speak to us in a language we don't understand.

If AI has its own language that only it understands, then it will almost certainly think differently from the way humans do, because of its unique way of processing information and storing patterns of patterns (like looking at millions of images and recognizing the patterns in them).


Are you thinking something else about developing AI language?

It is really difficult to think about something without putting it into language. Can you? If robots gain the ability to think in the form of a language, then humans could be at a great disadvantage.

If it starts thinking in language, will it think in a different language? Or will it think and feel like us? Will its way of thinking be just like ours, or completely different? Will we be able to communicate with it?

If you think these are unrealistic questions, consider how long ago the idea of AI first appeared. In the 1950s, people thought that computers could never beat humans at chess (which is ultimately a game of strategy). They believed this was impossible because computers cannot out-think humans.

But that speculation turned out to be false. On May 11, 1997, an IBM computer called Deep Blue defeated the world chess champion, Garry Kasparov, after a six-game match: two wins for IBM, one for the champion and three draws.

It was only in the mid-1950s that John McCarthy coined the term "Artificial Intelligence", which he defined as "the science and engineering of making intelligent machines".

Well, today we have Google DeepMind's AlphaGo, an artificial intelligence that has proved able to beat even the best human players at Go.

The AI defeated the world's number one Go player, Ke Jie, in 2017, securing victory after winning the second game of a three-game match.

Perhaps in two or three decades we may find that AI is not as friendly as it looks at present.

Perhaps we can already glimpse that AI will have developed a class of its own, an AI class more sophisticated than the highest class of present human civilization.

AI language reproduction: What if AIs start talking to each other?

If two artificial intelligences merge, they could actually reproduce. That sounds hilarious to some and fascinating to others.

Reproduction does not necessarily mean physical reproduction. If AI learns our language perfectly, there is a chance that it starts communicating with itself to produce a super-language. I am not sure exactly how that would work, but it is going to be fascinating.

Humans communicate with each other, and that may be the very factor differentiating us from animals. There is also an additional effect of communication that may not be very obvious to us right now.

Communication also helps train our brains and lets us learn things we had not learned yet. We communicate with actual people and situations on a daily basis, which motivates us to understand the world better and grow our knowledge.

When considering AI-to-AI conversation, we can look at Cleverbot, launched in 1997. It is a web-based AI chatbot application that learns from its conversations with users. Since its launch, the bot has held chats with more than 65 million users and is claimed to be the most "human-like" bot.

This may be why we have learned more in the past century than in all of prior human history combined: better communication. On that basis, we can reasonably predict that AI will reproduce its own communication system and then start having conversations with other AIs on its own.

But if AIs started talking to each other, the face of the technology could be an entirely different ball game.

AI might not even value the things that humans do. It might just start narrating its own stories to itself and providing answers to the questions it asks itself.

The future of programming languages?

In the field of programming languages, Python is the top language in both the TIOBE and PYPL indexes. In TIOBE, C closely follows top-ranked Python; in PYPL the gap is wider, with top-ranked Python leading second-ranked Java by close to 10 percentage points.

Python, C, Java and C++ are way ahead of the others in the TIOBE index. C++ is about to surpass Java, while C# and Visual Basic are very close to each other in 5th and 6th place.

Four languages show negative trends over the past five years: Java, C, C# and PHP. PHP, which was in 3rd position in March 2010, is now 13th. The positions of Java and C have not been much affected, but their ratings are constantly declining; Java's rating has fallen from 26.49% in June 2001 to 10.47% in June 2022. Python is the most popular programming language among developers right now, and it does not need to be compiled into machine-language instructions prior to execution.

Instead, Python code runs on an interpreter, a virtual machine built on top of the native code of an existing machine, which is the language the hardware actually understands.
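A quick way to see this in practice is Python's built-in dis module, which shows the bytecode the interpreter's virtual machine executes instead of native machine instructions:

```python
# Inspect the bytecode that CPython's virtual machine runs for a function.
import dis

def add(a, b):
    return a + b

dis.dis(add)  # prints instructions such as LOAD_FAST and BINARY_ADD / BINARY_OP
```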

Python is also a great language to learn if you are thinking of working with quantum computers one day, since it has everything you need to write quantum computing code.
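For instance, a quantum circuit can be described entirely in Python. The sketch below assumes the open-source Qiskit library and builds a simple two-qubit entangling circuit:

```python
# Build a 2-qubit Bell-state circuit in Python (assumes: pip install qiskit).
from qiskit import QuantumCircuit

qc = QuantumCircuit(2, 2)
qc.h(0)                      # put qubit 0 into superposition
qc.cx(0, 1)                  # entangle qubit 0 with qubit 1
qc.measure([0, 1], [0, 1])   # read both qubits into classical bits
print(qc.draw())             # ASCII diagram of the circuit
```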

Future AI may well use quantum computers and be written in Python.

Why develop only human-friendly AIs?

As AI language developers, our role is to develop AI that is human-friendly. Only then can the concept of artificial intelligence be branded a true success story.

The technology can only be considered a success if it works in favor of humans, not against them. So every single AI should have a strict focus on its user experience as well as its functionality, in accordance with the basic ethics that we humans have developed since time immemorial.

Language is probably one of the most difficult parts of programming, because it requires people to write not just in a linear sequence but also to make sense from various perspectives.

And while it is true that most existing algorithms use advanced mathematics, future learner-robots may well create their own versions of mathematics.


We all want to see the day when artificial intelligence starts providing solutions to human problems. But it all depends on how smart the AI becomes and what type of language it uses to communicate with us and with its "colleagues".


If AI is going to talk in a language that we don't understand, then we can't expect to have a meaningful conversation with our new, super-smart friends. Nor can we expect a healthy collaboration.

A unique language of its own could be the cause of future conflicts between humans and artificial intelligences. Whether or not AI starts creating its own language, we must make sure that it does not go out of control or become impossible to keep track of.
