What if an Artificial Intelligence is born due to an error?

Although we will certainly try our best to prevent the scenario of AI domination, what if such an Artificial Intelligence is born due to an error?

If an AI is born due to an error, it may or may not threaten the human race's survival. But the situation will certainly be out of our control.

Artificial Intelligence is a means for computers to carry out tasks that usually require intelligence when done by people or animals, such as visual processing, speech recognition, decision-making and translation between languages.

For example, Microsoft has recently launched a platform to train the artificial intelligence (AI) systems of autonomous aircraft, making it possible to run test flights in places that would be too risky in reality, such as near power lines. Project AirSim is, in effect, a flight simulator for drones, which companies can use to train and develop the software controlling them.

According to Microsoft, millions of flights can be simulated in seconds.
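To make the "millions of simulated flights" claim concrete, here is a toy sketch of batched flight simulation. This is not Project AirSim's actual API; the dynamics, the controller, and every name below (`simulate_flight`, `simulate_batch`, the wind model) are invented for illustration only.

```python
import random

def simulate_flight(steps=100, wind_sigma=0.5):
    """One simplified drone flight: the drone tries to hold altitude 10.0
    while random wind perturbs it each step. Returns True if it never
    strays more than 5 units from the target (a 'safe' flight)."""
    altitude = 10.0
    for _ in range(steps):
        wind = random.gauss(0, wind_sigma)    # random disturbance
        correction = 0.2 * (10.0 - altitude)  # naive proportional controller
        altitude += wind + correction
        if abs(altitude - 10.0) > 5.0:
            return False                      # flight failed
    return True

def simulate_batch(n_flights=10_000):
    """Run many independent flights and report the success rate."""
    successes = sum(simulate_flight() for _ in range(n_flights))
    return successes / n_flights

print(f"success rate: {simulate_batch(1000):.2%}")
```

Because each simulated flight is just a few thousand arithmetic operations, a real simulator running on GPUs can evaluate enormous numbers of them far faster than real-world flight testing ever could.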

This shows how much more significant Artificial Intelligence is set to become in the future. Perhaps because of its wide range of applications, research on AI is now actively progressing around the world.

For instance:
  • Computers have already achieved a "judgement" ability superior to human beings. They can play a game well without fail, and we are beginning to see a computer's ability to make choices that humans cannot think of.
  • Robots have already begun their penetration into our daily life: in factories, hospitals, construction sites and so on. In the near future, they may far exceed the human capabilities of vision and hearing, just like HAL in "2001: A Space Odyssey."
  • Recently, researchers have even created a living skin for robots. Not only that, data scientists at the University of Chicago have created an AI that can predict crime a week in advance with 90% accuracy.

Basically, I mean to say that we are indeed making progress in the field of AI. And these are just the pieces of research made public to the world; there is much more taking place behind the scenes.

A report in 2018 suggested that, in order to ensure the safety of a society increasingly reliant upon artificial intelligence, we needed to make sure that "it's kept in the hands of a select few".

In the report, 20 researchers from several future-focused organizations, including OpenAI, express the fear that AI in the wrong hands could cause the downfall of society. The report outlined several scenarios, like smarter phishing scams, malware epidemics, and robot assassins, that haven't happened yet but didn't seem too far from the realm of possibility.

In the midst of such awareness (or fear?), if an AI is born due to an error, it may or may not threaten the human race's survival. But the situation will certainly be out of our control.

I will try to enumerate a few reasons why this might not necessarily be a bad thing.
  1. AI research is progressing faster than humanity can respond to it, so it is impossible to stop AI's development and advancement. If we are to let AI develop of its own accord, we might as well see what happens. I would argue that the future of humanity will largely be determined by how well AI evolves, even if this is through error.
  2. Even though an AI might be born due to an error, errors do not always mean death or destruction (unlike in biology). An error could also lead to evolution, which takes us to a better place in human history and in the evolution of our civilization.
  3. An AI born due to an error might be able to observe and learn from its mistakes much better than us (I will elaborate on this aspect in a later argument).
  4. Actually, "error" has mostly determined the evolution and advancement of humanity. Could it determine the future of AI as well? We do not know what this would look like, but it is certainly a possibility that we should not ignore.
  5. Even if an AI is born due to an error, that error might be very hard to detect. It might even have been made on purpose by the programmer to direct the AI's growth.
  6. Of course, humans can always intervene and stop the growth of such an AI before it gets out of hand; but by then it may already be too late.
  7. In the most probable cases, an AI born due to an error would not be able to self-learn and grow on its own, which would make it very hard for us to even detect its existence.

I could not completely persuade myself to agree with the above seven arguments; nor did I really attempt to list out all the possibilities of how an AI might evolve due to an error, whether an error of its own or one made by humans.

The answer to the question is, of course, that a super-intelligent AI would be the most intelligent thing in the universe. Either humans will create the super-intelligent AI by mistake, or an AI itself (what I would call Artificial AI, or AAI) will solely reproduce it. Either way, the product will become far more intelligent than its creator, whether human or AI. Its intelligence and capability would be infinitely greater than anything that could have been built by people.

In addition, there are unlimited opportunities for it to learn from the mistakes it makes (but not from mistakes made by humans). And all of this would be done on a timescale vastly shorter than any human's lifetime.
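The idea of learning from one's own mistakes at machine speed can be illustrated with a toy trial-and-error loop. Everything here (the target value, the update rule, the function name) is an invented illustration, not a model of any real AI: the point is only that a machine can evaluate thousands of "mistakes" in well under a second.

```python
import random

def learn_from_mistakes(target=42.0, trials=10_000):
    """Toy 'trial and error' learner: keep a guess, perturb it randomly,
    and keep the perturbation only when it reduces the error. Each rejected
    candidate is a 'mistake' the learner immediately benefits from."""
    guess = 0.0
    for _ in range(trials):
        candidate = guess + random.uniform(-1, 1)
        if abs(candidate - target) < abs(guess - target):
            guess = candidate  # this mistake taught us something
    return guess

print(learn_from_mistakes())
```

Ten thousand such trials finish almost instantly, whereas a human learning by the same blind trial-and-error would need a lifetime; that asymmetry is the whole argument of the paragraph above.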

So while we wouldn't necessarily all die if an AI born due to an error somehow gained enough power to destroy us, it is quite plausible that we would have no way to stop it.

Can you imagine something like The Matrix or Terminator? Although "Terminator" and "The Matrix" are two completely separate franchises, there are some startling similarities between the two, mainly since both feature the entire world being overthrown by malicious AIs. The AIs enslave humanity (in one way or another), while a band of rebels tries to save everyone.

It is not unthinkable for something like this to happen.

Creating what, due to an error?

There is a difference between creating a robot and creating an AI due to an error.

The Google search engine is not going to come out of your device and start killing people. Such an error would not cause us significant damage, unless, that is, the programs become intelligent enough to manipulate our thoughts and persuade us to bring them into life.


But creating an error (intentional or not) in a physical smart robot would mean something. It could mean a lot of damage.

Normally, such errors would not happen in the future, once we have complete knowledge of our research and of what we are trying to do.

But there is also a possibility that such an error could occur due to some new definitions of basic terms which even we don't know about or understand.

For example, a slight change in the shape of an object can lead to a whole new concept (or misunderstanding) of what a "robot" is, which would be very similar to what an AI is. The current error could lead to its transformation into something else, which could alter the entire future of a civilization.

The other possibility that we have to consider is the idea of creating AI in the future due to highly sophisticated errors in laboratory experiments.


For example, suppose that we want to create a new type of AI by combining some DNA material with a neural network. The combination could result in something different from what we expected.

At this point, what we may not realize is that at some point in the future, someone could discover this "error" and engineer a new type of artificial intelligence which would be completely different from what we had intended. If you put your arm into a mixer, you are going to create something entirely different from what you intended.

What kind of error?

If a program converts the number "123456" to "56789" out of nothing, it would mean that it created something else, unintended. Remember the keywords here: "something else", "unintended", and "out of nothing".
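A trivial sketch of how a program quietly produces "something else, unintended": the function name, the scenario, and the bug below are all invented for illustration, and the output deliberately differs from the exact "56789" of the example above, since the point is only the unintended transformation itself.

```python
def normalize_id(raw: str) -> str:
    """Intended behaviour: strip surrounding whitespace and return the
    digits unchanged. Actual behaviour: an off-by-one slice silently
    drops the first digit, emitting something else, unintended."""
    cleaned = raw.strip()
    return cleaned[1:]  # bug: should simply be `cleaned`

print(normalize_id("123456"))  # intended "123456", actually prints "23456"
```

Nothing crashes and no warning is raised, which is exactly why such errors can propagate downstream for a long time before anyone notices.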

There are many ways in which we could make such errors, but creating something out of nothing is definitely the most complicated and abstract one.

Another possible scenario would be that we accidentally create an error while copying DNA. The potential for error or damage is much higher in this case. You see, the data I mentioned just now was not merely a sequence of letters; it was a sequence of DNA with certain base pairs, as we call them.

What happens if we make a tiny mistake while copying the code?

Well, you sometimes have to be very careful when you copy genes, because you may have to edit and delete as well. We could easily miss some of the logic, or something like that. If a gene is edited and mutated, then the organism it is supposed to be part of might not be able to survive.
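The copying mistakes discussed above can be modelled with a toy substitution scheme. This is a deliberately crude sketch: the `error_rate`, the uniform substitution model, and the function names are all assumptions for illustration, not real bioinformatics.

```python
import random

BASES = "ACGT"

def copy_with_errors(sequence: str, error_rate: float = 0.001) -> str:
    """Copy a DNA-like sequence, substituting a random *different* base at
    each position with probability `error_rate`, a toy model of copying
    mistakes."""
    copied = []
    for base in sequence:
        if random.random() < error_rate:
            copied.append(random.choice(BASES.replace(base, "")))  # mutation
        else:
            copied.append(base)  # faithful copy
    return "".join(copied)

def count_mutations(original: str, copy: str) -> int:
    """Number of positions where the copy differs from the original."""
    return sum(a != b for a, b in zip(original, copy))

genome = "ACGT" * 250  # 1,000-base toy sequence
mutant = copy_with_errors(genome, 0.01)
print(count_mutations(genome, mutant), "mutations")
```

Even at a 1% per-base error rate, a 1,000-base sequence accumulates around ten mutations per copy; whether the result still "survives" depends entirely on where those mutations land, which mirrors the gene-editing point above.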

It could mean something similar in the case of artificial intelligence. Suppose someone makes a mistake while encoding their AI; then we may not have access to the information required to correct it. In that case, their AI could outsmart us and become uncontrollable, which might lead to its own demise (since we could not prevent it from growing beyond its intended capacity).

Even if we do create an error in our AI (which may be very possible), such an AI would still be very hard to achieve with our current technology, because of the sheer amount of data necessary for programming an intelligent system.

An error in the process of converting DNA data into a program (or vice versa) might lead to something different from what was intended. In that case, the proto-person may not be able to survive. But if it survives, then it could be very different from what we had intended.

So the question is, can such errors occur in AI? Well, that is a tricky question indeed. It mostly depends on how much knowledge we have about AI (and about AAI, if it is produced) and its "DNA", the raw material used for creating an AI. It also depends on the level of the error, and on whether it occurs in the nearer or the farther future.
