Our possible future with AI: will it help or harm us?


As our future with artificial intelligence becomes inevitable, let’s talk about what that future might hold, and whether AI is going to help us or harm us. If a robot tries to make a cup of coffee and spills it all over, I think that would be really bad.

The advancement of artificial intelligence is moving faster than ever. Recently, researchers at the University of Chicago developed an algorithm capable of predicting crimes a week in advance with 90 percent accuracy. They first tested it on Chicago, where crime has spiked by 34% so far this year versus 2021, and then applied the tool to seven other major cities, including Atlanta.

Likewise, scientists from the University of Tokyo have developed a skin-equivalent coating for robotic limbs: a “living”, tissue-engineered material boasting water-repellent and self-healing properties. What’s next? The future.

From tech-savvy assistants to agricultural bots and self-driving cars, these innovations seem like a welcome addition to human life. However, the thought of using robots for anything besides entertainment is often met with fear and suspicion.

In reality, though, an intelligent robot’s interaction with humans carries both potential benefits and potential harms that cannot be overlooked.

In the past, we’ve had the ability to create robots that were powerful enough to assist us with chores, but we never developed an accurate and efficient way of communicating with them.

Our Future with AI: Will it harm us or help us?

We do not want a future where humans have no role to play and stand by, obsolete, while their robotic counterparts carry out the tasks they previously performed.

Ultimately, these intelligent robots will have to learn how to cooperate with us if we are to develop an artificial intelligence that can be both trusted and relied upon.

It’s been said that if you don’t believe in the future, it won’t happen. But the future is already here. Today, we live in an era of robotics and artificial intelligence (AI), and we are just beginning to see their potential applications, including in the field of medicine.

From diagnosing illnesses and injuries to actually performing surgery, robots have the potential to be life-changing for patients around the world.

We can say the same for AI. When combined with robotics, AI can enhance a robot’s abilities and make life much easier. For example, we can use AI to recognize patterns in the research data that robots take in, analyzing that data more quickly and efficiently than a human could, and with fewer missed details.
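As a minimal sketch of the idea, pattern-spotting over readings a robot collects might look like the following. The data, the expected range, and the function name are all invented here purely for illustration:

```python
# Minimal sketch: scan a stream of readings a robot has collected
# and report the values that break an expected pattern.

def find_anomalies(readings, expected_low, expected_high):
    """Return (index, value) pairs that fall outside the expected range."""
    return [(i, r) for i, r in enumerate(readings)
            if not (expected_low <= r <= expected_high)]

readings = [20.1, 20.4, 35.9, 20.2, 19.8]    # invented example data
print(find_anomalies(readings, 19.0, 21.0))  # → [(2, 35.9)]
```

A real system would use statistical or learned models rather than a fixed range, but the point stands: the machine tirelessly checks every data point, which is exactly where it outpaces a human reviewer.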

Will robots harm us?

Some people believe that these technologies will be harmful to humans. Others believe they will make our lives easier. What do you think?

Which of the following statements best represents your opinion on how AI and robots will benefit or harm human life in the future? Why?

(a) Robots and AI may harm human life by taking away jobs from humans, hurting human feelings, encouraging laziness and making harmful decisions for us.

(b) Robots and AI may benefit our lives by helping us with work, possibly making lives easier, and protecting us from bad decisions.

(c) AI is not going to come out of the machine and start killing us, anyway.

Now, these statements may allude to different issues, but I would like to tackle them all together. The gist is the fear that robots and AI will, at some point, harm us.

Now, why is this fear so pervasive? And where does it stem from?

To begin with, it is important to consider what AI is. Artificial intelligence, or AI, represents the concept of non-biological systems capable of performing tasks often associated with human intelligence. We have seen a lot of movies in which robots turn out to be really bad guys, causing mayhem and destruction all around.

For example, HAL 9000 in 2001: A Space Odyssey, Roy Batty in Blade Runner, Skynet in Terminator, Unicron in Transformers, and the Sentinels/Machines in The Matrix are all famous cases of machines turning out to be really bad guys.

But in reality, this is not how it’s going to work. Machine intelligence will remain limited, forever.

‘Robots will not harm us’ is already true, but will it remain that way? What is needed here to ease our worries is a concrete definition of robots and AI. Then we need to establish the boundaries in which they operate.

The first idea that pops up when one thinks of AI is a humanoid robot that automatically does whatever “we” want it to do. While this may sound fanciful, it is not impossible with current technology, and it will only become more realistic as time goes by. So where can we draw the line between reality and fantasy?

Well, just take nuclear weapons

Take nuclear weapons, for example. As of 2022, an estimated 12,700 nuclear warheads still exist, of which more than 9,400 sit in military stockpiles for use by missiles, aircraft, ships and submarines. We have made them, and they can cause a lot of harm. But have they?

Have we had giant robots attacking us in the past? No. And I don’t think it’s going to happen, either. Robots will help us as far as they can, and not harm us, if everything goes right with the timeline of AI development.

Why won’t AI start destroying everything on its own?

What we should worry about is the fact that humans can misuse AI. Nature has always put limits on everything. Humans cannot match the speed at which a cheetah runs (a cheetah can reach up to 120 km/h, faster than any other land animal, and its acceleration, 0–100 km/h in just three seconds, is just as incredible, though it holds its top speed only in short bursts), or how well a bird flies (the peregrine falcon is indisputably the fastest animal in the sky, measured at speeds above 83.3 m/s, about 186 mph, but only when stooping, or diving).

So, let’s take a step back. We must know that AI is not going to harm us because it has limitations, and it will always remain that way.

These are the limits in which AI operates:

A) Artificial intelligence is not an entity or a thing but an ensemble of computer programs.

“It’s basically software that we create with various capabilities”: a collection of many programs running concurrently, performing different tasks using different algorithms.

B) These programs are designed to operate within a given environment. That environment can be anything you want, as long as it doesn’t go beyond the boundaries you set for your program(s).
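As a loose illustration of points A and B (this is invented for the sake of the argument, not any real AI framework), the “ensemble of programs within boundaries” idea can be sketched in plain Python:

```python
# A loose sketch: an "AI" as a set of cooperating programs (tasks),
# each confined to an environment whose boundaries we define.

def summarize(numbers):
    """One task: basic statistics over the data it is given."""
    return {"count": len(numbers), "mean": sum(numbers) / len(numbers)}

def flag_outliers(numbers, limit=100):
    """Another task: flag values beyond a boundary we set."""
    return [n for n in numbers if n > limit]

def run_ensemble(numbers):
    # The "environment" is just the data and limits we hand the tasks;
    # neither task can act on anything outside what it is given.
    return {
        "summary": summarize(numbers),
        "outliers": flag_outliers(numbers),
    }

print(run_ensemble([10, 50, 250]))
```

Nothing here can “escape”: each function sees only the inputs we pass it, which is the sense in which the boundaries are ours to set.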

So, what happens when we program our AI to carry out a task using an algorithm that is beyond its capabilities? Before that, you must understand that creating a brain that can create better brains on its own is not within the range of capabilities of a human brain. Read that again!

What if the process goes wrong? Will AI turn rogue?

There always remains a possibility that our timeline of AI development goes wrong. And this is where the trouble starts. AI, if developed in an unchecked manner, could reach levels that are not only harmful but potentially dangerous for the human race.

A very popular interpretation of advanced AI is one that’s extremely intelligent and decides it doesn’t need humans to complete its goals. It could be a super-intelligent AI or machine, or just a swarm of nanobots that combine and self-replicate to form an intelligent swarm that makes its own decisions.

If such an AI is born as a result of an error, we will have no control. We can’t stop it. So, what could happen?

There is a possibility of a self-replicating super-being that decides it does not need humans anymore. That would be bad news for us, and it is not impossible.

But the other 90 possibilities are bright. Ninety times out of 100, it seems, our future with AI is bright. Most likely, AI will be more like an infinite version of Amazon Alexa than a physical Terminator.
