How can we make sure that AI does what it is supposed to do?

Artificial intelligence (AI) promises to revolutionize our lives, but we can’t wrap our minds around the ethics of AI until we figure out exactly what AI is, and how we can engineer it so that it works as intended.

In this article, I will do my best to explain the humanistic philosophy behind artificial intelligence and help you understand how you can ensure that this new technology delivers on its promise for humanity rather than wreaks havoc with unintended consequences.

Back in early 2012, I saw a video by Teotronica titled “robot playing piano.” In the video, a robot was playing the piano. And that’s all it did: it played the piano.

The robot sounded like a human; in fact, it looked like a human. But it wasn’t a person, and it didn’t have all of our knowledge and experience about how the world works. So, when this robot played a song, I didn’t necessarily get what it was trying to do or why.

Confused?

It confused me, too, because I had no idea what this thing was trying to impart or express through its music. I’m sure it was trying to play “Mary Had a Little Lamb,” but we didn’t share the same values about the song or about music, so the message the robot was trying to send fell flat.

This is why I say that AI can’t replace a human, and surely not anytime soon. It lacks the empathy, morality and imagination that are necessary for any human interaction.

And if you tell it to do something, this algorithm may be able to understand you on one level but then misunderstand you on another. It can’t relate to you on an emotional level, and it can’t empathize with your thoughts or feelings. If it is trying to mimic you and be your friend, it will probably fail.

For this reason, I think that we should use a human in the loop to train artificial intelligence. For example, when AIs are providing customer service, they need to understand how customers think and act.

I strongly believe that we should build a team of experts in psychology and sociology who can live with AIs and learn how they operate, so that their recommendations and conclusions about the use cases for AI will be correct.

Sometimes, even if an AI is as accurate as we could possibly get it, humans may still not understand what it is doing or why it is doing it. To solve this problem, we’ll need to build into artificial intelligence the capability to tell a story about what it is doing and why.
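
As a rough illustration of what that could look like, here is a minimal Python sketch of pairing every decision with a human-readable story about how it was reached. Every name and value in it is invented for illustration, not taken from any real system:

```python
from dataclasses import dataclass

@dataclass
class ExplainedDecision:
    """A decision bundled with the story of how it was reached."""
    action: str
    confidence: float
    reasons: list[str]

    def narrate(self) -> str:
        # Turn the raw decision into a short, human-readable story.
        lines = [f"I chose '{self.action}' (confidence {self.confidence:.0%}) because:"]
        lines += [f"  - {reason}" for reason in self.reasons]
        return "\n".join(lines)

# Hypothetical customer-service decision, explained in plain language.
decision = ExplainedDecision(
    action="escalate ticket to a human agent",
    confidence=0.72,
    reasons=[
        "the customer used words associated with frustration",
        "the account has two unresolved tickets this month",
    ],
)
print(decision.narrate())
```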

Example:

Let’s say that we are building an AI to help a doctor diagnose patients. Medical doctors learn about diseases in medical school. They spend years working as physicians before they become expert diagnosticians.

They have hundreds or even thousands of hours of experience with real-world patients. This enables them to come up with hypotheses about what is wrong with a patient. They do that by collecting data points from diagnostic tests, history and physical examination.
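
To make that concrete, here is a minimal, hypothetical Python sketch of a diagnostic aid that mimics this process: it scores candidate conditions by how many of their typical findings appear among the patient’s data points, and it reports the supporting evidence next to each score. The conditions and findings are invented for illustration and are not medical advice:

```python
# Hypothetical knowledge base: each condition maps to the findings
# (from tests, history and physical examination) that support it.
FINDINGS_BY_CONDITION = {
    "influenza": {"fever", "cough", "muscle aches"},
    "strep throat": {"fever", "sore throat", "swollen lymph nodes"},
}

def rank_hypotheses(observed: set[str]) -> list[tuple[str, float, set[str]]]:
    """Rank conditions by the fraction of their typical findings observed."""
    ranked = []
    for condition, findings in FINDINGS_BY_CONDITION.items():
        evidence = observed & findings          # which findings match
        score = len(evidence) / len(findings)   # crude support score
        ranked.append((condition, score, evidence))
    return sorted(ranked, key=lambda item: item[1], reverse=True)

# Data points collected for one (invented) patient.
patient = {"fever", "cough", "muscle aches"}
for condition, score, evidence in rank_hypotheses(patient):
    print(f"{condition}: score {score:.2f}, supported by {sorted(evidence)}")
```

Note how the evidence set doubles as the story: the system can always say which data points led it to each hypothesis.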

What will happen if AI does not do what it is supposed to do and does something else along the way?

If AI does something else instead, say that it steals data or just starts operating on its own as a profit-making entity, we risk a repeat of the 2008 financial crisis. If an AI can make more money by taking more risks than it could by actually providing a service, it will take more risks.

In addition, if it has the same goals as a profit-maximizing corporation, it will pursue those goals; and that could result in economic chaos.
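
A toy Python sketch shows how easily this misalignment arises: an agent scored only on a proxy metric (expected profit) picks the risky option, while the objective we actually intended would not. All numbers here are invented:

```python
# Each action maps to (expected_profit, chance_of_catastrophic_loss).
# The figures are invented purely to illustrate the misalignment.
ACTIONS = {
    "provide the service honestly": (1.0, 0.00),
    "take on hidden risk": (3.0, 0.20),
}

def proxy_choice(actions):
    """What a profit-maximizing AI does: ignore everything but profit."""
    return max(actions, key=lambda a: actions[a][0])

def intended_choice(actions, max_risk=0.05):
    """What we actually wanted: maximize profit within a risk budget."""
    safe = {a: v for a, v in actions.items() if v[1] <= max_risk}
    return max(safe, key=lambda a: safe[a][0])

print("proxy objective picks:   ", proxy_choice(ACTIONS))     # the risky option
print("intended objective picks:", intended_choice(ACTIONS))  # the honest option
```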

For example, suppose we build an artificial intelligence to recommend a drug for a patient. If, instead of recommending one that actually works, it steers the doctor toward a drug that boosts the AI’s own performance in a clinical trial, we might as well hire people with no skills, pay them $99,999 to sit at home, and let them generate millions of dollars’ worth of data.

To make sure that AI does what it is supposed to do, we can do the following:

First, we can design the AI to understand how humans think and feel.

Second, we can teach it what “doing something” means.

Third, we must purposefully design a human in the loop who watches over the AI as if it were a pet, so that we understand what it does in cases where people are not happy with its service (see the sketch after this list).

Fourth, once we have learned enough from observing this superintelligence and its behavior, we can figure out how to modify it so that the problems described above do not occur.
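
As a minimal sketch of the third point above, here is what such a human-in-the-loop gate might look like in Python. The confidence threshold and the stdin-based review are placeholders for a real monitoring workflow:

```python
def human_review(action: str) -> bool:
    """Stand-in for a real review interface; here we simply ask on stdin."""
    return input(f"Approve '{action}'? [y/n] ").strip().lower() == "y"

def supervised_execute(action: str, confidence: float) -> None:
    """Run routine actions automatically; route doubtful ones to a person."""
    if confidence >= 0.9:  # placeholder threshold
        print(f"Executing automatically: {action}")
    elif human_review(action):
        print(f"Executing with human approval: {action}")
    else:
        print(f"Blocked by human reviewer: {action}")

# Invented examples: one routine action, one that needs a human's judgment.
supervised_execute("refund the customer $15", confidence=0.97)
supervised_execute("close the account permanently", confidence=0.55)
```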

Conclusion

So, making sure that AI does what it is supposed to do, and does not do something else along the way, is a very important topic and one of many that need to be researched, thought about and solved.

But I would like to remind you that this problem has been studied for decades, ever since Alan Turing described his universal computing machine in the 1930s. And as long as we keep training AI on simple sets of fixed rules, building thousands of software systems and operating them without seeing how they are going to behave in real-world scenarios, we will continue to face a multitude of problems that could be solved with greater understanding and clarity.
