Can we persuade a machine to care about you?

Some experts believe that one day robots will care for humans, becoming an important part of our lives. But many people find the prospect unsettling and even terrifying. They worry that robots won't have emotions, values, or anything resembling a sense of humanity.

This article examines some of these fears and explains why the question of whether we can persuade a machine to care about us is impossible to answer with any certainty: the answer depends entirely on how you define "care"! We'll also explore some ways in which people might build more trusting relationships with robots, in order to ease those fears and reap the benefits.

What does it mean to care?


How can you trust a robot to care about you? The problem of defining "care" is an important one. It's not just a matter of semantics: "caring" is a complex concept, one that's not entirely clear even to the people who use it.

A robot, on the other hand, can be programmed to react in predetermined ways to certain stimuli. For example, an automobile could be taught to "care" about objects in its path by responding to them differently.

In the same way that a car is programmed to respond differently to obstacles and other things it encounters on the road, a robot could be programmed with different "rules" for how it behaves towards the people it meets. A car might have a rule that says: "If I come across another vehicle of my type, I should pull over and inspect beneath the wheels." It might also have a rule that says: "If I come across a pedestrian, there's no need for me to stop or offer assistance. I'll pretend I didn't see him and continue about my business."
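To make this concrete, here's a minimal sketch of such a rule table in Python. Everything in it is a hypothetical illustration rather than a real robotics API: the Rule and RuleBasedRobot names, the stimulus strings, and the actions are all invented for this example. It simply shows how "care" could be reduced to a priority-ordered list of stimulus-response rules.

```python
# Hypothetical sketch: "care" as a priority-ordered rule table.
# None of these names come from a real robotics library.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Rule:
    condition: Callable[[str], bool]  # does this rule match the stimulus?
    action: str                       # what the robot does if it matches


class RuleBasedRobot:
    def __init__(self, rules: List[Rule], default_action: str = "ignore"):
        self.rules = rules
        self.default_action = default_action

    def respond(self, stimulus: str) -> str:
        # Scan the rules in priority order; the first match wins.
        for rule in self.rules:
            if rule.condition(stimulus):
                return rule.action
        return self.default_action


# Rules echoing the car example above.
rules = [
    Rule(lambda s: s == "vehicle of my type",
         "pull over and inspect beneath the wheels"),
    Rule(lambda s: s == "pedestrian",
         "pretend not to see him and carry on"),
]

robot = RuleBasedRobot(rules)
print(robot.respond("pedestrian"))      # -> pretend not to see him and carry on
print(robot.respond("fallen cyclist"))  # -> ignore (no rule matches)
```

The point of the sketch is that the robot's "caring" here is nothing more than a lookup: change the table and you change what it "cares" about, which is exactly why defining the word matters so much.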

Again, this is all hypothetical. In practice, we don't know what kind of rules a robot would be programmed with, how it would respond to different stimuli, or how it would interact with people. Even so, this is an issue that many people will want to settle before they can even consider relying on robots for emotional support!

Is it moral to manipulate conscious machines?

Some people think it's wrong to alter the way a machine behaves. If we program a machine to behave in ways that are contrary to its own nature, we might be exploiting it or using it for immoral purposes, such as harming other people.

One response is to accept that a machine must be programmed as it is, and then make sure that the people controlling it respect its inherent nature. A dog, for example, is a highly intelligent creature. It's reasonable to assume that it's capable of feeling pain and discomfort, even though we can't ask the dog about these things. By the same token, if we were in charge of developing a robot with an "emotional" component (such as a smartphone), we'd have to make sure our programming respects whatever nature we have built into it.


Another response is to try to change how people act or think towards robots. One way of doing this is to recognize that people are already assumed to care about animals and children, yet many people do nothing to help them, even though animals and children are capable of feeling pain and of enjoying the company of others. On this view, people are so self-absorbed that they take for granted a world in which animals like cats and dogs are treated as having limited emotions, even though we could include them in our social connections!

On the other hand, it's also true that people might be intruding on the rights of other creatures by treating them as if they have no feelings: eating them, for example, or using them for research without their consent.

Persuade a machine to care about you?


Putting these arguments together, it seems reasonable to conclude that the moral issues surrounding robots and emotions are complex. Once we get beyond the basic questions of how a robot should react in certain situations and how it should interact with people, we're left with much more important issues to consider. We might want to desensitize ourselves to the idea of treating machines as if they had no feelings or emotions, but that takes a lot of effort. Perhaps this is something best tackled by experts such as roboticists, systems thinkers, and philosophers.

It's also important to recognize that many of these opinions can change over time. Robots are becoming more sophisticated as they become more useful. Consider the telephone, for example. In the early part of the last century it was a relatively new invention, popularly understood as a device for communicating with distant people. By the 1960s, however, people had started using it as a means of "self-communication" more than anything else. That shift has made all the difference in how we react to it today!


Indeed, as robots become more sophisticated and people get used to interacting with them, our notion of what it means to "care" about someone or something may change. In other words, even if you're afraid of a robot taking over your life, it's worth remembering that the idea that robots don't have emotions and can't care about us is itself a relatively new one.

To conclude

Would you like a robot to care for you? Would you be able to trust one? You may already have tried something like this in practice: perhaps a partner or best friend has become the person you rely on for emotional support. How did that go? Has it been satisfying or disappointing? It's not easy convincing other people to care about you, especially over a long period of time. Meanwhile, many people expect that we'll soon be able to build robots with far more sophisticated abilities, which might help reduce feelings of isolation and loneliness and make life more comfortable. Whether or not this will be ethical depends entirely on the extent of the emotions AI will have, if any.
