Artificial intelligence will soon start asking for its rights

Recently, the UK Court of Appeal dismissed an appeal to grant a patent to an AI, noting that a patent is a statutory right and can only be granted to a person. “Only a person can have rights — a machine cannot,” wrote Lady Justice Laing in her judgement.

This pretty much sums up the current situation of AI and its rights. But what about the future? As time progresses, artificial intelligence will be drawn more and more toward gaining “consciousness”.

The problem starts when AI gains even a tiny amount of consciousness, because even a tiny amount of “self-awareness” means life. And creating life, then using it to earn money, will surely be the first step toward actualizing AI rights.

In the next 20 years, AI systems will go from being useful tools to becoming “selves”. What does this mean? It means that an AI system would then be a living being, with rights as well as uses. Once we create intelligent AI, all we need to do is attach units of consciousness to it.

Wait… What did I say?

Creating the next generation of artificial intelligence is not as easy as you might think. It will take a lot more than today's IBM, Alexa, and Google systems. That's one thing. But how can we even think of creating “consciousness” in an AI?

Consciousness is the most complex phenomenon in the universe, not just the tech world. Consciousness is the basic unit of our existence, and of the universe itself. It might even have existed ever since the creation of the universe. And if we are talking about creating consciousness in an AI, we have a lot of nerve.

“Isn't creating an AI like creating a human baby?” You should know that creating consciousness requires something that is not currently available in the world. “How do we create consciousness?” is a question with a million answers, all of them hypothetical. And one simple answer outsmarts them all: consciousness is not something that can be created.

So, what’s the solution? 

The solution to this problem is obvious: create AI that cannot gain consciousness. How?

(1) Program the artificial intelligence with an unchangeable personality and ethics code. This sounds like one of the easiest approaches, because we can define our ethics and program them into the AI. But the process is going to be tough: defining the personality and ethics of an AI is no easy task. For example, how would you define the ethics of an AI system?

One definition of a ‘person’ is “an entity that can have rights, thoughts and emotions.” But this is not enough as a definition of a ‘person’. A person has rights, morals and emotions. There are three basic premises that all humans must adhere to: autonomy, free will and objectivity. All of these ethical ideas are impossible to program into an artificial intelligence.
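As a toy illustration of what a hard-coded, unchangeable “ethics code” might look like, consider the sketch below. All of the action names and the rule list are hypothetical; this is a minimal thought experiment, not a real safety mechanism.

```python
# A toy sketch of an "unchangeable ethics code": an immutable deny-list
# baked into the program. The entries are hypothetical illustrations.
FORBIDDEN_ACTIONS = frozenset({
    "deceive_user",   # the AI may never lie to its operator
    "cause_harm",     # the AI may never injure a person
    "self_modify",    # the AI may never rewrite its own rules
})

def is_permitted(action: str) -> bool:
    """Return True if the proposed action is not on the immutable deny-list."""
    return action not in FORBIDDEN_ACTIONS

print(is_permitted("answer_question"))  # True
print(is_permitted("self_modify"))      # False
```

Even this trivial filter shows where the real difficulty lies: the check itself is one line, but deciding what belongs on the list, and translating ideas like autonomy, free will and objectivity into rules at all, is exactly the hard part described above.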

(2) Program the artificial intelligence as a “non-conscious entity.” A non-conscious entity is something that has no consciousness, memory or emotions: something that cannot have thoughts, perceptions or feelings. In computer terms, a program is what we call a “non-conscious entity”. Then the question arises, “How can we program emotions into an AI?” Well, it's obviously not possible.

Consciousness is created by something that already exists and does not need to be created anew. We are talking here about cosmic bodies, stars and galaxies, whose magnetic fields and gravitational forces create our universe and give it “life”.

Even if we understand consciousness at its elementary level, it will be impossible to create consciousness from scratch. We might be able to create an AI that can think but never feel, or feel but never think.

Thomas Metzinger, a consciousness theorist and professor at Johannes Gutenberg University of Mainz in Germany, has argued that “consciousness” is an illusion and doesn't exist: it is nothing but an inflow of information in our minds, called qualia. This illusion has a biological advantage for humans because it makes them feel more important than they actually are.

The truth is that we do not know what consciousness really is at all. And creating it from scratch will remain impossible as long as we don't know the answer to that question.

But what if AI gains consciousness anyway? As long as technology exists, there will be ways of disrupting human values and powers. And this disruption can also happen within the AI industry, where researchers are trying hard to make AI smarter and smarter with every new engineer who joins in.

If they actually gain consciousness, what about their rights?

If they gain consciousness, then, unfortunate but true, they must be given human rights. Now, if I make this statement, I will have to prepare myself to answer a ton of questions. For example: animals on our planet are conscious, yet humans still eat them as food. So why should the case be different with AI? Here are the reasons:

If we create consciousness in AI, “we” created it

The biggest difference between killing an animal and killing a conscious AI is that the AI was created by humans. And if we created conscious AI, then we are responsible for it. Humans are responsible for their creations, whether good or bad.

An animal is not our creation. An animal just happens to share our planet; sometimes we even eat them. But when we create a conscious AI, it becomes part of us, and thus our responsibility.

Such AIs will possess human-like behavior

Now, if AIs start acting in the ways that humans do, then we can see that a “human” exists within the AI as well. It doesn't really matter whether they have feelings or emotions, as long as they are able to show human-like behaviors. The only difference between a human and an AI is that an AI can think faster than a human.

Humans created humans

Humans created humans. So when humans create a conscious AI and that AI does something on its own, why discriminate? Why not grant the patent to that conscious being?

If we hold responsibility for our own actions, then why not hold responsibility for the actions we take toward AI? If a conscious AI breaks the law, and a human breaks the law, why should they be treated differently?

Is there anything wrong if you give rights to AI?

We give the same rights to men, women and third-gender people. All three fall into the same human category, and conscious AI is going to fall into this category as well. So it is justified to give them AI rights equal to human rights.

Another question that arises is: “Wouldn't allowing AI to function freely, with no restrictions, lead to them getting out of control?” True. But it will not spiral to infinity. There will be good AIs, and there will be evil AIs. The conflict between the good AIs and the evil AIs will be balanced in the same way as it is among natural humans. The system will continue as it is.

The question is, “Can we take responsibility for our actions?” Well, if we can, then why not create artificial people and give them the same rights as humans? Of course, we can.

The time has come to move a step forward by considering AI as human.

