AI can help reduce the biased point of view of humans

Main points of the article:

  • AI is not biased on its own, and it doesn’t judge according to characteristics such as race, religion, facial appearance, or sexuality. It can use algorithms that adjust for this data.
  • We don’t have to fear AI taking over our lives, but we do need to be aware of its capabilities and follow best practices in order to avoid problems.
  • AI can also help us identify fraudulent transactions, though it can be abused by malicious actors for harmful ends.

The human point of view is biased, and that bias sometimes causes unfairness and discrimination. So AI can demonstrate its “fairness” by helping remove the bias.

A better understanding of the phenomenon of bias in AI reveals, however, that AI merely exposes and amplifies implicit biases that already existed but were overlooked or misunderstood.

AI is not inherently affected by biases related to race, gender, and age, and it is resistant to the cognitive biases and inconsistencies that trouble humans. The only reason we observe bias in AI at all is that humans occasionally train it on biased data and build heuristic flaws into it.

Since these biases were discovered, all of the major technology companies have been working to improve datasets and eliminate bias. One way to eliminate bias in AI is by using AI! If that seems unlikely, read on.

Using AI to Eliminate Bias in Hiring

The classic example can be found in hiring. Across the spectrum of the most-coveted employment opportunities, women and people of color are notoriously under-represented.

The phenomenon is self-perpetuating: as new hires become senior leaders, they in turn become responsible for hiring. Affinity bias ensures that “people like me” continue to get hired, while attribution bias justifies those choices on the basis of past hires’ performance.

But when AI is given a bigger role in recruiting, this can change. Tools like Textio, Gender Decoder, and Ongig use AI to scrutinize job descriptions for hidden biases around gender and other characteristics.

For example, racial bias can be present (knowingly or unknowingly) in every aspect of the hiring process. To reduce such bias everywhere from job descriptions to resume screening to interviewing, some companies have adopted tools like Ongig’s Job Description Text Analyzer, which quickly scans for and identifies “exclusionary phrases” that could put off readers based on their ethnicity, primary or secondary language, or immigration status.
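To make the idea concrete, here is a minimal sketch of how a keyword-based job-description scanner might work. The word lists are illustrative (a few terms echo academic research on gender-coded job-ad language, such as Gaucher et al., 2011), not any vendor’s actual lexicon, and real tools like Textio or Ongig’s Text Analyzer use far more sophisticated models.

```python
# A sketch of keyword-based screening, loosely inspired by tools like
# Gender Decoder. The word lists are illustrative assumptions, not any
# vendor's actual lexicon.
import re

MASCULINE_CODED = {"aggressive", "dominant", "ninja", "rockstar", "competitive"}
FEMININE_CODED = {"supportive", "nurturing", "collaborative", "interpersonal"}

def flag_coded_terms(job_description: str) -> dict:
    """Return any gender-coded terms found in a job description."""
    words = set(re.findall(r"[a-z]+", job_description.lower()))
    return {
        "masculine": sorted(words & MASCULINE_CODED),
        "feminine": sorted(words & FEMININE_CODED),
    }

print(flag_coded_terms(
    "We need an aggressive, competitive ninja to join our collaborative team."
))
# -> {'masculine': ['aggressive', 'competitive', 'ninja'],
#     'feminine': ['collaborative']}
```

A real analyzer would go well beyond word lists, but even this toy version shows the workflow: scan the text, surface the loaded phrases, and let a human rewrite them.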

To enable hiring managers to focus primarily on candidates’ qualifications and experience, companies like Knockri, Ceridian, and Gapjumpers apply AI to erase or ignore factors that identify gender, national origin, skin color, and age.

By assessing candidates’ soft skills objectively or disguising a candidate’s voice in a phone interview to hide their gender, several of these methods help lessen recency bias, affinity bias, and gender bias in the interview process.
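As a minimal sketch of the “blind screening” idea, suppose a resume arrives as structured fields; the field names below are hypothetical, and real tools like Knockri work on much richer data (video, audio) with learned models rather than a hard-coded field list.

```python
# A sketch of "blind screening": strip fields that could reveal gender,
# age, or national origin before a recruiter sees the profile. The
# field names are hypothetical assumptions.
SENSITIVE_FIELDS = {"name", "photo", "date_of_birth", "nationality", "gender"}

def redact(resume: dict) -> dict:
    """Keep only fields that describe qualifications, not identity."""
    return {key: value for key, value in resume.items()
            if key not in SENSITIVE_FIELDS}

candidate = {
    "name": "Jane Doe",
    "gender": "F",
    "date_of_birth": "1990-04-02",
    "skills": ["Python", "SQL"],
    "experience_years": 8,
}
print(redact(candidate))
# -> {'skills': ['Python', 'SQL'], 'experience_years': 8}
```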

Using AI to Remove Bias from Images

Bias can also come from an image itself and its background. In this case, AI can be trained to identify and remove the bias. We do not need images that are 100% bias-free, because we only give the algorithm a few key characteristics of an image.

The use of AI in identifying bias comes in two forms: machine learning and computer vision.

Machine learning algorithms are programs that learn from data, rather than being pre-programmed with specific rules about which features should be considered important or unimportant.
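As a toy illustration of that distinction, the sketch below fits a decision tree on a handful of made-up examples; nobody writes a rule like “more than five years of experience means qualified”, the model infers the boundary from the labeled data.

```python
# A toy example of learning from data: fit a decision tree on labeled
# examples instead of hand-coding a rule. The data is made up.
from sklearn.tree import DecisionTreeClassifier

# Each row: [years_of_experience, num_certifications]; label 1 = hired
X = [[1, 0], [2, 1], [0, 0], [8, 3], [10, 2], [7, 4]]
y = [0, 0, 0, 1, 1, 1]

model = DecisionTreeClassifier(random_state=0).fit(X, y)

# The model, not a programmer, decided which feature thresholds matter.
print(model.predict([[5, 2]]))
```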

Computer vision algorithms have a variety of applications, ranging from the facial recognition technology used in Apple’s iPhone X to the advanced navigation systems used in self-driving cars.

When your device is unlocked using biometrics such as Face ID, it is using artificial intelligence to enable that functionality. Apple’s Face ID can see in 3D: it lights up your face, places 30,000 invisible infrared dots on it, and captures an image.

Likewise, a self-driving car (an autonomous or driverless car) uses a combination of sensors, cameras, radar, and artificial intelligence (AI) to travel between destinations without a human operator.

Similar to how a person identifies a face at a glance, machine learning can detect bias in an image set and identify whether it is being shown to users. For example, when a user searches for “bald man”, the algorithm may identify the search results as biased because 98% of the people who appear are male; it could then suggest alternatives so that the user does not face this issue.
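Here is a minimal sketch of that kind of skew check: count how often each attribute value appears in a result set and flag the set when one value dominates. The records and the 90% threshold are assumptions for illustration.

```python
# A sketch of a skew check: flag a result set when one attribute value
# dominates it. Records and the 90% threshold are assumptions.
from collections import Counter

def is_skewed(results: list[dict], attribute: str, threshold: float = 0.9) -> bool:
    """True if a single value of `attribute` covers >= threshold of results."""
    counts = Counter(record[attribute] for record in results)
    top_share = counts.most_common(1)[0][1] / len(results)
    return top_share >= threshold

# 98 of 100 results share one value, mirroring the example above.
results = [{"gender": "male"}] * 98 + [{"gender": "female"}] * 2
print(is_skewed(results, "gender"))  # -> True, so suggest alternatives
```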

Using AI to Remove Bias from Voice Recordings

Biases can also be found in voice recordings. A recorded voice is just a sequence of sounds, whether performed live into a microphone or replayed again and again.

Again, AI can be trained to identify and remove bias. Perhaps the most well-known example of this is the emotional speech recognition technology developed by Google DeepMind.

Google DeepMind studies how people talk when they are in an emotional state, such as anger. This allows the AI to distinguish between fake emotions and real emotions, and then to filter out information from users who are acting in an inappropriate manner.

This technology can also separate biased voices from authentic voices in voice recordings. It does not remove the voices of people who are angry, but it does remove the bias from them.
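Since the inner workings of such systems are not public, here is only a heavily simplified sketch of the general idea: compute crude prosodic features (loudness, variation) for an audio clip and flag clips that score above an “agitation” threshold for review rather than deleting them. The features, the synthetic clips, and the threshold are purely illustrative assumptions; real systems use learned models.

```python
# A heavily simplified sketch: score clips by crude prosodic features
# (energy and sample-to-sample variation) and flag high scorers for
# review instead of deleting them. The 0.3 threshold is an assumption.
import numpy as np

def agitation_score(signal: np.ndarray) -> float:
    """Crude proxy: louder, faster-changing audio scores higher."""
    energy = float(np.mean(signal ** 2))
    variation = float(np.std(np.diff(signal)))
    return energy + variation

sr = 16_000
t = np.linspace(0, 1, sr)
calm = 0.1 * np.sin(2 * np.pi * 150 * t)   # soft, steady tone
angry = 0.9 * np.sin(2 * np.pi * 300 * t)  # loud, fast-moving tone

for name, clip in {"calm": calm, "angry": angry}.items():
    status = "flag for review" if agitation_score(clip) > 0.3 else "keep"
    print(name, "->", status)  # calm -> keep, angry -> flag for review
```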

Using AI to Remove Bias from Criminal Justice?

Whether we are talking about the courtroom, where judges and jurors can hold implicit racial biases, or about the data sets from which AI learns, bias can play a role.

In the courtroom, researchers have found that Black defendants are perceived as older than white defendants with an identical face and hairstyle. This can affect how people perceive character traits like honesty, loyalty, and likability, and it can even sway jury decisions.

AI can help by removing these biases from courtroom evidence, recognizing that factors like race are irrelevant to a suspect’s guilt.

Imagine a robot judge… I’ll give you seven seconds.

Yes, you are right. When it comes to criminal justice, a robot has the potential to do more harm than good.

While bias on the basis of a person’s race, age, sex, and appearance is certainly unethical and needs to be stopped, there are other situations where the use of AI can be harmful.

For example, human emotions sometimes play a huge role in courtrooms, meaning that robots can sometimes make bad decisions because they do not understand the human factor. There are many instances where judges and juries have let emotion affect their judgment, such as when a jury sent an older woman to prison for killing her abusive husband.

This is only one aspect of decision-making; there are also biases based on a person’s socioeconomic status, education level, and more! Human emotions are good; they are part of what makes us human, but those same emotions can cause problems in human-robot interaction.

Using a robot judge (an algorithm that scans police camera footage for factors such as body language, voice, and other visual cues related to emotions, mental state, and personality) could have some benefits.

We should not let AI take over all of our decision-making processes and replace our human judges. Instead, robots should be used to detect patterns so that humans can examine those patterns themselves and decide what punishment is appropriate when they see them emerge in real-life situations.

But What if the AI Itself Is Biased?

AI is not programmed to be biased, and it analyzes information in a way we cannot fully comprehend or predict. Except, that is, for the recently notorious “racist and sexist” robot.

As part of a recent experiment, scientists asked specially programmed robots to scan blocks with people’s faces on them, then put the “criminal” in a box. The robots repeatedly chose a block with a Black man’s face.

Basically, it’s good to create a human-level AI, but the problem is that if it’s human-level, it will be biased.

You see, in reality, even if we train AI with algorithms to discount and remove bias, it is still influenced by the data it starts with.
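A small synthetic demonstration of why that is: even after the protected attribute is dropped from the training data, a correlated proxy (here a made-up zip code standing in for residential segregation) can still carry the same signal, so a model trained on the remaining columns stays biased. All the data below is fabricated for illustration.

```python
# Synthetic demonstration: drop the protected column, and a correlated
# proxy ("zip_code", standing in for residential segregation) still
# reveals group membership. All data here is made up.
import random

random.seed(0)
rows = []
for _ in range(1000):
    group = random.choice(["A", "B"])
    # Assumption for illustration: 90% of group A lives in zip 10001.
    p_10001 = 0.9 if group == "A" else 0.1
    zip_code = "10001" if random.random() < p_10001 else "20002"
    rows.append({"group": group, "zip_code": zip_code})

# A model never sees "group", but the proxy carries the same signal:
in_10001 = [r for r in rows if r["zip_code"] == "10001"]
share_a = sum(r["group"] == "A" for r in in_10001) / len(in_10001)
print(f"P(group = A | zip = 10001) ~ {share_a:.2f}")  # ~0.90, far from 0.50
```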

Tech companies like Google and Microsoft deploy new algorithms all the time. The algorithms that assess, sort, and rank results for search queries are also inherently biased; that is, they give priority to results that fit certain criteria. Google’s algorithms are not available to the public. However, the company shares best practices, which reveal the types of content its ranking is biased towards.

Now, a biased AI could be something much worse than a biased human being. What can we do?

I would like to bring up one of my favorite quotes about bias, from commentator Will Rogers: “I am not a member of any organized political party. I am a Democrat.”

He was referring to the fact that he was not formally affiliated with the Democratic Party but rather supported the values it held. Throughout the 1920s and 1930s, Rogers gained wide comedic popularity as a film star, radio show host, and popular writer.

We as humans should be able to recognize when we are being biased, and AI should never make us feel bad about our biases or fears.

First of all, AI should not be allowed to have complete control over removing bias. If AI starts to overreach there, we need to take action immediately.

Secondly, it’s important to understand that bias is real, it’s here, and it’s going to affect everyone. In order to recognize bias, we need to accept it and reevaluate our lives.
