AI Can Help Reduce Humans' Biased Point of View
Main points of the article:
- AI is not biased on its own; it does not judge people by characteristics such as race, religion, appearance, or sexuality. Its algorithms can be designed to adjust for such data.
- We don’t have to fear AI taking over our lives, but we do need to be aware of its capabilities and follow best practices in order to avoid problems.
- AI can also help us identify fraudulent transactions, but bad actors can abuse it for malicious ends.
The human point of view is biased, and drastic bias sometimes causes unfairness and discrimination. AI, then, can demonstrate its “fairness” by helping remove that bias.
A better understanding of the phenomenon of bias in AI reveals, however, that AI merely exposes and amplifies implicit biases that already existed, but were overlooked or misunderstood.
AI itself is unaffected by biases related to race, gender, and age, and it is resistant to the cognitive biases and inconsistencies that trouble humans. The only reason we observe bias in AI at all is that humans occasionally train it on biased data and build heuristic flaws into it.
Since these biases were discovered, all of the major technology companies have been working to improve datasets and eliminate bias. One way to eliminate bias in AI is by using AI! If that seems unlikely, read on.
Using AI to Eliminate Bias in Hiring
The classic example can be found in job opportunities. Across the spectrum of the most-coveted employment opportunities, women and people of color are notoriously under-represented.
The phenomenon is self-perpetuating, as new hires become senior leaders, and they become responsible for hiring. Affinity bias ensures that “people like me” continue to get hired, while attribution bias justifies those choices on the basis of past hires’ performance.
But when AI is given a bigger role in recruiting, this can change. Tools like Textio, Gender Decoder, and Ongig use AI to scrutinize job descriptions for hidden biases around gender and other characteristics.
For example, racial bias can be present (knowingly or unknowingly) in every aspect of the hiring process, from job descriptions to resume screening to interviewing. To reduce it, some companies use tools like Ongig’s Job Description Text Analyzer, which quickly scans for “exclusionary phrases” tied to a reader’s ethnicity, primary or secondary language, and immigration status.
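The scanning step above can be sketched as a simple phrase lookup. This is a toy illustration in the spirit of such tools, not Ongig’s actual method; the phrase list and category labels are invented for the example.

```python
# Toy job-description scanner: flag phrases that research has linked to
# excluding certain readers. The lexicon below is an illustrative
# assumption, not any vendor's real phrase list.

EXCLUSIONARY_PHRASES = {
    "ninja": "gender-coded",
    "rockstar": "gender-coded",
    "aggressive": "gender-coded",
    "native english speaker": "language/immigration",
    "digital native": "age-coded",
}

def scan_job_description(text: str) -> list[tuple[str, str]]:
    """Return (phrase, category) pairs found in the text, case-insensitively."""
    lowered = text.lower()
    return [(phrase, category)
            for phrase, category in EXCLUSIONARY_PHRASES.items()
            if phrase in lowered]

flags = scan_job_description(
    "We need an aggressive rockstar developer; native English speaker preferred."
)
for phrase, category in flags:
    print(f"flagged: {phrase!r} ({category})")
```

A production tool would use a far richer model than substring matching, but the workflow is the same: scan the text, surface the flagged phrases, and let a human rewrite them.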
In order to enable hiring managers to focus primarily on candidates’ qualifications and experiences, companies like Knockri, Ceridian, and Gapjumpers apply AI to erase or ignore factors that identify gender, national origin, skin color, and age.
By assessing candidates’ soft skills realistically, or by altering a candidate’s voice in phone interviews to mask their gender, several of these methods help reduce recency bias, affinity bias, and gender bias in the interview process.
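At its simplest, the “erase or ignore identifying factors” idea is a redaction step applied before a reviewer sees the profile. The sketch below assumes a hypothetical candidate record with invented field names; real systems work on resumes, audio, and video, not a tidy dictionary.

```python
# Minimal sketch of "blind" candidate screening: strip fields that could
# reveal gender, national origin, or age before a hiring manager sees
# the profile. Field names here are hypothetical.

IDENTIFYING_FIELDS = {"name", "photo_url", "date_of_birth", "nationality"}

def anonymize(candidate: dict) -> dict:
    """Return a copy of the profile with identifying fields removed."""
    return {k: v for k, v in candidate.items() if k not in IDENTIFYING_FIELDS}

profile = {
    "name": "Jane Doe",
    "date_of_birth": "1990-04-01",
    "nationality": "US",
    "skills": ["Python", "SQL"],
    "years_experience": 8,
}
print(anonymize(profile))  # only skills and experience remain
```

The design point is that the redaction happens upstream of the human decision, so affinity bias has nothing to latch onto.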
Using AI to Remove Bias from Images
Bias can also come from an image itself and its background. In this case, AI can be trained to identify and remove the bias. The images do not need to be 100% bias-free, because we only give the algorithm a few key characteristics of each image.
The use of AI in identifying bias comes in two forms: machine learning and computer vision.
Machine learning algorithms are programs that learn from data rather than being pre-programmed with specific rules about which features should be considered important or unimportant.
Computer vision algorithms have a variety of applications, ranging from facial recognition technology used in Apple’s iPhone X to advanced navigation systems used in self-driving cars.
When your device unlocks using biometrics such as Face ID, it is using artificial intelligence to enable that functionality. Apple’s Face ID can see in 3D: it lights up your face, places 30,000 invisible infrared dots on it, and captures an image.
Likewise, a self-driving car, or an autonomous or driverless car, uses a combination of sensors, cameras, radar and artificial intelligence (AI) to travel between destinations without a human operator.
Similar to how a person identifies a face at a glance, machine learning can detect bias in an image and flag it before it is shown to users. For example, when a user searches for “bald man” and 98% of the people in the results are male, the algorithm could identify the results as skewed and suggest alternatives so that the user does not face this issue.
Using AI to Remove Bias from Voice Recordings
Biases can also be found in voice recordings. A recorded voice is just a sequence of sounds, captured at the microphone and replayed.
Again, AI can be trained to identify and remove bias. Perhaps the most well-known example of this is the emotional speech recognition technology developed by Google DeepMind.
Google DeepMind studies how people talk when they are in an emotional state, such as anger. The tool allows AI to distinguish fake emotions from real ones, and it can then filter out input from users who are acting inappropriately.
This technology also can filter out biased voices from authentic voices in voice recordings. The technology does not remove the voice of people who are angry, but it does remove the bias from it.
Using AI to Remove Bias from Criminal Justice?
Whether we are talking about the courtroom, where judges and jurors can hold implicit racial biases, or about a dataset from which AI learns, bias can play a role.
In the courtroom, researchers have found that black defendants are perceived as older than white defendants with an identical face and hairstyle. This can affect how people perceive character traits like honesty, loyalty, and likability — and even sway jury decisions.
AI can help by removing these biases from courtroom evidence by recognizing that factors like race are irrelevant to a suspect’s guilt.
Imagine a robot judge…I give you 7 seconds.
Yes, you are right. When it comes to criminal justice, a robot has the potential to do more harm than good.
While bias on the basis of a person’s race, age, sex, and appearance is certainly unethical and needs to be stopped, there are other situations where the use of AI can itself be harmful.
For example, human emotions sometimes play a huge role in courtrooms, meaning that robots can sometimes make bad decisions because they do not understand the human factor. There are many instances where judges and juries have let emotion affect their judgment, like when a jury sent an older woman to prison for killing her abusive husband.
This is only one aspect of decision-making; there is also the issue of biases based on a person’s socioeconomic status, education level and more! Human emotions are good; they are part of what makes us human, but those same emotions can cause problems with human-robot interaction.
Using a robot judge — an algorithm that scans police camera footage for different factors, such as body language, voice, and other visual cues that relate to emotions, mental state and personality — could have some benefits.
We should not let AI take over all of our decision-making processes and replace our human judges. Instead, the robots should be used to detect patterns so that humans can look at these patterns themselves and decide what punishment is appropriate when they see them emerge in real life situations.
But What if the AI Itself Is Biased?
AI is not programmed to be biased, and it analyzes information in ways we cannot fully comprehend or predict. The exception: the recently publicized “racist and sexist robot.”
As part of a recent experiment, scientists asked specially programmed robots to scan blocks with peoples’ faces on them, then put the “criminal” in a box. The robots repeatedly chose a block with a Black man’s face.
Basically, creating a human-level AI is good, but the problem is that if it is human-level, it will be biased.
You see, in reality, even if we just train AI with algorithms to discount and remove bias, they are still influenced by the data they start with.
Tech companies like Google and Microsoft commonly deploy new algorithms. The algorithms that assess, sort, and rank results for search queries are also inherently biased; that is, they give priority to results that fit certain criteria. Google’s algorithms are not available to the public, but the company shares best practices, which reveal the types of content they favor.
Now, a biased AI would mean something much worse than a biased human being. What can we do?
I would like to bring up one of my favorite quotes about bias by commentator Will Rogers, “I am not a member of any organized political party — I am a Democrat”.
He was joking that the Democratic Party, to which he belonged, was too disorganized to count as an organized party. Throughout the 1920s and 1930s, Rogers gained wide comedic popularity as a film star, radio show host, and popular writer.
We as humans should be able to recognize when we are being biased — and AI should never make us feel bad about our biases or fears.
First of all, AI should not be allowed to have complete control over the task of “removing bias.” If AI starts to overstep there, we need to take action immediately.
Secondly, it’s important to understand that bias is real, it’s here, and it’s going to affect everyone. In order to recognize bias, we need to accept it and reevaluate our lives.