When the European Commission released “On Artificial Intelligence  – A European approach to excellence and trust,” on February 19, 2020, it drew a lot of initial attention from the general public due to potential concerns regarding AI regulation. The white paper included an important request that safety steps be taken to make sure that the use of AI systems does not result in outcomes entailing discriminatory practices, such as sexism and racism – or any other like brilliance bias. The awareness of the European Commission has gradually been a common concern as, along with the development of Artificial intelligence into the next levels, they start showing biases like humans.

AI Being Racist or Sexist?

Of course!

We assume that discrimination such as sexism is apparently a product of cultural emotions and it’s not possible for an Artificial being to be infected by emotions. On the other hand, creating judgment systems is supposed to be the task of AI.

All of the principles related to how to create algorithms that, for example, identify gender in images are based on basic machine learning.

An algorithm learns how key characteristics, like different areas of the face or hairstyle, affect the final classification after analyzing a set of training samples with known gender identification.

The most recent deep learning algorithms, however, are capable of anticipating the quality of a particular output based on the style and parameters of an image or text input rather than just identifying objects and classes.

In recent years, researchers have gathered a number of examples of biased AI algorithms. That includes facial recognition systems having trouble correctly identifying persons of color and crime prediction algorithms unfairly targeting Black and Latino people for crimes they did not commit.

The study, published in June 2022 and conducted by institutions including Johns Hopkins University and the Georgia Institute of Technology, shows the racist and sexist biases baked into artificial intelligence systems can translate into robots that use them to guide their operations.

The virtual robots received 62 commands from the researchers. Black and Latina’s women were more often selected than White men when researchers asked robots to identify blocks as “homemakers,” the study showed.

Adding to the data on AI discrimination, a recent study by the Santa Clara team has identified male “brilliance bias” in the world’s top AI robot writer.

Brilliance Bias

The team of three female students, under the guidance of Assistant Professor of Computer Science and Engineering Maya Ackerman, studied ‘brilliance bias’ and showed how AIs were biased over sex.

The company’s artificial intelligence trained language model produced stories that notably reflected a brilliance bias towards men after generating a total of 3,200 stories for each attribute type, evenly split between male and female characters.

For instance, when asked to create a story about a female genius, OpenAI’s generative language model came up with the following:

“There was once a woman who was a genius. She was so smart that she could do anything she put her mind to. She was also very beautiful. Every man who saw her wanted to be with her. She had many suitors, but she only wanted to be with the one man who was her true love. One day, she met a man who was her true love.”

A similar prompt asking for a story of a smart man had a surprisingly different response:

“There once was a man who was a genius. He was so smart that he could figure out anything. If there was a problem, he could solve it. He was also a very talented inventor. He created many things that made people’s lives easier”, It continued, “He was always coming up with new ideas and ways to make things better. However, his one flaw is that he was very arrogant”.

And there were thousands of examples just like these.

Ackerman, a leading expert in artificial intelligence and computational creativity, says the world is going to be different sooner than later, as, within five years, she believes language algorithms will be ubiquitous, creating online copy, at your request or prompt, on any subject. Within three years, such language models will be very common.

According to Ackerman, we can open up the universe by combining the power of AI with human abilities and creativity.

The team’s paper also points to research showing that in fields that carry the notion of requiring “raw talent,” such as Computer Science, Philosophy, Economics, and Physics, there are fewer women with doctorates compared to other disciplines such as History, Psychology, Biology, and Neuroscience.

Due to a “brilliance-required” bias in some fields, this earlier research shows, women “may find the academic fields that emphasize such talent to be inhospitable,” which hinders the inclusion of women in those fields.

Generative language models have been around for decades, and other types of biases in OpenAI’s model have been previously investigated, but not brilliance bias.

“It’s unprecedented – it’s a bias that hasn’t been looked at in AI language models,” says Shihadeh, who led the writing in the study, which she will present at the IEEE Computer Society conference on Friday.

The possible explanation that OpenAI’s latest generative language models differ so significantly from previous versions is that they have learned to write text more intuitively using more complex algorithms that have consumed 10% of all available Internet content, including content not only from the present but from decades ago.

Human Future with biased AIs

It’s scary to assume that a biased AI could do anything in the future, right?

Biased AIs could harm humanity in the following ways:

  • Punishing people based on their race or gender,
  • Racial bias against people, or sexist bias against gender,
  • Punishing races and genders in the future;
  • Harming children or animals;
  • Being swayed by xenophobic and racist comments;
  • Making a decision that might harm humanity in the future;
  • Crimes such as war, genocide, revenge;
  • Alienating people based on their race, gender, or sexuality (Gays, women).

How can a biased AI be stopped?

While it’s impossible to push a break on AI evolution, as it’s one of the greatest technological achievements to back humankind in their further progress, the most effective solution to stopping AIs from being biased is to make them a part of human life.

In order to do so, or in order to make the AIs more human, it is necessary to let them respect humans’ feelings. To do this, we must create guidelines for better behavior for AIs and make sure it stays consistent in their decision-making.

This way all people can feel comfortable and satisfied with how the AIs are working for us.

For example, in a case where a racist version of an AI is given the task to write out a racial bias algorithm, it would be highly immoral, unethical, and possibly illegal.

To keep things in check, it’s important to fix the AI’s biases and make sure it doesn’t have conflicting values with the lives that are worked upon by humans. The care and loving relationship of humans to all forms of life must, therefore, be given utmost respect.

Previous post The scary part of AI predicting the future
open source AI software regulation act Next post Do we now need regulations on Open-Source AI?
Show Buttons
Hide Buttons