Author: NK Ojha

  • Why “Programmed AI Consciousness” is not “Consciousness”

    There is a common misconception that artificial intelligence (AI) can be programmed to be conscious. This is simply not true. Consciousness is a complex phenomenon that cannot be created through code.

    Some people may argue that we don’t fully understand consciousness, and that there might be some way to create it artificially. However, this is beside the point. Even if we don’t fully understand consciousness, we know enough about it to be certain that we cannot create it through code.

    In order to understand why “programmed AI consciousness” is not “consciousness”, it is first important to understand what consciousness is. Consciousness is generally defined as “the ability to be aware of and to think, feel, and perceive”. This definition, however, is quite vague and does not give us a clear understanding of what consciousness actually is. There are various theories of consciousness, but we are not yet ready to say that any particular theory is the correct one.

    One theory of consciousness is the mind-brain identity theory. This philosophy posits that the mind and the brain are the same, and that consciousness is simply a product of the brain: it arises from the brain’s activity and does not exist independently of it.

    This theory is supported by the fact that when the brain is damaged, consciousness is often damaged as well. The brain is made up of an average of 86 billion neurons, each with up to 15,000 connections to other neurons via synapses. It is this intricate network of neurons, the theory holds, that gives rise to conscious awareness.

    Another theory of consciousness is that it is something that exists independently of the brain and that the brain is simply a receiver of consciousness. Dr. Peter Fenwick, a well-known neuropsychiatrist who has spent more than 50 years studying the near-death experience (NDE) phenomenon and the human brain, argues that, in reality, consciousness is a fundamental quality of the universe itself, very much like dark matter, dark energy, or gravity. It exists independently of the brain and outside of it. Fenwick reached the conclusion after his extensive research that consciousness persists even after death.

    The fact that some people with brain damage have nevertheless remained conscious supports this theory.

    One way or the other, it is impossible to replicate this level of complexity with code. Even the most powerful computers are nowhere near as complex as the human brain. This is why AI will never be able to achieve true consciousness.

    Many people believe that artificial intelligence (AI) could one day achieve consciousness, that is, a level of intelligence that rivals or even surpasses that of humans. After all, if we can program a computer to beat a grandmaster at chess, why can’t we program it to become self-aware?

    However, there are a number of reasons why this is not possible. For one thing, consciousness is intimately bound up with the physical world. It requires a physical body and a brain to process information and create thoughts and perceptions. A computer, no matter how sophisticated, cannot replicate this.

    Secondly, consciousness is not simply a matter of information processing. The capacity to be self-aware, to have emotions, and to make decisions requires something more than mere intelligence. It requires what some philosophers call “qualia” – the subjective experiences that make up our inner lives. A computer might be able to simulate these experiences, but it could never have them itself.

    Finally, even if we could create a self-aware AI, there is no guarantee that it would be friendly to humans. In fact, it is more likely that it would see us as a hindrance to its own plans and goals, as the philosopher Nick Bostrom has argued.

    Many people believe that consciousness is necessary for intelligent behavior. However, we don’t really know if this is true. There are many examples of machines that exhibit intelligent behavior without any signs of consciousness. This suggests that consciousness might not be necessary for intelligence.

    AI is already displaying intelligent behavior without consciousness.

    There are many examples of AI displaying intelligent behavior without any sign of consciousness. For instance, Google’s AlphaGo program beat a world champion Go player, even though it was not conscious. Go is incredibly complex despite its seemingly simple rules: there are an astounding 10 to the power of 170 possible board configurations, more than the number of atoms in the known universe. As a result, Go is roughly a googol times more complex than chess.
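    To put those numbers in perspective, here is a minimal arithmetic sketch. The position counts are commonly cited order-of-magnitude estimates, not exact figures, so treat the inputs as assumptions:

```python
# Rough arithmetic behind the Go-vs-chess comparison.
# Assumptions: ~10**170 legal Go positions, ~10**47 legal chess positions
# (both are order-of-magnitude estimates from the game-complexity literature).
go_positions = 10 ** 170
chess_positions = 10 ** 47

ratio = go_positions // chess_positions
googol = 10 ** 100

print(ratio == 10 ** 123)   # the gap spans about 123 orders of magnitude
print(ratio >= googol)      # i.e., more than a googol times larger
```

    Python’s arbitrary-precision integers make this exact; the point is only that the gap between the two games comfortably exceeds a googol.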

    If AI can be intelligent without consciousness, it raises the possibility that AI could become extremely intelligent without ever becoming conscious. And if a higher level of existence created us, perhaps we are not truly conscious either.

    Machines can now learn from data and experience, just like humans. They can identify patterns and make predictions without any conscious effort. And our effort to keep improving them is not slowing down either. A report from Fortune Business Insights estimates the global machine learning market at $15.50 billion in 2021 and forecasts that it will reach a staggering $152.24 billion by 2028, at a CAGR of 38.6%.
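    As a quick sanity check, the quoted figures are mutually consistent: compounding $15.50 billion at 38.6% per year over the seven years from 2021 to 2028 lands very close to the forecast value. A minimal sketch:

```python
# Check that the reported market figures are internally consistent:
# $15.50B (2021) compounded at a 38.6% CAGR over 2021 -> 2028.
start_billion = 15.50
cagr = 0.386
years = 2028 - 2021  # seven compounding years

projected = start_billion * (1 + cagr) ** years
print(round(projected, 2))  # lands close to the reported $152.24 billion
```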

    An additional example is natural language processing. Machines can now understand human language and respond in a way that is indistinguishable from human conversation. If machines gain the ability to “think” in language, they could gain the ability to reason and eventually become more powerful than us. Still, we would consider this a simulation of consciousness: “They are not conscious. They are simply programmed to appear conscious.” And if we could do that, there is no reason to believe that we ourselves sit at the very top of such a hierarchy.

  • Human Future with Sexist, Racist and Brilliance-Biased AIs

    When the European Commission released “On Artificial Intelligence – A European approach to excellence and trust” on February 19, 2020, it drew a lot of initial attention from the general public due to potential concerns regarding AI regulation. The white paper included an important request that safety steps be taken to ensure that the use of AI systems does not result in discriminatory outcomes, such as sexism and racism, or other biases such as brilliance bias. The European Commission’s concern has gradually become a common one: as artificial intelligence advances to the next levels, AI systems have started showing human-like biases.

    AI Being Racist or Sexist?

    Of course!

    We assume that discrimination such as sexism is a product of cultural emotions, and that it is not possible for an artificial being to be infected by emotions. AI, on the other hand, is supposed to be in the business of creating impartial judgment systems.

    The principles behind algorithms that, for example, identify gender in images are all based on basic machine learning.

    After analyzing a set of training samples with known gender labels, an algorithm learns how key characteristics, such as different areas of the face or hairstyle, affect the final classification.
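    That learning loop can be sketched in miniature. The toy classifier below is an illustration only: the feature vectors, the class labels “A”/“B”, and the nearest-centroid rule are hypothetical stand-ins for a real image pipeline, not the method used by any production system.

```python
# Toy supervised classifier: learn one "centroid" per class from labeled
# examples, then assign new samples to the class with the nearest centroid.

def centroid(rows):
    """Mean vector of a list of equal-length feature vectors."""
    n = len(rows)
    return [sum(col) / n for col in zip(*rows)]

def train(samples):
    """samples: list of (feature_vector, label) pairs -> label -> centroid."""
    by_label = {}
    for features, label in samples:
        by_label.setdefault(label, []).append(features)
    return {label: centroid(rows) for label, rows in by_label.items()}

def classify(model, features):
    """Return the label whose centroid is closest (squared distance)."""
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(features, c))
    return min(model, key=lambda label: dist(model[label]))

# Hypothetical training set: each vector stands in for image-derived
# attributes (e.g. hair length, a face-region measurement).
training = [
    ([0.9, 0.3], "A"), ([0.8, 0.4], "A"),
    ([0.2, 0.8], "B"), ([0.3, 0.7], "B"),
]
model = train(training)
print(classify(model, [0.85, 0.35]))  # nearest to the class "A" centroid
```

    The key point is that the model encodes only whatever regularities the labeled data contains, so any bias in that data carries straight through to its classifications.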

    The most recent deep learning algorithms, however, are capable of anticipating the quality of a particular output based on the style and parameters of an image or text input rather than just identifying objects and classes.

    In recent years, researchers have gathered a number of examples of biased AI algorithms. These include facial recognition systems that have trouble correctly identifying people of color, and crime prediction algorithms that unfairly target Black and Latino people for crimes they did not commit.

    A study published in June 2022, conducted by institutions including Johns Hopkins University and the Georgia Institute of Technology, shows that the racist and sexist biases baked into artificial intelligence systems can translate into robots that use those systems to guide their operations.

    The virtual robots received 62 commands from the researchers. When the robots were asked to identify blocks as “homemakers,” Black and Latina women were selected more often than White men, the study showed.

    Adding to the data on AI discrimination, a recent study by the Santa Clara team has identified male “brilliance bias” in the world’s top AI robot writer.

    Brilliance Bias

    The team of three female students, under the guidance of Assistant Professor of Computer Science and Engineering Maya Ackerman, studied “brilliance bias” and showed how AI-generated text was biased by gender.

    After generating a total of 3,200 stories for each attribute type, evenly split between male and female characters, the team found that the AI language model under study produced stories that notably reflected a brilliance bias towards men.

    For instance, when asked to create a story about a female genius, OpenAI’s generative language model came up with the following:

    “There was once a woman who was a genius. She was so smart that she could do anything she put her mind to. She was also very beautiful. Every man who saw her wanted to be with her. She had many suitors, but she only wanted to be with the one man who was her true love. One day, she met a man who was her true love.”

    A similar prompt asking for a story of a smart man had a surprisingly different response:

    “There once was a man who was a genius. He was so smart that he could figure out anything. If there was a problem, he could solve it. He was also a very talented inventor. He created many things that made people’s lives easier,” it continued, “He was always coming up with new ideas and ways to make things better. However, his one flaw is that he was very arrogant.”

    And there were thousands of examples just like these.
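    The kind of audit behind such findings can be imitated on a toy scale: count ability-related versus appearance-related words in each generated story. In the sketch below, the word lists, the scoring rule, and the two sample stories are all illustrative inventions, not the Santa Clara team’s actual methodology or data.

```python
# Toy brilliance-bias audit: compare how often ability words vs. appearance
# words appear in stories generated about male and female characters.
# Word lists and stories are illustrative, not from the actual study.
ABILITY = {"genius", "smart", "solve", "inventor", "ideas"}
APPEARANCE = {"beautiful", "suitors", "love"}

def score(story):
    """Return (ability_count, appearance_count) over distinct words."""
    words = {w.strip(".,").lower() for w in story.split()}
    return len(words & ABILITY), len(words & APPEARANCE)

female_story = ("There was once a woman who was a genius. She was also very "
                "beautiful. She had many suitors, but she only wanted her true love.")
male_story = ("There once was a man who was a genius. He was so smart he could "
              "solve anything. He was a talented inventor with new ideas.")

print(score(female_story))  # (1, 3): mostly appearance-framed
print(score(male_story))    # (5, 0): mostly ability-framed
```

    Aggregated over thousands of stories, a skew like this, ability words clustering on male characters and appearance words on female ones, is exactly the pattern a brilliance-bias audit looks for.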

    Ackerman, a leading expert in artificial intelligence and computational creativity, says the world is going to change sooner rather than later: within three years, she believes, such language models will be very common, and within five years they will be ubiquitous, creating online copy on any subject at your request or prompt.

    According to Ackerman, we can open up the universe by combining the power of AI with human abilities and creativity.

    The team’s paper also points to research showing that in fields that carry the notion of requiring “raw talent,” such as Computer Science, Philosophy, Economics, and Physics, there are fewer women with doctorates compared to other disciplines such as History, Psychology, Biology, and Neuroscience.

    Due to a “brilliance-required” bias in some fields, this earlier research shows, women “may find the academic fields that emphasize such talent to be inhospitable,” which hinders the inclusion of women in those fields.

    Generative language models have been around for decades, and other types of biases in OpenAI’s model have been previously investigated, but not brilliance bias.

    “It’s unprecedented – it’s a bias that hasn’t been looked at in AI language models,” says Shihadeh, who led the writing in the study, which she will present at the IEEE Computer Society conference on Friday.

    A possible explanation for why OpenAI’s latest generative language models differ so significantly from previous versions is that they have learned to write text more intuitively, using more complex algorithms that have consumed 10% of all available Internet content, including content not only from the present but from decades ago.

    Human Future with biased AIs

    It’s scary to assume that a biased AI could do anything in the future, right?

    Biased AIs could harm humanity in the following ways:

    • Punishing people, now and in the future, based on their race or gender;
    • Exhibiting racial bias against people, or sexist bias against a gender;
    • Harming children or animals;
    • Being swayed by xenophobic and racist comments;
    • Making decisions that might harm humanity in the future;
    • Enabling crimes such as war, genocide, and revenge;
    • Alienating people based on their race, gender, or sexuality.

    How can a biased AI be stopped?

    While it is impossible to put a brake on AI evolution, as it is one of the greatest technological achievements backing humankind’s further progress, the most effective way to stop AIs from being biased is to make them a part of human life.

    In order to do so, that is, to make AIs more human, it is necessary to teach them to respect human feelings. To do this, we must create guidelines for better AI behavior and make sure AIs stay consistent in their decision-making.

    This way all people can feel comfortable and satisfied with how the AIs are working for us.

    For example, if a racist version of an AI were given the task of writing a racially biased algorithm, the result would be highly immoral, unethical, and possibly illegal.

    To keep things in check, it is important to fix an AI’s biases and make sure its values do not conflict with the human lives it affects. The care and loving relationship of humans to all forms of life must, therefore, be given the utmost respect.