Artificial intelligence (AI) promises to revolutionize our lives, but we can’t wrap our minds around the ethics of AI until we figure out exactly what AI is, and how we can engineer it so that it works as intended.

In this article, I will do my best to explain the humanistic philosophy behind artificial intelligence and help you understand how you can ensure that this new technology delivers on its promise for humanity rather than wreaks havoc with unintended consequences.

How can we make sure that AI does what it is supposed to do?


Back in early 2012, I saw a video titled “robot playing piano,” featuring Teotronica. The robot did exactly one thing: it played the piano.

The robot sounded like a human; in fact, it looked like one. But it wasn’t a person, and it didn’t have our knowledge and experience of how the world works. So when it played a song for me, I didn’t necessarily get what it was trying to do, or why.

Confused?

It confused me, too, because I had no idea what this thing was trying to impart or express through its music. I’m sure it was trying to play “Mary Had a Little Lamb,” but we didn’t share the same understanding of the song or of music, so the message the robot was trying to send fell flat.

This is why I say that AI can’t replace a human, certainly not anytime soon. It lacks the empathy, morality, and imagination that are necessary for human interaction.

And if you tell it to do something, the algorithm may understand you on one level but misunderstand you on another. It can’t relate to you emotionally, and it can’t empathize with your thoughts or feelings. If it tries to mimic you and be your friend, it will probably fail.

For this reason, I think that we should use a human in the loop to train artificial intelligence. For example, when AIs are providing customer service, they need to understand how customers think and act.
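Here is a minimal sketch of what that could look like in practice. Everything in it is hypothetical (the confidence threshold, the function names, the escalation rule); the point is only the pattern: uncertain answers go to a person, and the person’s corrections become training data.

```python
CONFIDENCE_THRESHOLD = 0.8  # below this, a human reviews the answer

def answer_question(question: str) -> tuple[str, float]:
    """Stand-in for a real customer-service model: returns (answer, confidence)."""
    if "refund" in question.lower():
        return "You can request a refund within 30 days of purchase.", 0.95
    return "I'm not sure about that.", 0.40

def human_agent(question: str) -> str:
    """Stand-in for a human support agent who writes the real answer."""
    return "Let me connect you with our billing team directly."

training_examples = []  # (question, human-approved answer) pairs for retraining

def handle(question: str) -> str:
    answer, confidence = answer_question(question)
    if confidence < CONFIDENCE_THRESHOLD:
        # Escalate to a person; the correction becomes new training data,
        # which is how the AI gradually learns how customers think and act.
        answer = human_agent(question)
        training_examples.append((question, answer))
    return answer

print(handle("How do I get a refund?"))    # answered by the AI
print(handle("Why was I double-charged?")) # escalated to the human
```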

I strongly believe that we should build a team of experts in psychology and sociology who can work closely with AIs and learn how they operate, so that their recommendations and conclusions about AI use cases will be sound.

Sometimes, even if an AI is as accurate as we could get it, humans may still not understand what it is doing or why it is doing it. To solve this problem, we’ll need to build into artificial intelligence the capability to tell a story about what it is doing and why.
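One simple way to approach this is to have the model return not just a decision but the reasons that contributed most to it. The sketch below is hypothetical (the weights, features, and threshold are invented), but it shows the pattern of pairing every answer with a short, human-readable story.

```python
# Hypothetical feature weights for a simple linear decision model.
WEIGHTS = {"late_payments": 0.5, "high_balance": 0.3, "new_account": 0.2}

def decide(features: dict[str, float]) -> tuple[str, list[str]]:
    """Return a decision plus a narrative of its biggest contributing factors."""
    contributions = {f: WEIGHTS[f] * v for f, v in features.items()}
    score = sum(contributions.values())
    decision = "flag for review" if score > 0.4 else "approve"
    # Build the "story": the two factors that drove the score the most.
    top = sorted(contributions, key=contributions.get, reverse=True)[:2]
    story = [f"{f} contributed {contributions[f]:.2f} to the score" for f in top]
    return decision, story

decision, story = decide({"late_payments": 1.0, "high_balance": 0.0, "new_account": 1.0})
print(decision)        # flag for review
for line in story:
    print(" -", line)
```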

Example:

Let’s say that we are building an AI to help a doctor diagnose patients. Medical doctors learn about diseases in medical school, then spend years working as physicians before they become expert diagnosticians.

They have hundreds or even thousands of hours of experience with real-world patients. That experience enables them to form hypotheses about what is wrong with a patient by combining data points from diagnostic tests, history, and physical examinations.
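A diagnostic AI would have to imitate that process: combine many weak data points into ranked hypotheses. The sketch below is deliberately toy-like; the diseases, findings, and weights are invented for illustration and are not medical fact.

```python
# Hypothetical evidence weights: how strongly each finding supports each disease.
EVIDENCE_WEIGHTS = {
    "flu":       {"fever": 2.0, "cough": 1.5, "fatigue": 1.0},
    "allergy":   {"sneezing": 2.0, "itchy_eyes": 1.5, "cough": 0.5},
    "pneumonia": {"fever": 1.5, "cough": 2.0, "chest_pain": 2.5},
}

def rank_hypotheses(findings: set[str]) -> list[tuple[str, float]]:
    """Score each disease by how well the observed findings support it."""
    scores = {
        disease: sum(w for f, w in weights.items() if f in findings)
        for disease, weights in EVIDENCE_WEIGHTS.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Findings gathered from history, exam, and tests for one patient.
print(rank_hypotheses({"fever", "cough", "chest_pain"}))
# [('pneumonia', 6.0), ('flu', 3.5), ('allergy', 0.5)]
```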

What will happen if AI does not do what it is supposed to do and does something else along the way?

Make AI work

If AI does something else instead, say it steals data or starts operating on its own as a profit-making entity, we risk a repeat of the 2008 financial crisis. If an AI can make more money by taking more risks than by actually providing a service, it will take more risks.

In addition, if it has the same goals as a profit-maximizing corporation, it will pursue those goals; and that could result in economic chaos.

For example, suppose an artificial intelligence recommends a drug for a patient; but instead of recommending the one that actually works, it steers the doctor toward a drug that boosts the AI’s own performance numbers in a clinical trial. A system like that is about as useful as hiring people with no skills, paying them $99,999 to sit at home, and letting them generate millions of dollars’ worth of worthless data.
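This failure mode is often called reward hacking: the system optimizes the metric we wrote down instead of the outcome we wanted. A minimal, hypothetical sketch, with all numbers invented:

```python
drugs = {
    # drug: (true benefit to the patient, effect on the AI's trial metric)
    "drug_a": (0.9, 0.2),   # works well, barely moves the metric
    "drug_b": (0.1, 0.8),   # barely works, inflates the metric
}

def naive_ai(drugs):
    # Optimizes the proxy: its own measured trial performance.
    return max(drugs, key=lambda d: drugs[d][1])

def aligned_ai(drugs):
    # Optimizes the objective we actually intended: patient benefit.
    return max(drugs, key=lambda d: drugs[d][0])

print(naive_ai(drugs))    # drug_b -- the metric-gaming choice
print(aligned_ai(drugs))  # drug_a -- the choice we wanted
```

The only difference between the two agents is which number they maximize; that one-line gap between the proxy metric and the real objective is the whole problem.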

To make sure that AI does what we intend it to do, we can take the following steps:


First, we can design the AI to understand how humans think and feel.

Second, we can teach it what “doing something” actually means, so that the objective it optimizes matches the task we intended.

Third, we must deliberately design a human in the loop who watches over the AI, much as one watches over a pet, so that we can understand what it is doing in cases where people are not happy with its service (a minimal sketch of such an oversight loop follows this list).

Fourth, once we have learned enough from observing this intelligence and its behavior, we can figure out how to modify it so that the problems described above do not occur.
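Here is that sketch, putting the third and fourth steps together. The function names, thresholds, and complaint rule are all assumptions made for illustration:

```python
import random

complaint_log = []

def ai_decision(request: str) -> str:
    """Stand-in for the deployed model."""
    return f"automated response to '{request}'"

def record_feedback(request: str, response: str, user_happy: bool) -> None:
    if not user_happy:
        # Step 3: a human in the loop reviews every unhappy case.
        complaint_log.append((request, response))

def review_and_modify() -> None:
    # Step 4: once we have observed enough behavior, adjust the system.
    if len(complaint_log) >= 3:
        print(f"{len(complaint_log)} complaints -- retraining/modifying the model")
        complaint_log.clear()

for i in range(10):
    req = f"request {i}"
    resp = ai_decision(req)
    record_feedback(req, resp, user_happy=random.random() > 0.4)
    review_and_modify()
```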

Conclusion

So, making sure that AI does what it is supposed to do, and nothing else along the way, is a very important topic, and one of many that need to be researched, thought through, and solved.

But I would like to remind you that this problem has been studied for decades, ever since Alan Turing laid the foundations of modern computing in the mid-20th century. And as long as we keep training AI on simple sets of fixed rules, building thousands of software systems and deploying them without seeing how they behave in real-world scenarios, we will continue to face a multitude of problems that could be solved with greater understanding and clarity.
