The mind is an incredibly complex entity, so much so that even the most powerful intelligence in existence (the human brain) isn't fully understood. It remains a feat of immense complexity and is still the most powerful intelligence we know of. So, what would happen if we tried to use our understanding of the human brain (or even some aspects of it) to model an artificial intelligence (AI) system that could approximate how a normal person behaves?
That’s the question this post is going to explore. Hopefully, it will help you make an informed decision about whether trying to emulate a human mind for your own AI system is something you want to pursue or avoid.
First, what does modeling an AI system on the human brain actually mean?
It's important to understand that modeling an AI system on the human brain doesn't mean you're trying to turn your AI system into a human (although it might make it appear more human-like).
You should keep in mind that the brain isn't really a computer. It's more like a series of incredibly complex circuits and algorithms that respond to inputs. It's far beyond any technology we could conceivably build, so attempting to replicate that process exactly would be futile.
Rather, modeling an AI system on the brain means using the brain's circuits, algorithms, and structure as a guide or template. By emulating some of the processes involved in the brain's operation, you can emulate some of the processes involved in human thought.
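To make the "template, not copy" idea concrete, consider the classic example of borrowing the brain's structure: the artificial neuron, a drastically simplified model of a biological neuron that sums weighted inputs and "fires" when a threshold is crossed. A minimal sketch (the weights and threshold here are arbitrary illustrative values, not anything measured from a real brain):

```python
def neuron(inputs, weights, threshold):
    """A crude model of a biological neuron: it 'fires' (returns 1)
    when the weighted sum of its inputs reaches a threshold."""
    activation = sum(x * w for x, w in zip(inputs, weights))
    return 1 if activation >= threshold else 0

# Two input signals, with arbitrary example weights and threshold.
print(neuron([1, 0], [0.6, 0.4], 0.5))  # fires: 0.6 >= 0.5
print(neuron([0, 1], [0.6, 0.4], 0.5))  # stays silent: 0.4 < 0.5
```

Nobody claims this *is* how the brain works; the point is that a structural idea borrowed from neuroscience (many simple units, weighted connections, thresholds) becomes a usable engineering building block.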
That's why a lot of people think that modeling an AI on how we think is a good idea, both from an intellectual-curiosity standpoint and from an AI-development standpoint. They don't want to create a machine intelligence that is entirely alien to us; they want their system to exhibit the same kind of intelligence that humans exhibit when they're thinking.
Douglas Hofstadter's idea of modeling an AI on the human brain
The most famous proponent of implementing an AI with a human-like model is certainly Douglas Hofstadter. His book, I Am a Strange Loop, is one of the most well-known books in this field.
His idea isn't necessarily to copy how our brain works, but rather to try to understand and emulate the system that gives our thoughts their form and structure. He argues that understanding the mechanisms behind our thinking is likely more beneficial than simply building an AI that imitates the brain's outward behavior.
At the very least, he argues that trying to understand our own thinking processes is a lot more interesting than simply making an AI system that behaves similarly.
The advantage of an AI system modeled on the brain is that you can isolate, characterize, and reproduce its individual processes. A system built by replicating the structures and functions of the human brain would be a lot easier to debug and verify than an opaque AI system whose inner workings bear no resemblance to how we think.
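The debugging advantage comes from modularity: if the system is built from identifiable processes, each one can be tested in isolation. As a sketch, here is a hypothetical, loosely brain-inspired "working memory" component (the class name, capacity, and behavior are invented for illustration; the default of seven nods to the well-known "seven plus or minus two" estimate of human working-memory span):

```python
class ShortTermMemory:
    """A hypothetical, brain-inspired component: a bounded buffer that,
    like human working memory, holds only the most recent items."""
    def __init__(self, capacity=7):  # "seven plus or minus two"
        self.capacity = capacity
        self.items = []

    def store(self, item):
        self.items.append(item)
        if len(self.items) > self.capacity:
            self.items.pop(0)  # the oldest item is forgotten first

    def recall(self):
        return list(self.items)

# Because the component is isolated, its behavior is easy to verify.
memory = ShortTermMemory(capacity=3)
for word in ["cat", "dog", "bird", "fish"]:
    memory.store(word)
print(memory.recall())  # ['dog', 'bird', 'fish'] -- "cat" was forgotten
```

A monolithic black-box system offers no such seam: you can only test its end-to-end behavior, not the individual processes inside it.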
In fact, understanding how our brain works may even help us discover new ways to improve it. We might be able to use our understanding of how we think in creating better AIs.
What problems might you run into if you attempted to get an AI system modeled on the human brain?
- The first major problem you run into is ethical. The human brain isn't something that can be replicated with today's technology; there's no way we could recreate it without adverse effects. And even if we had the technical capability, replicating it in any fashion would raise serious ethical problems.
As I mentioned, the human brain isn't easy to understand or replicate (it's one of the most complex things in existence). We might try to get around that obstacle by simply cloning a human brain, but it seems unlikely anyone could actually pull that off, even if it isn't strictly impossible.
- The other problem is that it's unlikely you'll know how to model a human brain in a way that won't result in failures. Part of what makes the brain so complex is that it's constantly changing and adapting. Because of this, it would be incredibly difficult to create a system that accurately reflects how the brain works, especially given how poorly we understand the brain in the first place.
- Lastly, modeling the human brain correctly is also extremely expensive, precisely because faithfully replicating how the brain functions is close to impossible. It's much cheaper to build a conventional AI system, which we know how to engineer properly, and then let it learn and develop on its own.
But still: can we do it?
Well, the one-word answer to this question: Yes!
In fact, there are a few reasons to believe that we can create AI systems that model how we think. It’s not necessarily a very easy process, but it’s possible.
Most of our thoughts don't involve a lot of planning. We're not constantly forming complex plans for the future; most of our everyday thinking involves common sense and simple reasoning, things that AI systems could model fairly easily.
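That kind of simple, everyday reasoning can be approximated with very plain machinery. As a toy sketch, here is a tiny forward-chaining rule engine; the "common sense" rules are made up for illustration and are obviously nothing like a serious model of human thought:

```python
# Toy forward-chaining inference: made-up "common sense" rules of the
# form (premises, conclusion). Illustration only.
RULES = [
    ({"it is raining"}, "the ground is wet"),
    ({"the ground is wet"}, "wear boots"),
    ({"it is cold", "it is raining"}, "take an umbrella"),
]

def infer(facts):
    """Repeatedly apply rules until no new conclusions appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if premises <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"it is raining"}))
# includes "the ground is wet" and "wear boots",
# but not "take an umbrella" (we don't know it's cold)
```

Chaining a conclusion from one rule into the premises of the next is a crude stand-in for the step-by-step quality of ordinary reasoning.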
Our thoughts are far more complex than any current AI system can fully simulate. But that doesn't mean the human brain can't be modeled at all. We can come up with a rough approximation of how we think, which is great because that approximation will be relatively easy to debug and verify.
We might not be able to create a perfect copy of the human brain, but that doesn't mean an AI system can't model how it works in some form. And that's still enough to handle most of the jobs that AI systems need to perform.
I’m not saying that the human brain is easy to replicate, but it’s certainly possible. And that might be enough for us to create a system that can learn from us in a similar way to how we learn from each other.
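Learning from examples, rather than being explicitly programmed, is itself a brain-inspired idea. The classic illustration is the perceptron learning rule, which nudges a simple neuron's weights toward the examples it gets wrong, a very loose analogue of learning from a teacher. A minimal sketch, here learning a logical AND from labeled examples (the learning rate and pass count are arbitrary choices that happen to work for this tiny dataset):

```python
# Perceptron learning rule: adjust weights toward misclassified examples.
examples = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]  # logical AND

weights = [0, 0]
bias = 0
rate = 1  # integer learning rate keeps the arithmetic exact

for _ in range(10):  # a few passes over the examples is enough here
    for inputs, target in examples:
        output = 1 if sum(x * w for x, w in zip(inputs, weights)) + bias > 0 else 0
        error = target - output
        weights = [w + rate * error * x for w, x in zip(weights, inputs)]
        bias += rate * error

def predict(xs):
    return 1 if sum(x * w for x, w in zip(xs, weights)) + bias > 0 else 0

print([predict(x) for x, _ in examples])  # [0, 0, 0, 1] -- matches AND
```

The system was never told what AND means; it converged on the behavior purely by being corrected on examples, which is the sense in which it "learns from us."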
Clearly, it's a different way of modeling the mind, but it works for the purpose of learning and developing AI systems. So it seems like a realistic path to modeling the human mind.
What’s next for human-like AI?
Right now, we’re still a long way away from creating an AI system that can truly understand the way humans think. But that doesn’t mean it won’t happen in the future. It’s quite possible that within our lifetimes, we’ll develop a system that can simulate how humans think.
The reason I say this is that there's so much research and development going on right now, and so many people trying to advance the technology, that modeling an AI system on the human brain seems likely to happen eventually.