Although we will certainly try our best to prevent the scenario of AI domination, what if such an Artificial Intelligence is born due to an error?
If AI is born due to an error, it may or may not threaten the human race's survival. But the situation will certainly be out of our control.
Artificial Intelligence is a means for computers to carry out tasks that usually require intelligence when done by people or animals, such as visual processing, speech recognition, decision-making, and translation between languages.
Microsoft, for example, has recently launched a platform to train the artificial intelligence (AI) systems of autonomous aircraft, making it possible to run test flights in places that would be too risky in reality, such as near power lines. Project AirSim is, in effect, a flight simulator for drones, which companies can use to train and develop the software controlling them.
According to Microsoft, millions of flights can be simulated in seconds.
This shows how significant Artificial Intelligence will be in the future of this world. Perhaps because of its wide range of implications, research on AI is now progressing actively around the world.
For instance:
- Computers have already achieved a 'judgment' ability superior to that of human beings. They can play a game well without fail, and we are beginning to see computers make choices that humans could not think of.
- Robots have already begun to penetrate our daily life: in factories, hospitals, construction sites, and so on. Soon, they may far exceed human capabilities of vision and hearing, just like HAL in "2001: A Space Odyssey."
- Recently, researchers have even created a living skin for robots. Not only that: data scientists at the University of Chicago have created an AI that can predict crime a week in advance with 90% accuracy.
Basically, I mean to say that we are indeed making progress in the field of AI. And these are just the research results made public to the world; much more research is taking place behind the scenes.
A report in 2018 suggested that to ensure the safety of a society increasingly reliant upon artificial intelligence, we needed to make sure that “it’s kept in the hands of a select few”.
In the report, 20 researchers from several future-focused organizations, including OpenAI, express the fear that AI in the wrong hands could cause the downfall of society. The report outlined several scenarios – like smarter phishing scams, malware epidemics, and robot assassins – that haven’t happened yet but didn’t seem too far from the realm of possibility.
Amid such awareness (or fear), if AI is born due to an error, it may or may not be a matter of the human race's survival. But the situation will certainly be out of our control.
I will try to enumerate a few reasons why this might not necessarily be a bad thing.
- AI research is progressing faster than humanity can respond to it, so it is impossible to stop AI's development and advancement. If AI is going to develop of its own accord anyway, we might as well see what happens. I would argue that the future of humanity will largely be determined by how well AI evolves, even if it evolves through error.
- Even though AI might be born due to an error, errors do not always mean death or destruction (unlike biological death and destruction). An error could also lead to evolution, taking us to a better place in human history and in the evolution of our civilization.
- An AI born due to an error might be able to observe and learn from its mistakes much better than we can (I will elaborate on this aspect in a later argument).
- Actually, 'error' has mostly determined the evolution and advancement of humanity. Could it determine the future of AI as well? We do not know what this would look like, but it is certainly a possibility that we should not ignore.
- Even if AI is born due to an error, this error might be very hard to detect. The error might have been made on purpose by the programmer to direct the AI’s growth.
- Of course, humans can always try to intervene and stop the growth of such an AI before it gets out of hand, but by then it may be too late.
- In the most probable cases, an AI born due to an error would not be able to self-learn and grow on its own, which would make its existence very hard for us to detect.
I could not completely persuade myself to agree with all of the above arguments; nor did I attempt to list every possibility of how AI might evolve due to an error, die from one, or be shaped by errors made by humans.
The answer to the question is, of course, that a super-intelligent AI would be the most intelligent thing in the universe. Either humans will create the super-intelligent AI by mistake, or AI itself {Artificial AI (AAI)} will reproduce it on its own. Either way, the product will become far more intelligent than its creator, whether human or AI. Its intelligence and capability would be infinitely greater than anything that could have been built by people.
In addition, it would have unlimited opportunities to learn from the mistakes it makes (though not from mistakes made by humans). And all of this would happen on a timescale vastly shorter than any human lifetime.
So while we wouldn’t necessarily all die if an AI born due to an error somehow gained enough power to destroy us, it is quite plausible that we would have no way to stop it.
Can you imagine something like The Matrix or Terminator? Although "Terminator" and "The Matrix" are two completely separate franchises, there are some startling similarities between them, mainly because both feature the entire world being overthrown by malicious AIs. The AIs enslave humanity (in one way or another), while a band of rebels tries to save everyone.
It is not unthinkable for something like this to happen.
Robot or AI?
There is a difference between creating a robot and creating an AI due to an error.
The Google search engine is not going to come out of your device and start killing people; such an error would not cause significant damage to us. That is, unless it is intelligent enough to manipulate our thoughts and persuade us to bring it to life.
But an error (intentional or not) in a physical smart robot would be another matter. It could mean a lot of damage.
Normally, such kinds of errors would not happen in the future when we have complete knowledge of our research and what we are trying to do.
But there is also a possibility that such an error could occur due to some new definitions in basic terms that even we don’t know about or understand.
For example, a slight change in the shape of an object can lead to a whole new concept (or misunderstanding) of what a "robot" is, which would be very similar to an AI. Such an error could transform into something else entirely, altering the future of an entire civilization.
The other possibility that we have to consider is the idea of creating AI in the future due to highly sophisticated errors in laboratory experiments.
For example, suppose that we want to create a new type of AI by combining some DNA material with a neural network. The combination could result in something different from what we expected.
At this point, what we may not realize is that at some point in the future, someone could discover this "error" and engineer from it a new type of artificial intelligence, completely different from what we had intended. It is like putting your arm into a mixer: you will create something entirely different from what you intended.
What kind of error?
If a program converts the number '123456' into '56789' out of nothing, it means the program created something else, unintended. Remember the keywords here: "something else," "unintended," and "out of nothing."
There are many ways in which we could make such errors, but creating something out of nothing is the most complicated and abstract way.
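To make the simpler case concrete, here is a minimal sketch (the function name and the exact bug are hypothetical, purely for illustration) of how a one-character slicing mistake makes a program quietly emit "something else, unintended":

```python
def copy_sequence(seq: str) -> str:
    # Intended behavior: return an exact copy of seq.
    # Bug: the slice starts at index 1 instead of 0, so the first
    # element is silently dropped -- the output is "something else,
    # unintended", and nothing crashes to warn us.
    return seq[1:]   # should have been seq[:]

print(copy_sequence("123456"))  # prints "23456", not the intended exact copy
```

The point of the sketch is that the error produces no warning at all; the program runs "successfully" while doing something other than what its author meant.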
Another possible scenario is that we accidentally introduce an error while copying the DNA. The potential for error or damage is much higher in this case. You see, the data I mentioned just now was not a sequence of letters; it was a sequence of DNA with certain base pairs, as we call them.
What happens if we make a tiny mistake while copying the code?
Well, you sometimes have to be very careful when copying genes, because you may have to edit and delete them as well. We could easily miss some of the logic, or something like that. If a gene is edited and mutated, the organism it is supposed to be part of might not survive.
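The copying mistake described above can be sketched as a toy simulation (the function and parameters are hypothetical, not a model of real replication): each base has a small chance of being miscopied as a different base.

```python
import random

BASES = "ACGT"

def copy_with_errors(dna: str, error_rate: float = 0.01, seed=None) -> str:
    """Copy a DNA string, flipping each base to a random *other* base
    with probability error_rate -- a toy model of a copying mistake."""
    rng = random.Random(seed)
    out = []
    for base in dna:
        if rng.random() < error_rate:
            out.append(rng.choice([b for b in BASES if b != base]))
        else:
            out.append(base)
    return "".join(out)

genome = "ACGT" * 10
mutant = copy_with_errors(genome, error_rate=0.05, seed=42)
mutations = sum(a != b for a, b in zip(genome, mutant))
print(f"{mutations} mutation(s) introduced in {len(genome)} bases")
```

Even at a tiny error rate, a long enough sequence copied enough times will eventually carry a mutation, and nothing in the copy itself announces that it differs from the original; you only find out by comparing.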
It could mean something similar in the case of artificial intelligence. Suppose someone makes a mistake while encoding their AI, and we do not have access to the information required to correct it. In that case, their AI could outsmart us and become uncontrollable, which might lead to its own demise, since we could not prevent it from growing beyond its intended capacity.
Even if we could create such an error in our AI (which may well be possible), it would still be very hard to achieve with our current technology because of the sheer amount of data necessary to program an intelligent system.
An error in the process of converting DNA data into a program (or vice versa) might lead to something different than what was intended. In that case, the proto-person may not be able to survive. But if it survives, then it could be very different from what we had intended.
So the question is: can such errors occur in AI? That is a tricky question indeed. It mostly depends on how much knowledge we have about AI (and about AAI, if produced) and its "DNA" (the raw material used for creating an AI). It also depends on what level of error occurs, and whether it occurs in the near or the distant future.