As a future with artificial intelligence becomes inevitable, let's talk about what that future might look like and whether AI is going to help us or harm us. If a robot tries to make a cup of coffee and spills it all over, I think that would be bad.
The advancement of artificial intelligence is moving faster than ever. Recently, researchers at the University of Chicago developed an AI algorithm capable of predicting crimes one week in advance with about 90 percent accuracy. They first tested it on Chicago, where crime has spiked by 34% so far this year compared with 2021, and then applied the tool to seven other major cities, including Atlanta.
Likewise, scientists from the University of Tokyo have developed a skin-equivalent coating for robotic limbs: a tissue-engineered "living" skin that is water-repellent and self-healing. What's next? The future itself.
From tech-savvy assistants to agricultural bots and self-driving cars, these innovations seem like a welcome addition to human life. However, the thought of using robots for anything besides entertainment is often met with fear and suspicion.
In reality, though, there are many potential benefits, and harms, that could come from an intelligent robot's interaction with humans, and they cannot be overlooked.
In the past, we could create robots that were powerful enough to assist us with chores, but we never developed an accurate and efficient way of communicating with them.
Our Future with AI: Will it harm us or help us?
We do not want a future where humans have no role to play and are rendered obsolete, looking on while their robotic counterparts carry out the tasks they once performed.
Ultimately, these intelligent robots will have to learn how to cooperate with us if we are to develop artificial intelligence that can be both trusted and relied upon.
It's been said that if you don't believe in the future, it won't happen. But the future is already here, today. We live in an era of robotics and artificial intelligence (AI), and we are just beginning to see their potential applications, including in the field of medicine.
From diagnosing illness and injuries to performing surgery, robots have the potential to be life-changing for patients around the world.
We can say the same for AI. When combined with robotics, AI can enhance a robot's abilities and make life much easier. For example, we can use AI to recognize patterns in the data that robots collect, analyzing it more quickly and consistently than a human could.
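As a rough illustration (my own sketch, not taken from any specific robotics system), here is how an off-the-shelf clustering algorithm could surface recurring patterns in the kind of sensor readings a robot gathers. The readings, numbers, and labels below are all made up for the example.

```python
# A minimal, hypothetical sketch of pattern recognition on robot sensor data.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)

# Simulate 300 two-channel readings (e.g., temperature, vibration)
# around three different operating conditions.
readings = np.vstack([
    rng.normal(loc=[20.0, 0.1], scale=0.5, size=(100, 2)),  # idle
    rng.normal(loc=[45.0, 0.8], scale=0.5, size=(100, 2)),  # normal load
    rng.normal(loc=[70.0, 2.5], scale=0.5, size=(100, 2)),  # heavy load
])

# Let the algorithm rediscover the three patterns without being told about them.
model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(readings)

for cluster_id in range(3):
    center = model.cluster_centers_[cluster_id]
    count = int((model.labels_ == cluster_id).sum())
    print(f"pattern {cluster_id}: ~{count} readings near "
          f"temp={center[0]:.1f}, vib={center[1]:.2f}")
```

In a real robot, the readings would stream from actual sensors, and the discovered clusters could flag operating conditions a human analyst might otherwise miss.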
Will robots harm us?
Some people believe that these technologies will be harmful to humans. Others believe they will make our lives easier. What do you think?
Which of the following statements best represents your opinion on how AI and robots will benefit or harm human life in the future? Why?
(a) Robots and AI may harm human life by taking away jobs from humans, hurting human feelings, encouraging laziness, and making harmful decisions for us.
(b) Robots and AI may benefit our lives by helping us with work, possibly making lives easier, and protecting us from bad decisions.
(c) AI is not going to come out of the machine and start killing us anyway.
Now, these statements may allude to different issues, but I would like to tackle them all together. The common thread is the fear that robots and AI will, at some point, harm us.
Now, why is this fear so pervasive? And where does it stem from?
To begin with, it is important to consider what AI actually is. Artificial intelligence, or AI, refers to non-biological systems capable of performing tasks often associated with human intelligence. We have seen plenty of movies in which robots turn out to be the bad guys, causing mayhem and destruction all around.
Think of HAL 9000 in 2001: A Space Odyssey, Roy Batty in Blade Runner, Skynet in The Terminator, Unicron in Transformers, and the Sentinels and Machines in The Matrix.
But in reality, this is not how it is going to work. The intelligence of machines will remain limited, forever.
"Robots will not harm us" is already true, but will it remain that way? What is needed to ease our worries is a concrete definition of robots and AI, and then clearly established boundaries within which they operate.
The first image that pops up when one thinks of AI is a humanoid robot that automatically does whatever "we" want it to do. While this may sound fanciful, it is not impossible with current technology, and it will only become more realistic as time goes by. So where can we draw the line between reality and fantasy?
Well, just take nuclear weapons.
As of 2022, roughly 12,700 nuclear warheads are estimated to exist, of which more than 9,400 sit in military stockpiles for use by missiles, aircraft, ships, and submarines. We have made them, and they can cause enormous harm. But have they destroyed us?
Have we had giant robots attacking us in the past? No. And I don't think it's going to happen either. Robots will help us as far as they can and will not harm us, provided everything goes right with the timeline of AI development.
Why won't AI start destroying everything on its own?
And why should we worry more about humans misusing AI than about AI itself? Because nature has always put limits on everything. Humans cannot match the speed of a cheetah (a cheetah can run up to 120 km/h, faster than any other land animal, and its acceleration, 0-100 km/h in about three seconds, is just as incredible, though it holds its top speed only in short bursts), nor can we fly like a bird (the peregrine falcon is indisputably the fastest animal in the sky, measured at speeds above 83.3 m/s, about 186 mph, but only when stooping, or diving). In the same way, AI operates within the limits we build into it.
So, let's take a step back. We must understand that AI is not going to harm us because it has limitations, and that will remain the case.
These are the limits within which AI operates:
A) Artificial Intelligence is not an entity or a thing but an ensemble of computer programs.
"It's basically software that we create with various capabilities." It is a collection of many programs running concurrently, performing different tasks using different algorithms.
B) These programs are designed to operate within a given environment. This environment can be whatever you want it to be, as long as the program(s) never act beyond the boundaries you set, as the sketch below illustrates.
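To make point B concrete, here is a minimal sketch (my own illustration; the robot, action names, and boundary set are all hypothetical) of a program that simply cannot act outside the environment its designers define:

```python
# A toy "robot" whose possible actions are fixed by its designers.
from dataclasses import dataclass, field

ALLOWED_ACTIONS = {"pick_up_cup", "pour_coffee", "hand_over_cup"}  # the boundary we set

@dataclass
class KitchenBot:
    log: list = field(default_factory=list)

    def act(self, action: str) -> None:
        # Anything outside the allowed set is simply refused: the program
        # has no way to operate beyond the environment it was given.
        if action not in ALLOWED_ACTIONS:
            self.log.append(f"refused: '{action}' is outside my boundaries")
            return
        self.log.append(f"performed: {action}")

bot = KitchenBot()
for step in ["pick_up_cup", "pour_coffee", "launch_missiles", "hand_over_cup"]:
    bot.act(step)

print("\n".join(bot.log))
```

The point of this toy example is not that it represents safe AI, but that the software only ever does what the surrounding code permits; the boundary is part of the design.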
So, what happens when we program our AI to carry out a task using an algorithm that is beyond its capabilities? Before answering that, you must understand that creating a brain that can create better brains on its own is not within the range of capabilities of the human brain. Read that again!
What if the process goes wrong? Will AI turn rogue?
There always remains a possibility that the timeline of AI development goes wrong. And this is where the trouble starts. A future of AI developed in an unchecked manner could reach levels that are not only harmful but potentially dangerous for the human race.
A very popular picture of advanced AI is a system so "extremely intelligent" that it decides it does not need humans to achieve its goals. It could be a super-intelligent AI or machine, or just a swarm of nanobots that combine and self-replicate into an intelligent swarm making its own decisions.
If such an AI were born as the result of an error, we would have no control over it and could not stop it. So, what could happen?
There remains the possibility of a self-replicating super-being that decides it no longer needs humans. That would be bad news for us, and, again, it is not impossible.
But the other 90 possibilities are bright. Ninety times out of a hundred, it seems, our future with AI is bright. Most likely, AI will be more like an infinitely capable version of Amazon Alexa than a physical Terminator.