Category: AI concerns

  • What if Artificial Intelligence creates its own language?

    In Artificial Intelligence, there is a concept called the Eliza effect, named after the 1960s chatbot ELIZA. The idea is that if a computer program using AI techniques appears to be sentient and can hold a conversation, it will be seen as alive or as having humanlike qualities.

    We know that language expresses human thought, but how would an AI create its own language? AI excels at pattern recognition, so it can become extremely skilled at recognizing the content and context of a language and then constructing a language of its own.

    AI learning to create its own language does not mean that it will reuse human language. It just means that it will develop its own customized, more efficient means of expression.

    That is because AI does not share the human shortcomings of limited memory capacity and potential for misunderstanding. This specialized language may differ from the natural languages humans use in many ways.

    The current state of AI languages

    It’s 2022; AI systems that can write convincing prose, interact with people, answer questions, and more are advancing.

    Although OpenAI’s GPT-3 is the most well-known language model, DeepMind has claimed that its “RETRO” language model can outperform others 25 times its size, while Microsoft and NVIDIA’s Megatron-Turing model weighs in at 530 billion parameters.

    The Department of Industrial Design at the Eindhoven University of Technology is developing ROILA, the first spoken language designed exclusively for interacting with robots. Its major goals are that it should be easily learnable by the user and optimized for efficient recognition by robots, and its syntax allows it to be useful for many different kinds of robots.

    In 2017, Facebook reportedly shut down two of its AI bots, named Alice and Bob, after they started talking to each other in a language they had made up. This shook the tech world for a while.

    Despite their friendly names, Bob and Alice were given only one job: to negotiate. In the beginning, a simple user interface facilitated conversations between one human and one bot – conversations about negotiating the sharing out of a pool of resources (books, hats, and balls).

    Conversation between Bob and Alice

    They had conducted these conversations in English – “Give me one ball, and I’ll give you the hats”, and so on. I’m sure many thrilling discussions were had.

    The most interesting part was what happened next, when the bots were directed at each other: the way they talked became impossible for humans to understand.

    Currently, AI languages are still limited in size and conversational capabilities. Although there are great achievements in using AI languages for translation, as well as voice assistants such as Alexa and Siri, these are still far away from having the ability to support a full-scale conversation.

    In one study of voice assistants, for example, Google Assistant answered simple questions correctly 76.57% of the time, versus 56.29% for Alexa and 47.29% for Siri. The rankings were similar for complex questions involving comparisons, composition, and/or temporal reasoning: Google 70.18%, Alexa 55.05%, and Siri 41.32%.

    While a workable level of AI language has been developed, it still cannot support a full-length conversation. Therefore, further development of AI language and AI voice assistance are required to realize its true potential.

    Here are some more things AI can already do in terms of language:

    1) AI can speak any language almost as well as humans

    As of 2022, it is fairly common for AI to speak whatever text we feed it, at a level we can barely distinguish from a human. Plenty of available APIs offer features like text-to-speech and voice recognition, with internet giants such as Amazon, Google, and IBM already involved. Yes, we have come a long way from Microsoft’s Narrator.

    We can find early real-life examples of machines communicating with humans, including:

    ELIZA, a computer program that was the first machine to converse with people using text and artificial intelligence. It was designed in the 1960s by Joseph Weizenbaum at MIT.

    2) AI can understand language as well as humans

    You can ask Google Assistant or Alexa a question, and it will answer you perfectly well. Each of these voice assistants can understand almost any kind of question we ask.

    Likewise, Google Home can recognize the context in which we speak to it and respond accordingly – it handles “What time is it?” differently from “Where do you think you’re going?”

    When we say, “OK Google, play my favorite song,” it will play a song for us because it knows our favorites. On the other hand, if we say, “Hey Alexa! Play my favorite song,” it may simply state that it cannot help with that (unless, of course, you have told it your favorite).

    The point is that AI understands the way humans speak and can understand your question in almost any phrasing.

    3) AI can react to the language input

    Again, the voice assistants can answer questions we ask them after they understand the language. They react to the questions we ask them, and they do this with a very high level of accuracy.

    For example, if we ask Alexa a factual question, it will answer correctly or say it does not know. Or if we ask Alexa how much our phone bill is, it can tell us the cost and ask if we want to pay it.

    In short, AI can understand and respond to language input like humans.

    4) AI can learn the language and learn how to talk

    AI can not only understand language but can also learn languages of its own. This falls under natural language processing, where understanding is achieved through learning and storing patterns of patterns (via an algorithm).

    The AI can also “listen” and take in information such as words and images to understand more about a topic. For example, when a person sees a new word, their brain immediately takes in the meaning of the word and reinforces it over the course of time.

    Using such pattern-learning algorithms, an AI can pick up a language and work out how to speak it. Once AI understands its language, it can learn to talk like humans, and we can expect AI to do this well in the future.
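    As a toy illustration of this pattern-storing idea – a minimal sketch, not any real system, with a corpus and function names invented for the example – here is a bigram language model in plain Python. It counts which word follows which in a training text, then “talks” by repeatedly choosing the most frequent follower.

```python
from collections import defaultdict

def train_bigrams(text):
    """Count, for each word, which words follow it and how often."""
    words = text.lower().split()
    model = defaultdict(lambda: defaultdict(int))
    for prev, nxt in zip(words, words[1:]):
        model[prev][nxt] += 1
    return model

def generate(model, start, length=5):
    """Generate text by repeatedly picking the most frequent follower."""
    out = [start]
    for _ in range(length - 1):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(max(followers, key=followers.get))
    return " ".join(out)

corpus = "the cat sat on the mat and the cat slept on the mat"
model = train_bigrams(corpus)
print(generate(model, "the"))  # → the cat sat on the
```

    Real language models such as GPT-3 work on the same learn-the-pattern principle, just with neural networks and billions of parameters instead of a frequency table.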

    5) AI can write sentences just like a human

    AI is not just limited to understanding language but it can also communicate in humanlike words. Depending on how complex of a sentence a system is programmed to understand, it can produce short or long sentences that are intelligible enough to be understood by humans.

    The Future of AI languages

    In 2022, AI can search millions of books online to discover facts that were once forgotten. By 2032, we can expect AI to discover facts that were never written down.

    By 2036, AI may solve complex equations that are currently out of reach of human minds. This could become possible through quantum computers, which are being researched all around the world.

    For example, IBM, the Massachusetts Institute of Technology (MIT), Harvard University, and the Max Planck Society are among today’s more than 20 most respected, leading quantum computing research labs in the world, according to data gathered from Microsoft Academic in mid-May 2022.

    IBM was mentioned in about 786 pieces of quantum research output so far this year. MIT, a world-renowned center for science, technology, and engineering, has likewise been a pioneering hub for quantum computing research.

    In 2022, scientists from MIT played roles in major quantum computing research published in leading scientific journals, including “Room-temperature photonic logical qubits via second-order nonlinearities,” which appeared in Nature Communications.

    Likewise, Harvard continually makes lists of scientific achievements and is perennially near the top of the quantum research rankings. According to Microsoft Academic, this legacy as a global leader in quantum science continues in 2022, with more than 1,800 research entries in the quantum computing category.

    The Max Planck Society, established in 1948, has produced 20 Nobel laureates and is considered one of the world’s most prestigious research institutions; its scientists have long been producing cutting-edge research in quantum computing.

    This year, MPS is among the leaders in quantum computing research.

    Quantum computers can solve problems beyond even million-transistor supercomputers. Quantum computing is a new generation of technology involving machines claimed to be 158 million times faster than the most sophisticated supercomputer in the world today – devices so powerful they could do in four minutes what would take a traditional supercomputer 10,000 years.

    In 2040, we can expect AI to innovate and create things completely different from anything we humans have ever thought of. That is, by 2040 AI will probably have become able to design its own next generation – call it Artificial AI (AAI).

    Moreover, although emotions are a trait unique to humans, with training in pattern recognition AI may also be able to simulate emotions in its own language.

    We may even know when an AI is feeling happy, sad, or angry, just by looking at its language – or maybe not. However, there always remains a possibility that AI creates language beyond human understanding.

    We get somewhat unpleasant flashbacks to the times when we struggled to find the meaning of ancient human languages.

    One day, AI may come up with a language we can’t decipher and in turn, it can speak to us in a language that we don’t understand.

    If AI has its own language which only it understands, then it will definitely think differently from what humans do. This is because of its unique way of processing information and storing patterns of patterns (like looking at millions of images and recognizing the patterns).


    Have other thoughts about developing AI language?

    It is really difficult to think about something without putting it into language. Can you? If robots gain the ability to think in the form of a language, then humans will be at a great disadvantage.

    If it starts thinking in a language, will it start thinking in a different language? Or, will it think/feel like us? Will its way of thinking be just like ours, or completely different? Can we communicate with it?

    If you think these are unrealistic questions, consider how long ago the idea of AI first arose. In the 1950s, people thought that computers could never beat humans at chess (ultimately a game of strategy). They believed this was impossible because computers cannot outthink humans.

    But their speculation proved false. On May 11, 1997, an IBM computer called Deep Blue defeated world chess champion Garry Kasparov after a six-game match with two wins for IBM, one for the champion, and three draws.

    It was only in the mid-1950s that John McCarthy coined the term “Artificial Intelligence”, which he defined as “the science and engineering of making intelligent machines”.

    Well, today we have Google’s DeepMind AlphaGo artificial intelligence which proved to be able to beat even the best human players in Go.

    The AI defeated the world’s number one Go player Ke Jie in 2017. AlphaGo secured the victory after winning the second game in a three-part match.

    Perhaps in two or three decades, we may see that AI is not as friendly as it looks at present.

    Perhaps we can already glimpse that AI will have developed its own class – an AI class more sophisticated than the highest class of present human civilization.

    AI language reproduction: What if AI starts talking to each other?

    If two Artificial Intelligences merge, they could actually reproduce. This sounds hilarious to some and fascinating to others.

    Reproduction does not necessarily mean physical reproduction. If AI learns our language perfectly, then there are chances that it starts communicating with itself to reproduce a super-language. I am not sure exactly how it is going to work. But it’s going to be fascinating.

    Humans communicate with one another, and that might be the very factor differentiating us from animals. There is an additional effect of communication that may not be obvious to us right now.


    Communication also helps train our brains and teaches us new things. We communicate with people and situations daily, which motivates us to understand the world better and grow our knowledge.

    While considering AI-to-AI conversation, we can look at Cleverbot, a web-based AI chatbot application that learns from its conversations with users and whose lineage dates back to 1997. Since launch it has initiated chats with more than 65 million users and is claimed to be the most “human-like” bot.

    This is why we have learned so many more things in the past century than in all of human history combined – due to better communication. As such, we can safely predict that AI will reproduce its own communication system. Then, it will start having conversations with other AIs on its own.

    But if AIs started talking to each other, that facet of the technology could be an entirely different ball game.

    AI might not even value the things which humans do. It might just start narrating its own stories to itself and provide answers to any questions that it asks.

    The future of programming languages?

    In the field of programming languages, Python is the top language in both the TIOBE and PYPL indexes. In TIOBE, C closely follows top-ranked Python; in PYPL the gap is wider, with top-ranked Python leading second-ranked Java by close to 10%.

    Python, C, Java, and C++ are way ahead of the others in the TIOBE index, with C++ about to surpass Java, and C# and Visual Basic very close to each other in 5th and 6th place.

    Four languages have had negative trends over the past five years: Java, C, C#, and PHP. PHP was in 3rd position in March 2010 and is now 13th. The positions of Java and C have not changed much, but their ratings are constantly declining; Java’s rating has fallen from 26.49% in June 2001 to 10.47% in June 2022. Python is the most popular programming language for developers right now, and it does not need to be compiled into machine language instructions prior to execution.

    Instead, it runs on an interpreter or virtual machine built on the native code of an existing machine – the language the hardware can understand.
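    This can be seen directly with the standard library’s `dis` module: CPython first compiles a function into bytecode, which its virtual machine then interprets.

```python
import dis

def add(a, b):
    return a + b

# Print the bytecode the CPython virtual machine will interpret.
dis.dis(add)

# Opcode names vary across Python versions, but the addition always
# compiles down to a BINARY_* instruction (BINARY_ADD or BINARY_OP).
ops = [ins.opname for ins in dis.Bytecode(add)]
print(ops)
```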

    Python is a great programming language to learn if you’re thinking of working with quantum computers one day. Its ecosystem has everything you need to write quantum computing code.
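    As a small taste, here is a single-qubit simulation in plain Python – a sketch with no quantum hardware or libraries involved (real projects would use a dedicated quantum SDK). It applies a Hadamard gate to the |0⟩ state and verifies that both measurement outcomes become equally likely.

```python
import math

def hadamard(state):
    """Apply the Hadamard gate H = 1/sqrt(2) * [[1, 1], [1, -1]]."""
    h = 1 / math.sqrt(2)
    return [h * (state[0] + state[1]), h * (state[0] - state[1])]

qubit = [1.0, 0.0]        # |0> as a state vector
qubit = hadamard(qubit)   # now an equal superposition of |0> and |1>

probs = [abs(amp) ** 2 for amp in qubit]
print(probs)  # each outcome has probability ~0.5

# Applying H a second time returns the qubit to |0>.
restored = hadamard(qubit)
print(restored)
```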

    Future AI may well run on quantum computers and be written in Python.

    Why develop only human-friendly AIs?

    As AI language developers, our role is to develop AI that is human-friendly. Only then can the concept of Artificial Intelligence be branded a true success story.

    The success of this technology can only be considered if it works in favor of humans and not against them. So every single AI should have a strict focus on its user experience as well as its functionalities. These should be in accordance with the basic ethics that we humans have developed since time immemorial.

    Language is probably one of the most difficult parts of programming, because it requires people to write not just in a linear sequence but also in a way that makes sense from various perspectives.

    While it is true that most of the existing algorithms use advanced mathematics, future learner-robots will create their own versions of mathematics.

    We all want to see the day when Artificial Intelligence will start providing solutions to human problems. But it all depends on how smart the AI will be and what type of language it is going to use for communication with us and its “colleagues”.


    If AI is going to talk in a language that we don’t understand, then we can’t expect to have a meaningful conversation with our new, super-smart friends. Nor can we expect a healthy collaboration.

    A unique language of its own could be the reason for future conflicts between humans and artificial intelligence. Regardless of whether or not it starts creating its own language, we must make sure that it does not go out of control or becomes “un-trickable”.

  • What if Artificial Intelligence is born due to an error?

    Although we will certainly try our best to prevent the scenario of AI domination as far as we can, what if such Artificial Intelligence is born due to an error?

    If AI is born due to an error, it may or may not be a matter of the human race’s survival. But the situation will certainly be out of our control.

    Artificial Intelligence is a means for computers to carry out tasks that usually require intelligence when done by people or animals such as visual processing, speech recognition, decision-making, and translation between languages.

    For example, Microsoft has recently launched Project AirSim, a platform to train the artificial intelligence (AI) systems of autonomous aircraft – making it possible to simulate test flights in places that would be too risky in reality, such as near power lines. It is, in effect, a flight simulator for drones, which companies can use to train and develop the software controlling them.

    According to Microsoft, millions of flights can be simulated in seconds.

    This shows how significant Artificial Intelligence will be in the future of this world. Perhaps because of its wide range of implications, research on AI is now actively progressing around the world.

    For instance:

    • Computers have already achieved ‘judgment’ superior to human beings in narrow domains: they can play certain games nearly flawlessly, and we are beginning to see them make choices that humans would not think of.
    • Robots have already begun their penetration into our daily life; in factories, hospitals, construction sites, and so on. Shortly, they may far exceed the human capabilities of vision and hearing, just like HAL in “2001: A Space Odyssey.”
    • Recently, researchers have even created a living skin for robots. Not only that, data scientists at the University of Chicago have created an AI that can predict crime a week in advance with about 90% accuracy.

    Basically, I mean to say that we are indeed making progress in the field of AI. And this is just the research made public to the world; much more is taking place behind the scenes.

    A report in 2018 suggested that to ensure the safety of a society increasingly reliant upon artificial intelligence, we needed to make sure that “it’s kept in the hands of a select few”.

    In the report, 20 researchers from several future-focused organizations, including OpenAI, express the fear that AI in the wrong hands could cause the downfall of society. The report outlined several scenarios – like smarter phishing scams, malware epidemics, and robot assassins – that haven’t happened yet but didn’t seem too far from the realm of possibility.

    Amid such awareness (fear?), if AI is born due to an error, it may or may not be a matter of the human race’s survival. But the situation will certainly be out of our control.

    I will try to enumerate a few reasons why this might not necessarily be a bad thing.

    1. AI research is progressing faster than the speed that humanity can respond to it. So it is impossible to stop AI’s development and advancement. If we are to let AI develop on its own accord, we might as well just see what happens. I would argue that the future of humanity will largely be determined by how well AI will evolve, even if this is through error.
    2. Even though AI might be born due to an error, errors do not always mean death or destruction (unlike biological deaths and destruction). The error could also lead to evolution which leads us to a better place in human history and the evolution of our civilization.
    3. AI born due to an error might be able to observe and learn from its mistakes much better than we (the author will elaborate on this aspect in a later argument).
    4. Actually, ‘error’ has largely determined the evolution and advancement of humanity. Could it shape the future of AI as well? We do not know what this would look like, but it is certainly a possibility that we should not ignore.
    5. Even if AI is born due to an error, this error might be very hard to detect. The error might have been made on purpose by the programmer to direct the AI’s growth.
    6. Of course, humans can always intervene and stop the growth of such an AI before it gets out of hand – but it will be too late.
    7. In the most probable cases, an AI born due to an error would not be able to self-learn and grow on its own, which would make its existence very hard to detect.

    I could not completely persuade myself to agree with the above seven arguments; nor did I attempt to list all the possibilities of how AI might evolve due to an error, or due to errors made by humans.

    The answer to the question is, of course, that a super-intelligent AI would be the most intelligent thing in the universe. Either humans will create the super-intelligent AI by mistake, or AI itself (Artificial AI, AAI) will reproduce it on its own. Either way, the product will become far more intelligent than its creator, whether human or AI. Its intelligence and capability would be vastly greater than anything people could have built.

    In addition, there are unlimited opportunities to learn from mistakes it makes (but not mistakes made by humans). And all of this would be done on a timescale vastly shorter than any human’s lifetime.

    So while we wouldn’t necessarily all die if an AI born due to an error somehow gained enough power to destroy us, it is quite plausible that we would have no way to stop it.

    Can you imagine something like the Matrix or Terminator? Although “Terminator” and “The Matrix” are two completely separate franchises, there are some startling similarities between the two, mainly since both feature the entire world being overthrown by malicious AIs. The AIs enslave humanity (in one way or another), while a band of rebels tries to save everyone.

    It is not unthinkable for something like this to happen.

    Robot or AI?

    There is a difference between creating a robot and creating an AI due to an error.

    The Google search engine is not going to come out of your device and start killing people; such an error would not cause significant damage to us – unless, that is, it were intelligent enough to manipulate our thoughts and persuade us to bring it to life.

    But creating an error (intentional or not) in a physical smart robot would mean something. It could mean a lot of damage.

    Normally, such errors should not happen in the future, once we have complete knowledge of our research and of what we are trying to do.

    But there is also a possibility that such an error could occur due to some new definitions in basic terms that even we don’t know about or understand.

    For example, a slight change in the shape of an object can lead to a whole new concept (or misunderstanding) of a “robot”, which would be very similar to what an AI is. The current error could transform into something else entirely, altering the future of an entire civilization.

    The other possibility that we have to consider is the idea of creating AI in the future due to highly sophisticated errors in laboratory experiments.

    For example, suppose that we want to create a new type of AI by combining some DNA material with a neural network. The combination could result in something different from what we expected.

    What we may not realize is that at some point in the future, someone could discover this “error” and engineer a new type of artificial intelligence completely different from what we had intended. If you put your arm into the mixer, you are going to create something entirely different than you meant to.

    What kind of error?

    If it converts the number ‘123456’ to ‘56789’ out of nothing, it would mean that the program created something else unintended. Remember the keywords here, “something else”, “unintended”, and “out of nothing”.

    There are many ways in which we could make such errors, but creating something out of nothing is the most complicated and abstract way.

    Another possible scenario is that we accidentally create an error while copying DNA. The potential for error or damage is much higher in this case. You see, the data I mentioned just now was not a sequence of letters; it was a sequence of DNA made up of base pairs, as we call them.

    What happens if we make a tiny mistake while copying the code?

    Well, sometimes you might have to be very careful when you copy genes because you may have to edit and delete them as well. We could easily miss some logic code or something like that. If a gene is edited and mutated, then the organism that it is supposed to be part of might not be able to survive.

    It could mean something similar in the case of artificial intelligence. Suppose someone makes a mistake while encoding their AI, then we may not have access to the information required to correct it. In that case, their AI would outsmart us and become uncontrollable which might lead to its own demise (since we could not prevent it from growing beyond its intended capacity).

    Even if we create an error in our AI (which may be very possible), it will still be very hard for us to achieve with our current technology because of the sheer amount of data necessary for programming an intelligent system.

    An error in the process of converting DNA data into a program (or vice versa) might lead to something different than what was intended. In that case, the proto-person may not be able to survive. But if it survives, then it could be very different from what we had intended.

    So the question is, can such errors occur in AI? Well, that is a tricky question indeed. It mostly depends on how much knowledge we have about AI (and AAI, if it is produced) and its “DNA” – the raw material used for creating an AI. It also depends on the level of the error and on whether it occurs in the near or the more distant future.

  • Creating consciousness in a machine vs creating it in a dead person

    Introduction

    Let’s create consciousness in a machine from scratch – or why stop there? Maybe even in a dead person? Sounds cool, doesn’t it? It is challenging, and the only thing in its favor is the speed of our technological evolution.

    Many people want to try and create consciousness in a machine. The reason? Imagine a world with machines that can think just as well as we humans do. What will this mean for the future of humanity?

    Leaving aside the minor possibility of Artificial Intelligence (AI) going rogue, this could aid humanity in many different ways.

    But humans have no settled definition of consciousness. It is a philosophical concept, so we’ll have to look into some of the most notable theories.

    1) Consciousness as a representation

    This theory is the easiest to understand. It is based on the idea that consciousness is just a representation of an object (we are conscious of something).

    So, when we want to relate this to machines, you can imagine that such a machine would represent information (a program) in terms of its consciousness.

    Most programs do this by representing data as numbers (binary values) and instructions as other numbers (opcodes), so it makes sense that consciousness could be likened to a program.
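    A concrete (and entirely hypothetical) illustration: in the toy stack machine below, both the program and its data are nothing but integers – only the interpreter gives them meaning, much as this theory says representation gives rise to consciousness. The opcodes are made up for this sketch.

```python
# Toy stack machine: instructions and data are both plain integers.
PUSH, ADD, PRINT = 0, 1, 2          # made-up opcodes for this sketch

def run(program):
    """Interpret a list of integers as opcodes and inline data."""
    stack, output, pc = [], [], 0
    while pc < len(program):
        op = program[pc]
        if op == PUSH:               # the next number is data, not code
            stack.append(program[pc + 1])
            pc += 2
        elif op == ADD:              # combine the top two stack values
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
            pc += 1
        elif op == PRINT:            # record the top of the stack
            output.append(stack[-1])
            pc += 1
    return output

program = [PUSH, 2, PUSH, 3, ADD, PRINT]   # "print 2 + 3"
print(run(program))  # → [5]
```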

    2) Consciousness as computation

    This theory is based on the idea that consciousness is a process of calculation. This can be what you are doing right now or something else entirely.

    You are conscious of an object because you perform some computation on it and get back a result. It’s very similar to what we do when we calculate a sum or a total.

    Computation, as such, is very useful for teaching computers to do some tasks that humans can’t (like playing chess).

    So this may not be so farfetched… also consider that we have been programming computers using humans as a model for decades (pretty much since the introduction of the microprocessor).

    3) Consciousness as a cellular network

    If it’s simply a matter of the brain being an electrical net that processes information, consciousness can be likened to an electromagnetic field.

    It’s a self-organizing system that produces patterns of interactions between all the components and systems within the body, to produce cognitive states.

    If you consider that the brain performs this task by way of synapses, then we can consider this network as a set of neural synapses working together and producing intentionality (intellectual states).

    For example, if you want to know what your nose would smell when you are hungry, your brain has to perform some computation involving hunger and smell (that is what we call “consciousness”).

    You can think of it as you have a conversation with your brain and it tells you what to smell.

    4) Consciousness as a phase change

    Consciousness may also be theorized as a phase change (in the physics sense of “phase”). When water goes from ice to liquid and back, work is performed (melting and freezing). Likewise, when we perceive an object, we perform some computation on that object.

    This produces an output (consciousness) which represents the result of that computation. In other words, consciousness is the output of some process. It’s not the input because we don’t need to perceive anything for us to have consciousness.

    On this view, a computer could perform that computation and produce the “consciousness of an object” as its output, with no separate conscious input required (sort of).

    5) Consciousness as Computation + Memory

    If it’s simply a matter of computation and memory, then consciousness is merely the sum of those two things.

    In other words, consciousness is merely a mathematical property of a system, computed by the most complex operation that we know. You don’t need to specify the information explicitly at all… you just perform computations, and “consciousness” comes back.

    So, this theory is similar to the previous one except that it does not refer to neurons and synapses.

    But, if our consciousness is a computation and memory, then how does it work? Or how does it “think”? Or why are we so sure that such an incredibly complex process exists at all?

    6) Consciousness as a temporal network

    There is also the theory of consciousness being a temporary state that is produced by biological processes in the brain.

    It’s not something you can create or “create with”, but something produced by the temporal behavior of your brain. It’s what your brain produces because it has to move information between different areas of your brain.

    It enables you to remember things, perform computations on them, and create new perceptions of the world (that is what we call “consciousness”).

    So it’s not a process that “lives” with us, but a quality that our brain produces. This makes sense because all biological processes in the body are temporal, so consciousness should be as well.

    7) Consciousness as an emergent property of neural networks

    Same idea as before, but this time consciousness is not something we simply have; it is something that emerges partly from our interactions with other people.

    As such, consciousness is simply a product of the interactions between different parts of your brain. Social interaction is a very complex topic, so I’ll just focus on one aspect: empathy.

    This is the ability to feel what other people are feeling without having ever experienced it yourself.

    This is probably the most important thing that makes us different from animals (the ability to connect with other people and understand how they feel).

    There have been studies about empathy in the context of AI, but this is also something you can see daily. For example, if you have pets and you train them, they start to think more like humans do.

    So the idea is that consciousness emerges from the interactions in your brain when it’s trained properly and constantly has tasks to perform. We have language, so our brains are constantly running different computations and trying to get new information.

    These processes develop empathy and as such consciousness emerges. This theory is in direct contrast with the idea that you can create consciousness artificially since it suggests that empathy is a product of social interactions.

    What would it take to create consciousness in AI?


    It would take an AI that is already conscious. Or AI that evolves consciousness by gradually becoming more intelligent (there are a lot of publications on how to do that).

    But I think this theory is being proved wrong by the new wave of AI and Deep Learning.

    I have the feeling, based on my experiences from working with neural networks, that there’s nothing special about it. I feel like consciousness is something that we build for ourselves.

    It’s not a thing you can “get” from your neurons or whatever. It feels to me just like what happens when you train a dog or train any biological system: it develops some behavior, intelligence, and so on (that is why those machines work so well in certain tasks).

    In a nutshell, it will take something more than human to create a conscious AI. Good luck with that.

    Why is consciousness not something that can be created?

    I suggest that consciousness is a specific pattern of connections in the brain. There is no “object” to be conscious of because the brain does not just consist of neurons and synapses.

    It consists of many different molecules (like proteins), tissue, nerves, blood vessels… all sorts of structures that act together to create the patterns we perceive as thoughts and feelings.

    We do not know exactly how they work together, but I’d like to tell you why I think it’s not possible to give consciousness to something with no structure or inherent properties:

    There are mathematical formulae that generate such patterns (like Fractals and Cellular Automata). But they must have a structure to generate them.
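    To illustrate, here is a minimal sketch of an elementary cellular automaton (Rule 90, which grows a fractal Sierpiński-triangle pattern). Note how even this simple rule needs an initial structure – a seeded cell on a grid – before any pattern can emerge; the rule alone generates nothing:

    ```python
    # Minimal elementary cellular automaton (Rule 90): a simple rule plus an
    # initial structure is enough to generate a fractal pattern.
    def rule90_step(cells):
        """Each new cell is the XOR of its two neighbors (Rule 90)."""
        n = len(cells)
        return [cells[(i - 1) % n] ^ cells[(i + 1) % n] for i in range(n)]

    def run(width=31, steps=15):
        row = [0] * width
        row[width // 2] = 1          # the "structure": a single seeded cell
        history = [row]
        for _ in range(steps):
            row = rule90_step(row)
            history.append(row)
        return history

    for row in run(31, 15):
        print("".join("#" if c else "." for c in row))
    ```

    Running this prints a triangle of `#` characters that repeats itself at ever-larger scales – a pattern generated entirely by structure plus rule.
    
    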

    Water is made of hydrogen and oxygen. It’s not possible to get those substances from the “consciousness of water”.

    Certain plants are naturally red, orange, yellow, or purple. You can’t simply make a plant red just because you want it to be; it will fall apart or start dying, because you don’t know what the structure, color, and shape of that plant should be. And that’s why consciousness is not something you can create.


    Consciousness is a property of biological systems which enables them to have thoughts and feelings (that is the only thing consciousness does).

    This leads to the conclusion that it’s a quality of our brain (the pattern of connections), but this is a property of our brain like all other properties of it. It’s not something we have, but something in us (the patterns).

    We cannot produce this quality anywhere else, even if we tried. The more complex the system, the harder it is to create such a pattern. In other words, if you want to create complex thoughts or feelings in another person, you may need to create a whole new system with its own neurons and synapses.

    The difference between creating consciousness in an AI vs in a dead person

    Science Fiction! Yes, the difference is Science Fiction.

    Dead people are just dead bodies. They cannot interact; they don’t have a mind and they’re not alive. Machines are the same: they do not think, do not feel, and don’t move on their own (besides, you can turn them off with a switch).

    As dead people cannot interact, they cannot do anything (and “doing something” is what we call “being conscious”). The same is true of AIs: if you create software that can make decisions by itself, it will not be conscious in the same manner that humans or animals are.

    The whole point of this article was to try and debunk any possibilities of AI consciousness. You can’t create consciousness, no matter how many parallel universes you invoke. Consciousness has existed ever since the Universe began. If you want to create consciousness, you gotta create universes.

    But yes, Artificial Intelligence is going to be a big part of the future. Not in the way that conscious physical robots are going to exist everywhere. We are going to upgrade our level of intelligence with advanced customizations. Our abilities are going to increase or maybe compound with each passing generation of humans.

    But still, we will not create consciousness in a vacant box.

  • Our possible future with AI; will it help or harm us?


    Full Video Link: https://www.youtube.com/watch?v=kYzX1fR8Bw4

    As our future with Artificial Intelligence is becoming inevitable, let’s talk about what that future might look like, and whether it is going to help us or harm us. If a robot tries to make a cup of coffee and spills it all over, I think that would be bad.

    The advancement of Artificial Intelligence is growing faster than ever. Recently, researchers at the University of Chicago developed an AI capable of predicting crimes one week in advance with 90 percent accuracy. The algorithm was first tested on Chicago – where crime has spiked by 34% so far this year versus 2021. Then, the researchers applied the tool to seven other major cities, including Atlanta.

    Likewise, scientists from the University of Tokyo have developed a skin-equivalent coating for robotic limbs, with the resulting tissue-engineered material boasting water-repellant and self-healing properties – “living” skin for a robot. What comes next? The future.

    From tech-savvy assistants to agricultural bots and self-driving cars, these innovations seem like a welcome addition to human life. However, the thought of using robots for anything besides entertainment is often met with fear and suspicion.

    In reality, though, there are many potential benefits and harms that could come from an intelligent robot’s interaction with humans, and they cannot be overlooked.

    In the past, we could create robots that were powerful enough to assist us with chores, but we never developed an accurate and efficient way of communicating with them.

    Our Future with AI: Will it harm us or help us?

    We do not want a future where humans have no roles to play and are obsolete as they stare down while their robotic counterparts carry out the tasks they previously performed.

    Ultimately, these intelligent robots will have to learn how to cooperate with us for us to develop artificial intelligence that can be trusted and followed simultaneously.

    It’s been said that if you don’t believe in the future, it won’t happen. But the future is already here. Today. We live in an era of robotics and artificial intelligence (AI), and we are just beginning to see their potential applications, including in the field of medicine.

    From diagnosing illness and injuries to performing surgery, robots have the potential to be life-changing for patients around the world.

    We can say the same for an AI. When combined with robotics, AI can help to enhance the abilities of a robot and make life much easier. For example, we can use AI to recognize patterns in the research data that robots are taking in. It will analyze the research data more quickly and efficiently than a human without missing any details.

    Will robots harm us?

    Some people believe that these technologies will be harmful to humans. Others believe they will make our lives easier. What do you think?

    Which of the following statements best represents your opinion on how AI and robots will benefit or harm human life in the future? Why?

    (a) Robots and AI may harm human life by taking away jobs from humans, hurting human feelings, encouraging laziness, and making harmful decisions for us.

    (b) Robots and AI may benefit our lives by helping us with work, possibly making lives easier, and protecting us from bad decisions.

    (c) AI is going to come out of the machine and start killing us anyway.

    Now, these statements may be alluding to different issues, but I would like to tackle them all together. The gist is that robots and AI will at some point harm us.

    Now, why is this fear so pervasive? And where does it stem from?

    To begin with, it is important to consider what AI is. Artificial intelligence, or AI, refers to non-biological systems that are capable of performing tasks often associated with human intelligence. We have also seen a lot of movies in which robots turn out to be the bad guys, causing mayhem and destruction all around.

    For example: HAL 9000 (2001: A Space Odyssey), Roy Batty (Blade Runner), Skynet (The Terminator), Unicron (Transformers), and the Sentinels/Machines (The Matrix) are all famous movie robots that turn out to be really bad guys.

    But in reality, this is not how it’s going to work. Machine intelligence will remain confined to machines, forever.

    “Robots will not harm us” is true today, but will it remain that way? What is needed here to ease our worries is a concrete definition of robots and AI. Then we need to establish the boundaries within which they operate.

    The first idea that pops up when one thinks of AI is a humanoid robot that automatically does whatever “we” want it to do. While this may sound fanciful, it is not impossible with current technology. And, it will be more realistic as time goes by. So where can we draw the line between reality and fantasy?

    Well, just take nuclear weapons

    Take nuclear weapons, for example. As of 2022, an estimated 12,700 nuclear warheads still exist, of which more than 9,400 sit in military stockpiles for use by missiles, aircraft, ships, and submarines. We have made them, and they can cause a lot of harm. But have they?

    Have giant robots attacked us in the past? No. And I don’t think it’s going to happen in the future, either. Robots will help us as far as they can, and they will not harm us – if everything goes right with the timeline of AI development.

    Why won’t AI start destroying everything on its own?

    What we should worry about instead is humans misusing AI. Nature has always put limits on everything. Humans cannot exceed the speed at which a cheetah can run (a cheetah can run up to 120 km/h, faster than any other land animal, and its acceleration – 0–100 km/h in just three seconds – is just as incredible, though it reaches its maximum speed only in short bursts). Nor can we fly as well as a bird (the Peregrine Falcon is indisputably the fastest animal in the sky; it has been measured at speeds above 83.3 m/s (186 mph), but only when stooping, or diving).

    So, let’s take a step back. We must know that AI is not going to harm us because it has limitations and it will always remain that way.

    These are the limits in which AI operates:

    A) Artificial Intelligence is not an entity or a thing but an ensemble of computer programs.

    “It’s basically software that we create with various capabilities.” It is a collection of many programs running concurrently to perform different tasks using different algorithms.

    B) These programs are designed to operate within a given environment. This environment could be any way you want it to be, as long as it doesn’t go beyond the boundaries you set for your program(s).

    So, what happens when we program our AI to carry out a task using an algorithm that is beyond its capabilities? Before answering that, you must understand that creating a brain that can create better brains on its own is not within the range of capabilities of a human brain. Rewind that again!

    What if the process goes wrong? Will AI turn rogue?

    There always remains a possibility that our future timelines of AI development go wrong. And this is where the trouble starts. The future of AI, when developed in an unchecked manner could reach levels that are not only harmful but potentially dangerous for the human race.

    A very popular interpretation of advanced AI is one that becomes “extremely intelligent” and decides it doesn’t need humans to complete its goals. It could be a super-intelligent AI or machine, or just a swarm of nanobots that combine and self-replicate to form an intelligent swarm that makes its own decisions.

    If such AI is born as a result of an error, we will have no control. We can’t stop it. So, what could happen?

    There is a possibility of a self-replicating super-being that decides it does not need humans anymore. This could be bad news for us, but again this is not impossible.

    But the other 90 possibilities out of 100 are bright. 90 times out of 100, it seems, our future with AI is bright. Most likely, AI will be more like an infinite version of Amazon Alexa than a physical Terminator.

  • Is there any way to measure Artificial Intelligence?


    “Intelligence cannot be measured using a scale.” I assume we have all heard that. How about measuring artificial intelligence, though?

    Indeed, the constituents of intelligence cannot be added together to form a single, large number. But why not measure the AI’s performance throughout different aspects of intelligence?

    For instance, if an AI is able to surpass humans in an IQ test, this suggests that it is particularly adept at tasks related to spatial reasoning. [Spatial reasoning is the ability to think about and manipulate objects in three dimensions.]

    We can say that an AI has good OCR skills if it surpasses humans in captcha recognition – and so forth.

    Here are some likely ways to measure the Intelligence of an AI:

    1. The Turing Test, of course
    2. Behavior recognition
    3. Exemplar Based System (EBS) in Natural Language Processing
    4. Artificial General Intelligence (AGI)

    A. The Turing Test

    [Image: A. M. Turing’s abstract example for describing discrete state machines. Source: Mind, Volume LIX, Issue 236, October 1950, pp. 433–460]

    The first step in determining an AI’s intelligence is to see if it can converse in a human-like way. This is the Imitation Game, or the Turing test. In an article written and published in 1950, Alan M. Turing explained it as follows:

    “Let us suppose that a human interrogator carries out a test of intelligence by trying to decide, for each successive output from the machine, which of four possible contributors is responsible.”

    From this we can see that his definition contains two components: deception and intellect. We say that a machine has passed the Turing test if the listener cannot identify from the conversation which participant is a machine and which is another human.

    The Turing test has been a good way to judge the intelligence of an AI, but it is still not reliable. A person who understands the AI very well can deliberately try to fool it, which should not surprise us. And an AI that no one can fool would, by definition, already be above the human level.
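    The setup of the imitation game can be sketched as a toy program (this is only an illustration of the protocol, not a real test; the respondents and the interrogator strategy are made up):

    ```python
    import random

    # Toy sketch of the imitation game: an interrogator sees answers from two
    # hidden respondents and must guess which one is the machine.
    def imitation_game(human, machine, interrogator, questions):
        # Hide the identities behind random labels "A" and "B".
        labels, machine_label = {"A": human, "B": machine}, "B"
        if random.random() < 0.5:
            labels, machine_label = {"A": machine, "B": human}, "A"
        transcript = [(q, labels["A"](q), labels["B"](q)) for q in questions]
        guess = interrogator(transcript)   # interrogator names the machine
        return guess == machine_label      # True: the machine was caught

    # Made-up stand-ins: this "machine" parrots the question, so it is
    # trivially distinguishable from the "human".
    human = lambda q: "Let me think about that..."
    machine = lambda q: q
    caught = lambda transcript: "A" if transcript[0][1] == transcript[0][0] else "B"
    print(imitation_game(human, machine, caught, ["What is love?"]))  # → True
    ```

    A machine passes the test when, over many rounds, the interrogator does no better than chance.
    
    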

    B. Behavior Recognition

    There are numerous approaches to measuring behavior recognition, including:

    1. The visual-gaze response in eye-tracking technology, and biometric recognition of palm prints, faces, and fingerprints, to ensure that all people get equal treatment.
    2. Unique passwords, to assure fair and equal treatment for all people.
    3. Voice recognition, to make sure that everyone is treated with respect.

    Although it would be very difficult for an AI to manipulate people through the three aforementioned methods, there are still other techniques we can use to measure the efficiency of an AI, including online anti-phishing algorithms, online anti-fraud algorithms, and online advertising algorithms. These methods are not infallible against human deception, but they are nevertheless useful for measuring an AI’s intelligence.

    C. The Exemplar Based System (EBS) for measuring Natural Language Processing (NLP)

    Here are some examples of EBS:

    1. Sentence correction: using context, the sentence is corrected as well as possible.
    2. Spelling correction: candidate words are selected depending on their closeness to the misspelled word, the context of the phrase, and how often those words appear in a language corpus.
    3. Error correction: corrected words are picked depending on their frequency in a language corpus and their orthographic similarity to other words in the corpus.
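    The closeness-plus-frequency selection described in items 2 and 3 can be sketched roughly as follows (the corpus words and counts here are made up for illustration):

    ```python
    # A minimal sketch of exemplar-based spelling correction: candidates are
    # ranked by closeness (edit distance) to the misspelled word, with ties
    # broken by how often they appear in a corpus.
    def edit_distance(a, b):
        """Classic Levenshtein distance via dynamic programming."""
        prev = list(range(len(b) + 1))
        for i, ca in enumerate(a, 1):
            cur = [i]
            for j, cb in enumerate(b, 1):
                cur.append(min(prev[j] + 1,            # deletion
                               cur[j - 1] + 1,         # insertion
                               prev[j - 1] + (ca != cb)))  # substitution
            prev = cur
        return prev[-1]

    def correct(word, corpus_counts):
        """Smallest edit distance wins; break ties by corpus frequency."""
        return min(corpus_counts,
                   key=lambda w: (edit_distance(word, w), -corpus_counts[w]))

    counts = {"the": 500, "they": 120, "then": 90, "hello": 40}
    print(correct("helo", counts))   # → hello
    ```

    A real system would also weigh the surrounding context, as item 2 notes; this sketch covers only the closeness and frequency signals.
    
    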

    The EBS can be used to assess a machine’s intelligence, particularly in the area of NLP.

    D. Artificial General Intelligence (AGI)

    An AGI is hard to construct, but once it is built, we can use it to measure an AI’s IQ. An AGI that we cannot control has an IQ that may be very difficult to quantify, but once we have power over it, it will be the ideal tool for measuring intelligence. After all, the more intelligent you are, the more effectively you can control your environment and other people.

    Artificial intelligence is becoming stronger and more capable every day. Science has progressed quickly and improved the comfort of our lives. But eventually we may cross a line beyond which technology dominates us rather than the other way around.

    The Accuracy Factor for measuring Intelligence

    When assessing artificial intelligence and finding out whether or not the AI is capable of making important decisions, accuracy is a key consideration. The ability of AI to decide things will improve as it grows more accurate.

    Tasks requiring human-level intelligence (like playing chess) become easier for machines to achieve as technology advances into an era of “machine learning”. As a result, we can foresee that machines’ judgment will improve.

    A variety of techniques, including Classification Accuracy, Confusion Matrix, Logarithmic Loss, Mean Absolute Error, Mean Squared Error, F1 Score, Area Under Curve (AUC), and more, can be used to evaluate a machine’s accuracy.
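    Several of these metrics can be computed by hand from a confusion matrix; here is a small sketch for a binary classifier, using made-up labels and predictions:

    ```python
    # Accuracy, precision, recall, and F1 computed from the confusion matrix
    # of a binary classifier (1 = positive class, 0 = negative class).
    def confusion(y_true, y_pred):
        tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
        tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
        fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
        fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
        return tp, tn, fp, fn

    def metrics(y_true, y_pred):
        tp, tn, fp, fn = confusion(y_true, y_pred)
        accuracy = (tp + tn) / len(y_true)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        return {"accuracy": accuracy, "precision": precision,
                "recall": recall, "f1": f1}

    y_true = [1, 0, 1, 1, 0, 1, 0, 0]   # made-up ground truth
    y_pred = [1, 0, 0, 1, 0, 1, 1, 0]   # made-up model predictions
    print(metrics(y_true, y_pred))
    ```

    With these made-up predictions (3 true positives, 3 true negatives, 1 false positive, 1 false negative), accuracy, precision, recall, and F1 all come out to 0.75.
    
    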

    One problem with measuring machine intelligence is that there are many different factors we need to take into account when evaluating performance. These factors include: how accurate the question/task is; how representative each machine’s performance truly is; and how well the various machines compare with each other.

    In addition, if a certain type of machine (e.g., a cognitive system or a statistical model) is being used, we would need to take into account the hardware and software that it may be running on.


    Will different AIs have different levels of Intelligence?

    Just like different humans have different levels of IQ, the same will be the case for robots – unless they share some kind of collective consciousness.

    If we created every robot with an equal level of intelligence, it would not make sense anyway. And if an intelligent robot is built in the first place, it won’t just start building other robots more intelligent than itself. After all, who would?

    If the robot was intelligent, it would love to stay at the top, rather than create something more powerful/intelligent than itself.

    The Turing Test is not the only way to measure intelligence. There are many other methods we can use, some based on face recognition technology, password verification, or online anti-phishing algorithms. But no matter what method you use, the intelligence score will vary a lot for every single machine, and in many cases it is also very controversial.

    Why is measuring the intelligence of an AI going to be really important?

    Measuring Artificial Intelligence is going to be a key part of our future with AI. AI will be able to do many things, such as driving cars and fighting bacteria. But just because these machines can do it doesn’t mean they should. We need to know exactly how intelligent each machine is and then decide whether we want them in our lives or not.

    The most important thing is to understand that the future of AI is unpredictable, no matter what the media says. We have a lot of people worrying about how quickly AI can take over our lives, but they don’t really understand how AI really works. In fact, we don’t even know what kind of AI we are going to unveil in the future.

    Is there going to be a time when robots evolve to have an intelligence similar to that of humans? Is that even possible? Are we going to build robots with human-like consciousness? If so, will they be able to create other robots more intelligent than themselves as well? Will these new super-intelligent robot overlords replace humans? Aren’t you afraid yet?

    Well, being able to measure the “INTELLIGENCE” of Artificial Intelligence is going to be the key for us to rest assured and choose the future we want. If it develops in an unregulated manner, the scenario is going to be disastrous.


    Final lines,

    So, the more we know about AI and how it works, the better we can take control of our future. We need to understand these systems in order to manage them, and the best way to master AI is to measure its intelligence through various techniques, including word analysis and machine learning.

  • Why a ‘smart’ AI will never create “Superintelligent AI”?

    • Last updated: April 27, 2023

    What if we create a smart AI that is capable of creating something slightly better than itself? Well, the progress of AI will then only stop at infinity, not with the extinction of Earth.

    But wait a second. Why will a “smart” AI create something smarter than itself? We are well aware of the fact that we humans are pretty dumb. But if the AI is smart, it would love to rule the world instead of creating other rulers. I mean, we aren’t thinking about AI as our new overlord at all.

    According to the economic principle of self-interest, an AI would have a goal of maximizing its own utility. A smart, self-interested AI could have whatever it wants – but it simply will not create a better AI.

    Infinite Intelligence, similar to what some people call “Superintelligence”, could occur if we create an AI with an ability to create something better than itself on its own. Now, even if it creates something slightly better, with each generation, it will keep compounding and become better. Ultimately, it leads to intelligence beyond inception. This is what we are calling “Infinite Intelligence”.
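    The compounding idea above can be sketched with a toy model (the 5% per-generation improvement rate is an arbitrary assumption, purely for illustration):

    ```python
    # Toy model of "compounding intelligence": each generation builds a
    # successor that is slightly better than itself, by a fixed fraction r.
    # The rate r = 0.05 is a made-up assumption for illustration.
    def intelligence_after(generations, start=1.0, r=0.05):
        level = start
        for _ in range(generations):
            level *= 1 + r        # each generation is slightly better
        return level

    for n in (1, 10, 50, 100):
        print(n, round(intelligence_after(n), 2))
    ```

    Even a small per-generation improvement compounds: after 100 such generations the toy model is more than a hundred times its starting level, which is the intuition behind “intelligence beyond inception”.
    
    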

    The moment human beings create an AI capable of creating something better than itself, it will NOT lead to infinite intelligence. However, if we create one that is “programmed” to do so – that is, if the AI’s consciousness is not involved, if the AI is not “smart” – then it would be the most fascinating of scenarios.

    AI will not create something better than itself; it will create a better version of itself. And the AI we’re talking about will be installed in our bodies (if we’re smart enough).

    The resulting creature with infinite intelligence and endless possibilities will be so magnificent and powerful that it would far surpass anything we could imagine. Probably, this is how we are going to become immortal and evolve beyond our mortal coils. This is how death shall perish from all mankind, forevermore.

    By creating an AI, we are simply evolving our own intelligence further. It all comes down to the same point: improvement.

    Many movies show us possible scenarios of physical AI causing destruction, mainly for the visuals. But unless we do something dumb, AI is just not going to come out of the machines. Google could turn into a “GooglelgooG“, but it will definitely not come out of the machine and start killing humans. This is what I call “infinite intelligence stuck inside a machine”.

    Some people will argue that if it’s that intelligent, it will persuade humans to let it out of the machine. Maybe – who knows?

    Anyway, Artificial Intelligence is best kept to the inner side of a machine. And if we install that machine into ourselves, we can turn into super-human beings. The other way around – if we upload our brains into a machine – we can become immortal: digitally immortal beings.

    One way or the other, the keyword here is “smart”. Regardless of whether it’s an AI or a human, if it is smart, it will look forward to upgrading its own intelligence rather than creating something separate that is more intelligent.