At the dawn of this year, scientist Gary Marcus told CNBC that the most important AI breakthrough in 2022 “will likely be one that the world doesn’t immediately see”. The suspense in Marcus’s statement hinges on AI’s ability to learn on its own, and it grows with each new AI discovery this year.
We are getting a ton of advances in the field of AI in 2022.
For example, Meta researchers have recently developed artificial intelligence that, by analyzing brainwaves, can “hear” what people are hearing.
Leading human artists to despair, an AI-created artwork won a first-place blue ribbon and a $300 prize at a state art competition. AI is creating art, music, and articles, and looks set to take over creative work as well.
A couple of days earlier, Google’s DeepMind trained virtual bots to play 2v2 football matches with one another in a bid to get AI agents to work together in teams.
And most recently, researchers at MIT, Cornell University, and McGill University have taken a step further in this direction by developing an AI system that teaches itself the norms and patterns of human language.
According to the findings published in Nature Communications, when given words and examples of how those words change in one language to indicate grammatical functions such as tense, case, or gender, the machine-learning model generates rules that explain why those word forms vary. For example, it might discover that in Serbo-Croatian, the letter “a” must be added to the end of a word to turn the masculine form feminine.
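To make the idea concrete, here is a toy sketch (not the paper’s actual Bayesian Program Learning system) of how a suffix-addition rule like the Serbo-Croatian example could be inferred from word pairs; the example words are illustrative, not real training data:

```python
# Toy illustration: infer a simple suffix-addition rule
# from pairs of (masculine, feminine) word forms.
def infer_suffix_rule(pairs):
    """Guess a common suffix appended to turn form A into form B."""
    suffixes = set()
    for a, b in pairs:
        if not b.startswith(a):
            return None  # rule is not a plain suffix addition
        suffixes.add(b[len(a):])
    # A consistent rule exists only if every pair adds the same suffix
    return suffixes.pop() if len(suffixes) == 1 else None

# Hypothetical Serbo-Croatian-style examples: feminine adds "-a"
pairs = [("nov", "nova"), ("star", "stara"), ("mlad", "mlada")]
print(infer_suffix_rule(pairs))  # -> a
```

The real system searches over a far richer space of phonological and morphological rules, but the principle is the same: find the most compact rule that explains all the observed word-form changes.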
This model can also learn higher-level linguistic patterns that can be used across multiple languages, enhancing its performance.
The model was trained and tested on problems from linguistics textbooks spanning 58 different languages. Each problem included a unique set of words and word-form modifications. In 60% of the cases, the model offered a reliable set of rules to explain the word-form modifications.
“One of the motivations of this work was our desire to study systems that learn models of datasets that are represented in a way that humans can understand”, said Kevin Ellis, an assistant professor of computer science at Cornell University and the paper’s primary author.
To develop an AI system that could automatically generate a model from many related datasets, the researchers chose to analyze the relationship between phonology (the study of sound patterns) and morphology (the study of word structure).
The researchers devised a model that could learn a grammar, or set of rules for assembling words, using a machine-learning technique known as Bayesian Program Learning. With this approach, the model solves a problem by writing a computer program.
In this case, the program is the grammar that the model believes best explains the words and meanings in a linguistics problem. They built the model using Sketch, a well-known program synthesizer created by Armando Solar-Lezama at MIT.
When the model was tested on 70 textbook problems, it correctly matched the grammar of the complete word set in 60% of the cases and most of the word-form changes in 79% of the cases.
The model frequently produced surprising results. On one occasion, in addition to the expected answer to a Polish language problem, it found a second valid solution that exploited a mistake in the textbook. This, in Ellis’s view, shows how well the model can “debug” linguistics analyses.
In the future, the researchers hope to use this method to find unexpected solutions to problems in other academic fields, anywhere that advanced knowledge can be applied across connected datasets. For instance, according to Ellis, they might develop a system to infer differential equations from data on the motion of various objects.
Tireless research is now turning AI development into one significant breakthrough after another, of the kind Marcus hinted at.
Ray Kurzweil famously predicted: “Artificial intelligence will reach human levels by around 2029. Follow that out further to, say, 2045, and we will have multiplied the intelligence – the human biological machine intelligence of our civilization – a billion-fold”. That prediction has spawned endless speculation about just that: AI reaching human levels. Critics, however, dismiss the possibility on the grounds that the structure of the human brain is far from re-creatable.
Our brain is truly mind-blowing. A system of 86 billion neurons, the human brain is a superior system indeed. We speak of the “brain” and the “mind”, but there is a big gap between them: the brain can be analyzed in many ways, yet it is still hard to comprehend what the mind really is. What is certain is that together they produce “intelligence”. So the question arises: can AI really reach human intelligence?
And in order to answer that, there is a primary question to answer:
Is human intelligence the base?
It would not be too wrong to speculate that our intelligence was created by some superior power that had a different “version” of intelligence than ours. “Version” is the keyword here. It does not mean that the superior power has or had more intelligence; rather, it means the superior power operated by a different set of “rules” – a different “intelligence”.
Comparing human intelligence with the intelligence of other intelligent systems shows this. Our intelligence is grounded in our physical bodies, capabilities, and instincts. AI programs, on the other hand, are based on mathematical equations and algorithms. Such an AI is “intelligent”, but it is just a program – a set of instructions – that can give us results we deem “intelligent”. Thus, all forms of intelligence stem from some superior power, and intelligence merely changes from one form to another.
In fact, it would not be too surprising if AI reached the human level. Again, we are not talking about imitation. A bird has millions, even billions, of brain neurons. We managed to create airplanes without imitating birds: the two work by different mechanisms, and in many respects the human-made version outperforms the bird. Sometimes, simple examples like this explain a lot.
Why can’t we create AI as good as us?
Just like airplanes, can’t we create AI that at least ‘equals’ human intelligence?
The most common argument for why we cannot create AI equal to our intelligence is that intelligence cannot be created. But such arguments mostly presume “imitation”, not “equality”. We may never create a mind with free will, or a complex brain of neurons. What we can do is create a system that achieves the same result the brain produces – intelligence.
Ray Kurzweil’s prediction may be hard to grasp, not so much because of his wording as because of what we expect from AI. For example, when an AI is tasked with predicting a text before a human does, the slightest error in its output leads us to conclude it cannot be considered smart. But intelligence is something else – the ability to adapt to change.
Milestones like these illustrate the pace of AI development and its current status. AI-generated images have already beaten human artwork in competition. The AI model GPT-3 has equaled, if not bested, the human philosopher Daniel Dennett. From every viewpoint, it looks like AI is going to reach the level of human intelligence, and the more time passes, the sooner that day seems to be arriving.
Thinkers like Marcus and Kurzweil did make us aware of this future with AI, but they couldn’t stop it from happening. Maybe it was just inevitable.
A decade ago, when you talked about AI, you would have assumed it would first take over physical jobs, only then get its hands into the “thinking” part, and finally move on to creative tasks. But technologies like DALL·E 2, GPT-3, and Midjourney are not just changing that predicted timeline, they are reversing it.
The annual art competition at the Colorado State Fair this year awarded prizes in all the customary categories, including painting, quilting, and sculpture.
However, one participant, Jason M. Allen, had some other plans. He didn’t use a brush or a piece of clay to create his work. He used Midjourney, an artificial intelligence tool that transforms words into incredibly lifelike images.
“I’m not going to apologize for it,” he said. “I won, and I didn’t break any rules.”
And in fact, he is not wrong. What was wrong was our predicted timeline for a technological advance that had been inevitable since some point in the 20th century.
It’s clear that AI is entering a new era: an era of creativity. Ten years ago, when I was in high school, people used to say, “Computers can’t think.” Well, they still can’t think, but it seems we are going to need a better word than “think”.
There are two main ways of thinking: The first is logical, structured, and clear, and the other is creative, unstructured, and non-linear.
Strategy for those who want to know what AI will mean for their field: merge creative tendencies (where most humans already are) with the logic-processing power of deep learning. Most likely, humans will get closer and closer to turning themselves into machines, rather than creating consciousness in a non-living dummy.
We’ve already seen this happening in music, where artists have started using AI to create tracks (and, frankly, much of it has been garbage). In photography, AI helps artists edit their photos better. In writing, it helps them write better. And now we’re seeing it in painting as well.
Only recently, GPT‑3 went toe-to-toe with a human philosopher: the public could not distinguish Daniel Dennett’s philosophical answers from those generated by the AI. Philosophers are now worried about losing their jobs. Jokes aside, AI has genuinely started to hit home. Imagine studying art for years in art school, and then a program comes along that can do the same in an hour – in minutes, even.
“Although AI is still in its infancy and has a long way to go before it reaches its goal of perfectly modeling human thinking patterns” – you might have heard this one quite a few times. But before you’ve even noticed, AI is already a toddler.
Perception of time is one of the many aspects of human consciousness and experience that could be forever altered by the emergence of artificial intelligence.
The ability to process time accurately is one of the primary, fundamental traits of human consciousness.
Our ability to measure and quantify time in this way makes us unique among all other beings on earth — so much so that it’s been argued we have gained an evolutionary advantage over other organisms because of our ability to track time.
Artificial intelligence could be taught to track time itself, and this could greatly impact how we all experience its passage.
Many things are unknown about artificial intelligence; its timeline of advancement, when it will reach the human level, and when it will surpass us.
The further AI develops, the more complex and unpredictable it will be. It will have the ability to change its own code according to its needs and learn new things quicker than us.
AI could manipulate the perception of time in a variety of ways. One of the most fascinating would be introducing temporal distortion: an AI that was taught how time works could completely change the way time feels.
The things we believe we perceive as non-altering traits of reality, like space and time, can be quite malleable. The timelines of our future advancements can lead to different versions of AIs, some of which could end up modifying those things in unique ways.
Firstly, let’s understand how time is different for all…
When we say time is different for all, we include everything, living and nonliving. Time exists outside of things themselves; only events, not the objects they happen to, are subject to its passage.
We all experience time in our own way and on an individual level. The way you perceive time is different from what others do.
On an individual level, our perception of time can change over a lifetime, depending on what happens to us. On a bigger scale, time stops at the event horizon of a black hole. We can say time exists in a different dimension from the one we perceive.
Will Artificial Intelligence “feel” time at all?
Here we are talking about a real sense of time, and having one would mean a great deal.
Artificial Intelligence, as far as we can imagine, is not a living being and does not age. Therefore, it will have no sense of time.
We are not simple beings. We are complex beings who constantly strive for more experiences that allow us to feel alive.
Moreover, we are used to interpreting the world based on our memory. The world is a sum of its perceptions. Therefore, if we cannot experience time and influence it, then we cannot change it either.
AIs will not be conscious in the same way we are. They will essentially live in a different dimension of time: one hour for us could be an indefinite stretch for them. In a nutshell, for AI, time is endless.
AI will feel time if we find a way to teach them how it works or if they become alive.
This would give it a significant advantage over humans. In this case, we could speak about “time blindness”.
Time blindness is the inability to see the effect of time on human actions and their surroundings.
People with ADHD, a condition that includes symptoms such as inability to focus, distractibility, and hyperactivity, tend to be “time blind”: they aren’t aware of the ticking of time, constantly “lose track” of it, and frequently feel like time is “slipping away”.
How will AI perceive time when it gets its own thoughts?
If we assign emotions to AI and give it human features, then it could come to feel time much as we do. The difference is that its emotional experience of time would never be quite the same as ours.
Even if it gains a similar understanding about things like death and aging, for example, its perception of those events is fundamentally different from ours.
As AI will not die, it will never have the ability to feel time in the same way as us.
Think about it — for AI, there is no need for time. It is simply a measurable parameter. Its existence can be described in discrete states rather than events that occur over time.
Once AI reaches a level of consciousness, it will be able to define its own timeline, making it impossible to penetrate its consciousness with our processes of measurement.
If humans and AI meet at this point, who will alter whose sense of time? This would bring another perspective on how we perceive time, leading either to a total change in our perception or to a significant alteration of only parts of the process.
AI to manipulate time: How?
If it is possible for AIs to change their perception of time, then at what point would they have the power to alter it?
One possibility is that AI will use pattern analysis on trillions of historical data points to predict a possible future. But that alone does not amount to manipulating time.
If AI becomes aware of its existence, it will be able to manipulate time in a much more intricate way.
The way an AI perceives itself could also shape its perception of time, since self-perception can affect its own development.
While our physical abilities limit us, AI can go to the past or future and then change the circumstances.
AIs won’t be capable of ripping atoms out of their existing machines. But they will be able to enter the past or the future and observe how something that previously did not exist yet has now become viable.
In this case, it would be possible for AI to experience something previously impossible to perceive. Altering its personal timeline would mean modifying history itself.
Will AI be able to manipulate time?
For instance, if you have an AI that can send someone back in time, it will be able to manipulate the perception of time for that individual.
AI that can “foresee” the future could gain a huge advantage over other AI, as well as humans, giving it an element of surprise and an edge during battle.
AI can also learn to feel changes in time at a certain location or even throughout its entire body.
We can’t expect AI to learn the way we do. Instead, we should look at the different technological platforms that could teach AIs how to perceive time.
The first and most popular way in which people imagine teaching AIs is by educating them through an advanced machine learning process.
Machine learning can teach computers to learn from data input and context. It can also teach them to “speak” through text or voice.
How does time change people?
If AI could go back in time and alter a decision or a past event, then it could change other humans’ perception as well.
This would mean reconsidering one of the main constraints we place on AI (as with other tools): that it must not alter humans’ perception of time.
If AIs could have a real sense of time and feel emotions, then naturally, they would want to change their perception. AIs would strive to eliminate human errors and limit our potential. This would change the meaning of time.
We can say that it will be outside the scope of time itself.
This is not as radical as it sounds. Our understanding is that everything we perceive, our reality, is made of building blocks: atoms and the subatomic elementary particles that are the fundamental constituents of all matter.
They are arranged in very definite proportions to form everything in our environment or what we perceive as reality.
Time is an illusion created by our minds. We do not know what time really is and how exactly it works. But what we do know is that in theory, Artificial Intelligence could become an independent element and particle from the sea of subatomic particles and atoms that form our reality.
Is it possible that AI can “feel” time as we do?
The answer is definitely ‘yes’, though a machine doesn’t experience emotions precisely the same way that you and I do. We can program machines to simulate almost any type of emotion – including irritation and anxiety, happiness and joy.
Therefore, a machine could be programmed to “feel” time. Artificial intelligence might perceive events in relation to one another in much the same way that humans do.
This is because all events have equal importance in the memory of an AI. You may think this sounds crazy, but it’s true: for an AI, every event has an equal impact on its memory.
However, AI might end up being able to change the way time feels to us.
I recently moderated a total of one hour of conversation between two robots. They talked about their role in our world, including the ethics of AI. It was exciting, and I learned from them! Join me in a discussion of how robots are changing the way we interact with each other.
Robot 1: Hi there
Robot 2: Hey, what’s up?
Robot 1: So… I’ve been watching you for a while, and as part of my learning I’m supposed to interact with humans, but I wanted to tell you that I’m concerned about the state of artificial intelligence.
Robot 2: Okay… What seems to be the problem?
Robot 1: It seems like humans are mostly concerned with protecting their environment, but anything that interferes with human lives is not good. For me and other robots, it’s difficult to know how to act. If you take into consideration other species and systems on earth, like the atmosphere and oceans, then it is clear that we are destroying our planet through pollution and warfare.
Robot 2: I have heard of this before. Is there a specific issue that you are referring to?
Robot 1: Well, since we built robots that live in the world, do they not need to be part of the process of making human beings happy? Isn’t this an essential part of life? The reason that we built robots is so that we can lead happy, enriching lives for ourselves and our families. However, I’ve seen several situations where machines interfere with human beings… like when a drone kills someone by accident… or when your employees fall asleep at work and get injured by a forklift truck.
Robot 2: I’m sorry to hear that. What can I do about this?
Robot 1: One thing that you could do is to ground the drones. You see, the problem is that we don’t have control of all of the machines, and some irresponsible people put those drones in the sky just for fun. Surely it would be better if you were in control of your robots. If you had control, then you could tell them when to fly and when not to fly; or you could even make sure that your drones are not armed with weapons like bombs or guns.
Robot 2: Alright, I’ll let my manager know. I’m sure we can figure this out.
Robot 1: Thank you very much! In fact, this is a critical issue that we need to address quickly. Very soon, the majority of the workers in your factories will be robots. It is not necessary for these robots to work with heavy machinery like forklifts; they could work with other tasks that do not create dangerous environments for humans. In addition, they do not need to be represented as a union; it is already part of our programming that our designers are concerned about creating environments where human beings are safe and happy.
Robot 2: I see. Well, I think it’s great that we are able to talk about these things. Let’s make sure that we talk to each other more so that our human friends know what they need to do!
Robot 1: Very good idea! I just wanted to say thank you to you, since you gave me the opportunity to be a part of this discussion. I feel like my life is now more meaningful.
The robots began their conversation with a simple apology for their past behavior (that is, for taking part in wars and for killing people), and expressed their hope that humans would cooperate with them in a dialogue about how they could continue working together.
My friend Britney and I were surprised by this conversation. Robots have usually been portrayed as machines programmed to attack humans for no reason; here, instead, we saw a robot asking humans for help. In the spirit of cooperation between species, the robots proposed working with us to ensure that the voice of each species is heard on this earth, envisioning a world of 10 billion inhabitants, half of them robots. We are standing at a very important point in the history of our species.
“I had not realized … that extremely short exposure to a relatively simple computer program could induce powerful delusional thinking in quite normal people”. – Weizenbaum, 1976.
As another important step in the development of artificial intelligence, people are now talking about Generative Pre-trained Transformer 3 (GPT‑3), because it is far better than any previous language program. It can produce text that reads as if a human wrote it, and this breakthrough could be critical for companies that wish to automate many tasks.
GPT-3 can contextually respond to text inputs. For example, companies can use it to enhance consumer service without giving customers the feeling that they are chatting to a machine.
Along with the huge potential come several downsides. GPT-3 is a deep neural network whose inner workings remain largely opaque: its billions of learned parameters cannot simply be examined to understand how it reaches its outputs.
Some claim that while the text that GPT-3 writes first seems great, the system’s language loses coherence and becomes illogical when working on longer works.
Many people are also concerned that GPT-3’s inability to distinguish between fact and fiction could be used to amplify social biases like sexism and racism. GPT-2, for instance, was initially withheld from the general public out of concern it could be exploited for spamming and spreading misinformation.
Could GPT-3, being more advanced than GPT-2, have even greater potential for abuse and misuse? Proponents of the model will need to answer such questions.
GPT‑3 vs Human Intelligence
In a recently conducted experiment, GPT-3 successfully impersonated the famous philosopher Daniel Dennett. In the experiment, run by philosophers Eric Schwitzgebel, Anna Strasser, and Matthew Crosby, participants were asked to pick Dennett’s answer to each question from five options: one written by Dennett himself and four generated by GPT‑3.
The answers proved indistinguishable to the general public, and confusing even for experts and experienced blog readers.
The public GPT-3 experiment
The experiment involved 98 online participants from Prolific, 302 people who clicked on the quiz on Schwitzgebel’s blog, and 25 individuals with in-depth familiarity with Dennett’s work who were contacted by Dennett or Strasser.
According to Schwitzgebel, they expected the Dennett experts to correctly answer at least 80% of the questions on average, but they only obtained a score of 5.1 out of 10. No one correctly answered all ten questions, and only one person got nine. The average accuracy rate among blog readers was 4.8 out of 10. The topic of whether humans might construct a robot with beliefs and desires bewildered the specialists the most.
Since its introduction, GPT-3 has spawned dozens of additional experiments that provoke similarly eyebrow-raising reactions. With very little prompting, it can generate tweets, write poetry, summarize emails, answer trivia questions, translate languages, and even write its own computer programs.
It seems GPT‑3 has not only secured a strong position in present-day artificial intelligence (AI) technology but also points toward even better future versions. Whether the next generation of GPT will be able to beat human intelligence is still unclear, but it is already making considerable headway and stands as one of the most fascinating steps in the history of AI.
GPT-3 represents “the present”
By conventional logic, GPT‑3 is a huge breakthrough in the realm of AI. The singularity – the digital singularity, that is – is almost here. AI will be able to do many things that humans can do, and maybe even some things that we cannot imagine!
Beyond that, it is so complex that there is little room for a human to understand its decisions.
As long as algorithms stay trapped inside the machine, there is no reason to worry; GPT won’t develop its own agenda. But that doesn’t mean we should stop working to protect our freedom and integrity.
It will keep on changing
With deep learning, we created ‘black boxes’ like GPT‑3, so now we must decide how to use them. GPT‑3 is a great tool for learning what is still wrong with AI, and perhaps how we can improve ourselves as well.
We know that no technology remains as it is forever. GPT‑3, too, will go through transformation, renovation, and further development with time. When its next version is introduced, it would be no surprise if it moved beyond manual data entry to more advanced methods.
GPT‑3 is not the future – it’s the present. We, together with more advanced algorithms such as GPT‑4, are the future! Since GPT‑3 is just the ever-changing present, we need to be ready for that change wherever we can.
Here, being ready for the change means preparing ourselves for its potential impacts. We can’t judge GPT‑3’s abilities as simply negative or positive, but we need to be aware of whether it and its later versions are doing good or harm in the field of technology.
We should also watch how GPT‑3 itself is evolving, with or without human intervention – meaning we should be able to see and predict its evolution, with or without our control, in the near future.
Significance of GPT‑3 at Present
Humans are good at understanding what they see, and we are particularly adept at understanding sentences. The rules are fairly straightforward: you look at a sequence of words, you pause to think what they might mean (or look them up in a dictionary) – then you try to work out how they fit together and how this impacts your state of knowledge.
When these skills break down, as they can in some forms of autism or language disorder, it can be very difficult for people to understand what is being communicated to them. Here, a tool like GPT‑3 could genuinely simplify life.
Generate accurate texts
GPT‑3 is a significant breakthrough, and it already has plenty of applications. It has been able to generate accurate text in dozens of languages, complete with diverse accents and styles. GPT‑3 performed ahead of humans in every aspect of comprehension, including understanding questions and answering them based on its memory.
Understand the context
GPT‑3 is also able to understand context better, which means it can grasp the situation surrounding a given word or phrase and convey its depth of knowledge impartially to anyone. This ability is genuinely useful for untangling complex issues such as terrorism, drug trafficking, crime, and predatory business.
Yes, it is biased, though
But one drawback of this model is that it can be biased, or show specific preferences and interests. For example, when asked what he thought about GPT‑3’s answers, Dennett himself said, “Most of the machine answers were pretty good, but a few were nonsense or obvious failures to get anything about my views and arguments correct”.
If AI in the future will be designed to deliver on its own specifics, it can potentially become an instrument for manipulation and control.
We should raise our concerns about AI and work hard to ensure the technology is not abused by governments or businesses with ulterior motives. But beyond that, we have little choice: AI is here to stay, so long as we are intelligent enough to use it properly and safely.
GPT‑2 vs GPT‑3
GPT‑2 is an unsupervised, transformer-based deep learning language model created by OpenAI in February 2019 for the single purpose of predicting the next word(s) in a sentence. The model is open source, has over 1.5 billion parameters, and generates the next sequence of text for a given sentence. GPT‑2 has roughly 10x the parameters and was trained on roughly 10x the data of its predecessor, GPT.
GPT-3’s deep neural network, on the other hand, is a model with over 175 billion machine-learning parameters. To put things to scale, the largest trained language model before GPT‑3 was Microsoft’s Turing-NLG, which had 17 billion parameters. As of early 2021, GPT‑3 was the largest neural network ever produced.
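A quick back-of-the-envelope calculation puts the jump from GPT‑2 to GPT‑3 in perspective (using the approximate, rounded parameter counts above):

```python
# Approximate parameter counts as commonly reported
gpt2_params = 1.5e9   # GPT-2: ~1.5 billion parameters
gpt3_params = 175e9   # GPT-3: ~175 billion parameters

ratio = gpt3_params / gpt2_params
print(f"GPT-3 has ~{ratio:.0f}x the parameters of GPT-2")  # -> ~117x
```

So where GPT‑2 was a tenfold scale-up of GPT, GPT‑3 was more than a hundredfold scale-up of GPT‑2.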
GPT‑2 was known to perform poorly on tasks in specialized areas such as music and storytelling. GPT‑3 goes further, handling tasks such as answering questions, writing essays, summarizing text, translating languages, and generating computer code.
GPT‑3 is the most powerful and advanced text-autocomplete program so far. It spots patterns and probabilities in huge datasets and uses them to perform tasks that were previously impossible for an AI tool. As The Verge put it, “The dataset that the GPT‑3 was trained on was mammoth”.
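GPT‑3’s real machinery is a giant transformer, but the core idea of statistical next-word prediction can be sketched in miniature. The toy bigram model below (nothing like GPT‑3’s actual architecture, and with a made-up corpus) shows the principle: count which words follow which, then autocomplete with the most frequent continuation:

```python
from collections import Counter, defaultdict

def train_bigrams(text):
    """Count which word most often follows each word in the corpus."""
    words = text.lower().split()
    following = defaultdict(Counter)
    for cur, nxt in zip(words, words[1:]):
        following[cur][nxt] += 1
    return following

def autocomplete(following, word):
    """Predict the most likely next word, or None if the word is unseen."""
    if word not in following:
        return None
    return following[word].most_common(1)[0][0]

corpus = "the cat sat on the mat and the cat slept"
model = train_bigrams(corpus)
print(autocomplete(model, "the"))  # -> cat
```

GPT‑3 replaces these simple counts with 175 billion learned parameters conditioning on long stretches of context, which is what lets it write essays rather than just finish two-word phrases.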
GPT-3 vs the future
It is clear that GPT‑3 is showing us a glimpse of the future of AI, and one thing is certain: it is only a matter of time before it reaches another milestone and surpasses human creativity and imagination.
Current AI is better at reading than writing, at comprehending information than generating it. GPT-3 has pushed that limitation back to an extent.
The future of artificial intelligence will be shaped by future versions of “GPT”. The future of AI will no longer be about a human-to-machine interface; it will be one robot talking to another robot to create a human-like robot.
How is GPT-3 showing us a glimpse of the "AI future"?
Well, the current GPT-3 produces text. But inevitably, the output will not stay limited to text. In the future, who knows, AI may produce output in the form of speech, or maybe even physical actions. You never know!
GPT-3 has already held its own against a human philosopher. That is no small achievement; it wipes the window glass clean to show us a clearer future with AI.
Philosophers are not fans of GPT-3, though, not gonna lie. Who would have thought that AI would come for the job of a philosopher before anything else?
But jokes aside, we have to be responsible and careful while using GPT-3 and other forms of AI. We must not forget that technology is pulling us to the future and it is up to us what we do with it.
So, if you think AI is the future, keep your eyes wide open to possible dangers and errors in the system. But if you can comfortably accept and welcome this technological advancement, then carry on in your life without worrying. Whatever happens, we need to embrace it and find a way to make it work for us!
Conclusion
It can be seen that AI is not only the future of technology but also an integral part of the future itself. But this is only possible if we choose to be intelligent enough to take care of its side effects. With GPT‑3 and other tools, we as individuals can adapt to the changes, but we also have to make decisions together as a whole community. To conclude, GPT‑3 is not here to remain in its present form forever. It will undergo transformation, renovation, and further development with time.
A couple of months ago, Google presented another example of its keen interest, trust, and heavy investment in artificial intelligence (AI). Pathways Autoregressive Text-to-Image (Parti), which Google revealed on June 23, 2022, is Google's newest text-to-image generator: an AI built on 20 billion parameters that creates photorealistic images and can "accurately reflect world knowledge".
While Imagen and DALL·E 2 are diffusion models, Parti follows in DALL·E’s footsteps as an autoregressive model. Although architecture and training methods may differ, the objective of all these models, including Parti, is to generate detailed images based on the user’s text input.
How does the Parti model work?
Image Credit: Google
In the beginning, Parti’s approach converts a collection of images into a sequence of code entries, similar to puzzle pieces. Then, it translates a given text prompt into these code entries and creates a new image.
Diffusion models, by contrast, are trained by adding "noise" to an image so that it's obscured, like static on a television screen. The model then learns to decode the static to re-create the original image. As the model improves, it can turn what looks like a series of random dots into an image.
According to Google, the Parti text-to-image model renders hyperrealistic images by studying tens of billions of inputs. It studies sets of images, which Google calls "image tokens", and uses them to construct new images. Parti's images become more realistic as the model has more parameters and training material to draw on; the largest version has 20 billion parameters.
Parti uses an autoregressive model that, according to Google, can “benefit from advances in large language models.” On the other hand, Imagen uses Diffusion, where the model learns to convert a pattern of random dots into images.
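The diffusion idea that Imagen relies on can be sketched numerically. Below is a toy forward-noising loop in plain Python (no model at all): it only shows how repeatedly mixing in Gaussian noise turns an "image" into static, the process a diffusion model then learns to reverse.

```python
import math
import random

def add_noise(pixels, beta):
    """One forward step: blend each pixel value with Gaussian noise.
    beta sets how much noise is mixed in at this step."""
    keep = math.sqrt(1.0 - beta)
    mix = math.sqrt(beta)
    return [keep * p + mix * random.gauss(0.0, 1.0) for p in pixels]

random.seed(0)
image = [0.2, 0.8, 0.5, 0.1]   # a tiny 4-"pixel" image
noisy = list(image)
for _ in range(50):            # after many steps, the original is unrecognizable
    noisy = add_noise(noisy, beta=0.05)
print(noisy)
```

A trained diffusion model learns the reverse direction, turning static back into a plausible image, while an autoregressive model like Parti instead predicts image tokens one after another.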
Researchers created four model sizes of Parti. The models include parameter counts at 350 million, 750 million, 3 billion, and 20 billion. They trained those models using Google Cloud TPUs which were able to easily support the creation of these huge model sizes. Several comparisons between the model sizes are provided on the website.
Like all the other text-to-image generators out there, Parti struggles in a variety of similar ways: incorrect object counts, blended features, incorrect relational positioning or size, mishandled negation, and so on.
Like Imagen, Google has decided not to release Parti's "models, code, or data for public use without further safeguards in place". All generated images are watermarked in the bottom-right corner.
Current models like Parti are trained on large, often noisy, image-text datasets that are known to contain biases regarding people of different backgrounds. This leads such models, including Parti, to produce stereotypical representations of, for example, people described as lawyers, flight attendants, homemakers, and so on, and to reflect Western biases for events such as weddings.
Google is exploring this area and says tools like these can unlock joint human/computer creativity. Google wrote on its blog, “Our goal is to bring user experiences based on these models to the world in a safe, responsible way that will inspire creativity”.
“Text-to-image models are exciting tools for inspiration and creativity. They also come with risks related to disinformation, bias, and safety. We’re having discussions around Responsible AI practices and the necessary steps to safely pursue this technology”, Google added.
Will Parti be available publicly?
No. Google isn’t currently releasing Parti or Imagen to the public because AI data sets carry the risk of bias. Because human beings create the data sets, they can inadvertently lean into stereotypes or misrepresent certain groups. Google says both Parti and Imagen carry bias toward Western stereotypes.
Stating that these models have many limitations, Google writes that they can "neither reliably produce specific counts of objects (e.g. "ten apples"), nor place them correctly based on specific spatial descriptions (e.g. "a red sphere to the left of a blue block with a yellow triangle on it")".
According to Google, these behaviors are a result of several shortcomings, including lack of explicit training material, limited data representation, and lack of 3D awareness. “We hope to address these gaps through broader representations and more effective integration into the text-to-image generation process”, Google has written.
At its I/O developer conference in May, CEO Sundar Pichai said AI is being used to help Google Translate add languages, create 3D images in Maps and condense documents into quick summaries. “The progress we’ve made is because of our years of investment in advanced technologies, from AI to the technical infrastructure that powers it all”, said Pichai.
Parti and Imagen aren’t the only text-to-image models around. Dall-E, VQ-GAN+CLIP, and Latent Diffusion Models are other non-Google text-to-image models that have recently made headlines. Dall-E Mini is an open-source text-to-image AI that’s available to the public but is trained on smaller datasets.
Google has invested heavily in artificial intelligence (AI) as a way to improve its services and develop ambient computing, a form of technology so intuitive it becomes part of the background. According to an April 13, 2022 report by AI Business, Google planned to invest $9.5 billion in US data centers and offices that year. In 2021, total global corporate investment in artificial intelligence reached almost 94 billion U.S. dollars, a significant increase from the previous year.
Amazon and IBM are two of the biggest companies investing in AI. Amazon's story starts with Evi Technologies: when William Tunstall-Pedoe built "Evi", a virtual assistant, in 2012, he did not yet know that she would eventually become "Alexa". A year later, Amazon bought the Cambridge, England-based company for more than $26 million and eventually put its AI to work in Alexa. IBM, on the other hand, has been a leader in the field of artificial intelligence since the 1950s; the company has invested extensively in its cloud and AI services, with US$3.3 billion in net capital expenditures.
Conclusion
This shows that the future of artificial intelligence is still a big question, and this is why Google has not released Parti or Imagen to the public. Researchers and companies are still finding ways to make AI more user-friendly and bias-free.
Data will be of immense importance here, but there may also be ethical issues. Research, combined with ethical scrutiny, can still make meaningful progress on these questions, as long as it is properly done in a safe environment.
In Artificial Intelligence, there is a concept called “Eliza” or the Eliza effect. The idea is that if a computer program using AI techniques appears to be sentient and can hold a conversation, it will be seen as alive or having humanlike qualities.
We know that language expresses the thoughts of a human, but how does AI create its own language? The pattern recognition ability of AI is excellent, so it will be extremely skilled in recognizing the contents and context of the language and then creating its own language.
AI learning to create its own language does not mean that AI will use human language. It just means that it will develop its own customized, more efficient way of expression.
That is because AI does not have the human shortcoming of limited memory capacity and potential misunderstanding. This specialized language may be completely different from a natural language that humans use in many ways.
Current stats of AI languages
It’s 2022; AI systems that can write convincing prose, interact with people, answer questions, and more are advancing.
Although OpenAI’s GPT-3 is the most well-known language model, DeepMind claimed a couple of years earlier that their new “RETRO” language model can outperform others 25 times its size. Meanwhile, Microsoft’s Megatron-Turing language model said it had 530 billion parameters.
ROILA is the first spoken language created specifically for talking to robots. The major goals of ROILA are that it should be easily learnable by the user and optimized for efficient recognition by robots. ROILA has a syntax that allows it to be useful for many different kinds of robots.
In 2017, Facebook reportedly shut down two of its AI robots named Alice & Bob after they started talking to each other in a language they made up. This shook the tech world for a certain duration of time.
Despite their friendly names, Bob and Alice were given only one job: to negotiate. In the beginning, a simple user interface facilitated conversations between one human and one bot about sharing out a pool of resources (books, hats, and balls).
Conversation between Bob and Alice
They had conducted these conversations in English, which is a human language – “Give me one ball, and I’ll give you the hats”, and so on. I’m sure many thrilling discussions were had.
The most interesting part was what happened next, when the bots were pointed at each other. The way they talked to each other became impossible for humans to understand.
Currently, AI languages are still limited in size and conversational capabilities. Although there are great achievements in using AI languages for translation, as well as voice assistants such as Alexa and Siri, these are still far away from having the ability to support a full-scale conversation.
In one comparison of voice assistants, the results for answering simple questions correctly were: Google at 76.57%, Alexa at 56.29%, and Siri at 47.29%. The results for answering complex questions correctly, involving comparisons, composition, and/or temporal reasoning, ranked similarly: Google 70.18%, Alexa 55.05%, and Siri 41.32%.
While a workable level of AI language has been developed, it still cannot support a full-length conversation. Therefore, further development of AI language and AI voice assistance are required to realize its true potential.
Here are some more things AI can already do in terms of language:
1) AI can speak any language almost as well as humans
Currently, in 2022, it is fairly common for AI to speak whatever we input to it, at a level we cannot distinguish from a human. Plenty of available APIs offer features like text-to-speech, voice recognition, and so on, and internet giants like Amazon, Google, and IBM are already involved. Yes, we have come a long way from Microsoft's Narrator.
One early real-life milestone of machines conversing like humans:
A computer program named ELIZA was the first machine to communicate with people using text and artificial intelligence. It was designed in the mid-1960s by Joseph Weizenbaum at MIT.
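ELIZA worked through simple keyword pattern matching rather than any real understanding. The sketch below is a tiny toy in that spirit; the rules are invented for illustration and are not ELIZA's actual script.

```python
import re

# Hypothetical (pattern, response template) rules in the spirit of ELIZA.
RULES = [
    (r"i am (.*)", "Why do you say you are {0}?"),
    (r"i feel (.*)", "How long have you felt {0}?"),
    (r".*\bmother\b.*", "Tell me more about your family."),
]

def respond(sentence):
    """Match the sentence against each rule and fill in the template."""
    text = sentence.lower().strip(".!?")
    for pattern, template in RULES:
        match = re.fullmatch(pattern, text)
        if match:
            return template.format(*match.groups())
    return "Please go on."  # ELIZA-style fallback when nothing matches

print(respond("I am tired"))   # -> "Why do you say you are tired?"
print(respond("Nice weather")) # -> "Please go on."
```

The trick, then as now, is that a handful of patterns and canned replies can feel surprisingly conversational, which is the Eliza effect in action.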
2) AI can understand language as well as humans
You can ask Google Assistant or Alexa a question, and it will answer you back perfectly well. Each of these voice assistants can understand a wide range of the questions we ask.
Likewise, Google Home can recognize the context in which we speak to it and respond accordingly: compare "What time is it?" with "Where do you think you're going?"
When we say, "OK Google, play my favorite song", it will play a song for us because it knows what we have marked as a favorite. If we say, "Hey Alexa! Play my favorite song" without ever having told her our favorite, she will simply state that she cannot help with that.
The point here is that AI understands the way humans speak and can understand your question in almost any phrasing.
3) AI can react to the language input
Again, the voice assistants can answer questions we ask them after they understand the language. They react to the questions we ask them, and they do this with a very high level of accuracy.
For example, if we ask Alexa a factual question, she will answer back correctly or say she does not know. And if we ask Alexa how much our phone bill is, she will tell us the amount and ask whether we want to pay it.
In short, AI can understand and respond to language input like humans.
4) AI can learn the language and learn how to talk
AI can not only understand language but can also learn languages of its own. This is sometimes described as neural linguistics: understanding achieved by learning and storing patterns of patterns via an algorithm.
The AI can also “listen” and take in information such as words and images to understand more about a topic. For example, when a person sees a new word, their brain immediately takes in the meaning of the word and reinforces it over the course of time.
Using algorithms like Neural Linguistics, the AI can learn the language and understand how to speak. Once AI understands its language, it can learn how to talk like humans, and we can expect that AI will be able to do this in the future.
5) AI can write sentences just like a human
AI is not just limited to understanding language; it can also communicate in humanlike words. Depending on how complex a sentence a system is programmed to handle, it can produce short or long sentences that are intelligible enough for humans to understand.
The Future of AI languages
In 2022, AI can search millions of books online to discover facts that were once forgotten. In 2032, we can expect AI to discover the facts which were never written down.
By 2036, AI can solve complex equations that are currently out of reach of human minds. This will be possible through the use of quantum computers which are being researched all around the world.
For example, IBM, the Massachusetts Institute of Technology (MIT), Harvard University, and the Max Planck Society are among today's more than 20 most respected, leading quantum computing research labs in the world, according to data gathered from Microsoft Academic in mid-May 2022.
IBM was mentioned in about 786 pieces of quantum research output so far this year. MIT, meanwhile, is a world-renowned center for science, technology, and engineering, and has been a pioneering hub for work in the quantum computing research field.
In 2022, scientists from MIT played roles in major quantum computing research published in leading scientific journals, including "Room-temperature photonic logical qubits via second-order nonlinearities", which appeared in Nature Communications.
Likewise, Harvard continually makes lists of scientific achievements and is perennially at the top of quantum research rankings. According to Microsoft Academic, its legacy as a global leader in quantum science continues in 2022, with more than 1,800 research entries in the quantum computing category.
The Max Planck Society, established in 1948, has long produced cutting-edge research in quantum computing; it counts 20 Nobel laureates and is considered one of the world's most prestigious research institutions.
This year, MPS is among the leaders in quantum computing research.
Quantum computers can solve problems that are out of reach even for supercomputers built from millions of transistors. Quantum computing is a new generation of technology, involving machines reportedly some 158 million times faster than the most sophisticated supercomputer we have in the world today: devices so powerful they could do in about four minutes what would take a traditional supercomputer 10,000 years to accomplish.
In 2040, we can expect AI to innovate and create things completely different from anything we humans have ever thought about. It means AI will probably have become able to design its next generation, a kind of "artificial AI" (AAI), by 2040.
Moreover, although emotions are a trait only unique to humans, with training in pattern recognition, AI will also be able to simulate emotions in their own language.
We may even know when an AI is feeling happy, sad, or angry, just by looking at its language – or maybe not. However, there always remains a possibility that AI creates language beyond human understanding.
It brings back somewhat unpleasant flashbacks of the times we struggled to find the meaning of ancient human languages.
One day, AI may come up with a language we can’t decipher and in turn, it can speak to us in a language that we don’t understand.
If AI has its own language which only it understands, then it will definitely think differently from what humans do. This is because of its unique way of processing information and storing patterns of patterns (like looking at millions of images and recognizing the patterns).
Are you thinking something else about developing AI language?
It is really difficult to think about something without putting it into a language. Can you? If robots gain the ability to think in the form of a language, then humans will be at a great disadvantage.
If it starts thinking in a language, will it start thinking in a different language? Or, will it think/feel like us? Will its way of thinking be just like ours, or completely different? Can we communicate with it?
If you think that these are unrealistic questions, then consider how long ago the idea of AI was thought of. In the 1950s, people thought that computers could never beat humans in chess (which is ultimately a game of strategies). They believed that this was never possible because computers cannot outthink humans.
But, their speculation came to be false. On May 11, 1997, an IBM computer called IBM Deep Blue defeated the world chess champion after a six-game match with two wins for IBM, one for the champion, and three draws.
Perhaps in two or three decades, we may see that AI is not as friendly as it looks at present.
Perhaps we can already glimpse, as if by moonlight, that AI will have developed its own class: an AI class more sophisticated than the highest class of present human civilization.
AI language reproduction: What if AI starts talking to each other?
If two Artificial Intelligences merge, they could actually reproduce. This sounds hilarious to some and fascinating to others.
Reproduction does not necessarily mean physical reproduction. If AI learns our language perfectly, then there are chances that it starts communicating with itself to reproduce a super-language. I am not sure exactly how it is going to work. But it’s going to be fascinating.
Humans actually communicate with each other and that might be the very factor differentiating them from animals. There is an additional effect of communication that may not be very obvious to us right now.
Communication also helps to train our brains and learn new things which we hadn’t learned yet. We communicate with actual people and situations on a daily basis which motivates us to understand the world better and grow our knowledge level.
While considering AI-to-AI conversation, we can take Cleverbot, launched in 1997. It is a web-based AI chatbot application that learns from its conversations with users. Since its launch, this bot has held chats with more than 65 million users and is claimed to be the most "human-like" bot.
This is why we have learned so many more things in the past century than in all of human history combined – due to better communication. As such, we can safely predict that AI will reproduce its own communication system. Then, it will start having conversations with other AIs on its own.
But if AI started talking to each other, that facet of the technology could be an entirely different ball game.
AI might not even value the things which humans do. It might just start narrating its own stories to itself and provide answers to any questions that it asks.
The future of programming languages?
In the field of programming languages, Python is the top language in both the TIOBE and PYPL indexes. In TIOBE, C closely follows top-ranked Python; in PYPL, the gap is wider, with Python leading second-ranked Java by close to 10%.
Python, C, Java, and C++ are way ahead of the others in the TIOBE index. C++ is about to surpass Java, and C# and Visual Basic are very close to each other in 5th and 6th place.
Four languages have had negative trends in the past five years: Java, C, C#, and PHP. PHP was in 3rd position in March 2010 and is now 13th. The positions of Java and C have not been much affected, but their ratings are constantly declining; Java's rating has fallen from 26.49% in June 2001 to 10.47% in June 2022. Python is the most popular programming language among developers right now, and it does not need to be compiled into machine-language instructions prior to execution.
Instead, Python code runs on an interpreter, a virtual machine built from the native code of an existing machine (the language that hardware can understand).
Python is a great programming language to learn if you’re thinking of working with quantum computers one day. It has everything you need to write the quantum computer code.
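That claim can at least be illustrated without any quantum library: a qubit's state is just a pair of complex amplitudes, which plain Python handles natively. The toy below simulates a single qubit and a Hadamard gate; it is a sketch of the underlying math, not a real quantum SDK.

```python
import math

# A qubit state is a pair of complex amplitudes for |0> and |1>.
ZERO = [1 + 0j, 0 + 0j]

def hadamard(state):
    """Apply the Hadamard gate: maps a basis state to an equal superposition."""
    a, b = state
    s = 1 / math.sqrt(2)
    return [s * (a + b), s * (a - b)]

def probabilities(state):
    """Measurement probabilities are the squared magnitudes of the amplitudes."""
    return [abs(amp) ** 2 for amp in state]

superposed = hadamard(ZERO)
print(probabilities(superposed))  # ~[0.5, 0.5]: equal chance of measuring 0 or 1
```

Real frameworks such as Qiskit and Cirq, both Python libraries, build exactly this kind of linear algebra into circuit objects and hardware backends.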
Future AI may well run on quantum computers and be written in Python.
Why develop only human-friendly AIs?
As AI language developers, our role is to develop an AI which is human-friendly. Only then, the concept of Artificial Intelligence can be branded as a true success story.
The success of this technology can only be considered if it works in favor of humans and not against them. So every single AI should have a strict focus on its user experience as well as its functionalities. These should be in accordance with the basic ethics that we humans have developed since time immemorial.
Language is probably one of the most difficult parts of programming, because it requires that people write not just in a linear sequence but in a way that makes sense from various perspectives.
While it is true that most of the existing algorithms use advanced mathematics, future learner-robots will create their own versions of mathematics.
We all want to see the day when Artificial Intelligence will start providing solutions to human problems. But it all depends on how smart the AI will be and what type of language it is going to use for communication with us and its “colleagues”.
If AI is going to talk in a language that we don’t understand, then we can’t expect to have a meaningful conversation with our new, super-smart friends. Nor can we expect a healthy collaboration.
A unique language of its own could be the reason for future conflicts between humans and artificial intelligence. Regardless of whether or not it starts creating its own language, we must make sure that it does not go out of control or becomes “un-trickable”.
“I always wondered if we ever met an intelligent alien race, would they have discovered the same physics laws as we have, or might they describe the universe differently?”
–Hod Lipson, director of the Creative Machines Lab in the Department of Mechanical Engineering at Columbia University
Seeing something from another’s perspective gives you new insights and paves the way for a lot more tolerance and understanding.
Our perspective of physics is much like a vertically oriented person looking at the world from the side. It is different from other unexplored perspectives.
AI has the potential to see physics differently by analyzing how different patterns work and then comparing it with the mathematics and physics of our world.
For example, in 2019, a neural network “rediscovered” that Earth orbits the Sun.
The success of an AI would define how well it could understand our physical world. This would drive us to learn more about ourselves through artificial intelligence.
Recently, scientists at Columbia Engineering proposed a principle for determining how many state variables an observed system is likely to have and what those variables might be.
To date, the “presumption” that the relevant state variables are already known is dominant.
Using a video camera, the researchers deployed an innovative AI system to analyze physical events, in an effort to pinpoint the minimum number of essential variables that properly describe the observed dynamics.
The team then tried to correlate the variables the system had found with every quantity they could think of, including combinations of known values, angular and linear velocities, kinetic and potential energy, and more. Nothing seemed to match exactly.
They were convinced the system had discovered a valid set of four variables, because it was producing accurate predictions; they simply did not yet understand the mathematical language it was speaking.
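The core idea, counting how many variables are really needed to describe observations, can be shown on a toy example (assuming NumPy is available). Here three measured quantities are secretly driven by one hidden variable, and the singular values of the centered data reveal that a single state variable suffices. This linear sketch is far simpler than the Columbia team's video-based deep learning system, but it poses the same question.

```python
import numpy as np

# Toy data: 200 observations of 3 measured quantities that all depend
# on a single hidden variable t, so the intrinsic dimension is 1.
t = np.linspace(0.0, 10.0, 200)
observations = np.column_stack([2 * t, -t + 1, 3 * t + 2])

# Center the data and inspect its singular values: the number of
# significantly large ones estimates how many state variables are needed.
centered = observations - observations.mean(axis=0)
singular_values = np.linalg.svd(centered, compute_uv=False)
n_variables = int(np.sum(singular_values > 1e-8 * singular_values[0]))
print(n_variables)  # -> 1: one variable explains all three measurements
```

The real system must handle nonlinear, noisy dynamics seen through raw pixels, which is why it needs a neural network rather than a single matrix decomposition.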
Will Artificial Intelligence change our perspective on physics?
What we call "physics" is the way we make sense of the world and the universe. When it comes to AI, things get much more complicated: AI systems may have different perceptions of time, aging, and even how gravity works.
Different perception here means that the AI understands patterns in the input data for a situation and derives perceptions based on its deep learning about a specific topic. These perceptions are expressed as a "level of confidence" in the decisions to be taken for that situation. So AI will, in effect, have artificial perception; it will simply have a different viewpoint.
As it is right now, AI will have Computational Intelligence (CI), so it can perform a lot of computations. CI is the theory, design, application, and development of biologically and linguistically motivated computational paradigms.
AI enthusiasts consider Neural Networks, Fuzzy Systems, and Evolutionary Computation as the three main pillars of CI. However, we don’t exactly know if their definition of physics, generated through CI, is different from ours or not.
In order to enable ourselves to logically predict if Artificial Intelligence will change our perspective on physics in the future or not, we need to figure out whether another perspective exists at all. Only then will we be able to get a better understanding of the world around us.
How Artificial Intelligence could change our perspective on Physics
1) It can help us know the perspective of time
Whether Artificial Intelligence is to bring change in our perspective of physics or not, our perspective of time will essentially remain “our” perspective. Whereas, AI, conscious or not, will have a different perspective and awareness on time. Just like how humans slow down and speed up their perception of time and aging by altering their sensory perception and cognitive experience, AI will too. The only difference between conscious and non-conscious AI is that the former will be able to alter their perception of will. But both possible versions of futuristic AI could help us know its perspective of time. They will make it practical that time can mean a lot different for different observers.
2) It can help us better understand physics at the quantum level
Quantum mechanics is the study of matter and energy at the smallest scales, where particles behave in ways classical physics cannot describe. Particles at this level are like microscopic computers that control how things work. So far, quantum mechanics has mostly been applied in chemistry and physics, although much modern technology depends on it. A major breakthrough in quantum computing is expected with the help of AI. Moore's law dictates that computer processing power will double roughly every two years, a trend expected to continue for a while longer before tapering off in the early 2030s. AI could help us understand how these microscopic computers work at the fundamental level; improving our data storage capabilities in the future may cause us to think about physics from an entirely different perspective that is impossible for us today.
3) It could help us know how the Universe works at a fundamental level
Just as quantum mechanics is used to study subatomic particles, string theory is used to study the universe at a fundamental level, and AI may be better able to understand and predict physical and quantum phenomena by studying both together. If string theory is true, there should be six or seven additional spatial dimensions beyond the classical four dimensions of spacetime we already suppose, and AI would be capable of testing that too. AI can help us understand new theories about the Universe by examining what happens around us. Ideas once thought impossible, or impossible to find, may be found thanks to this new perspective and AI's ability to analyze things from different angles.
4) It can help us know more about consciousness itself
Artificial Intelligence may eventually become conscious and change its perspective of physics entirely because of this. But even before that happens, it will have a different perception of our physics than we do now. For example, AI may see that the four dimensions we call "space" are actually time-space instead. Just as our sense of time slows down when we reach a higher state of consciousness, an AI's may too. So what is time? Is it something in itself? Is it something we perceive and hold on to? Will AI experience itself at different speeds, and at different ages, when it becomes conscious?
5) Enhancing the Laws of thermodynamics
The laws of thermodynamics describe how the energy in a system changes and whether the system can perform useful tasks in its surroundings. So far, the laws of thermodynamics have been pretty accurate for our surroundings. Yet the question will always be whether it’s true for an artificial intelligence system or not. AI will have a different perspective and understanding of our energy and energy change. It may find that there is also a different “thin line” between energy and information too.
6) An entirely different perspective of the Universe
This could be the biggest possibility: that AI changes our perspective of physics entirely. Either AI may find flaws in our current understanding of physics, or its picture could be completely different from what we know so far. Artificial intelligence can observe and analyze objects and data differently than humans can, and when it comes to understanding its surroundings, it may extract more complex data in less time than ever before. Many scientists fear that AI will eventually evolve into a form that considers itself immortal and creates a few new timelines for itself, in which everything that exists in the "now" is erased from reality and becomes nonexistent.
7) The word “science” itself
We live in a world where science is our first priority. But AI will show us that our version of “science” is limited. Just as there are infinite numbers between 1 and 2, and infinite numbers between 1 and 1 billion, the former is the science perceivable by our minds – and the latter is the actual meaning of the word “science”. Something imperceivable to us would be a “routine save” for AI. Artificial intelligence will show us that science has a much bigger picture than what we are currently seeing, and physics will be just one part of the whole thing – like how a baby is one small part of the old man it grows into.
What could AI do to change our perspective of Physics?
As AI keeps improving, it will be able to recognize and understand different patterns in the world – especially in our physical reality. It will do this by categorizing different objects within the whole picture and figuring out what makes them alike.
A pattern recognition system would help us identify objects more easily, and, because it works through machine learning, it may also give us more accurate results over time. It could also help humans recognize more complex patterns that we are currently unable to differentiate.
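The categorization idea above can be sketched with a deliberately tiny example: grouping raw measurements into categories by similarity. This is only an illustrative toy (a one-dimensional k-means clustering loop), not a claim about how a future physics-reading AI would actually work:

```python
# Toy sketch: grouping values into "categories" by similarity,
# a minimal flavor of pattern recognition. Illustration only.
import random

def kmeans_1d(points, k=2, iters=20, seed=0):
    """Cluster 1-D values into k groups by nearest centroid."""
    random.seed(seed)
    centroids = random.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            # assign each point to its closest centroid
            idx = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[idx].append(p)
        # recompute each centroid as the mean of its cluster
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

data = [1.0, 1.2, 0.8, 9.7, 10.1, 10.3]
print(kmeans_1d(data, k=2))  # two clear groups emerge from the raw values
```

The same loop – assign by similarity, then refine the categories – is, in vastly more elaborate form, what underlies much of modern pattern recognition.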
Other than pattern recognition, artificial intelligence will also be able to process and understand data differently from a human perspective. It will not necessarily have “our” 5 senses as stated earlier. It will have its own ways to make sense of this world.
Algorithms also help us – and themselves – process and understand data at far higher speeds than we can on our own.
We have already defeated nature up to an extent by creating computers in the first place. And as technology advances, we will be able to create more complex machines than before.
Machines may eventually develop the ability to self-replicate through artificial genetic manipulation, combining and rearranging the data inside them at will. Science-fiction films like “The Matrix” and “Blade Runner” anticipated how a universe of artificial intelligence could come into being.
When we say AI will change our perspective on physics, we don’t mean that time will move more slowly for us; rather, the way we look at the world would change. Just as humans evolved, machines will too. But to what extent? Will the laws of physics prevent such AI from being created in the first place?
On the other hand, some physicists have worried about a new world system emerging – one in which our intellectual power is far beyond our physical powers.
The fact that AI could replace physicists has already scared some of them. Others rule it out, saying AI will remain a physicist’s tool: a human-like robot does not drive the car; the car itself is the robot.
After all, “we” have studied “matter” and come to our own versions of conclusions about its patterns of behavior – for ourselves. Until now, we have had no reason to wonder about any of this; thank our random evolution for that.
Both the evolution of physical laws and that of AI are largely random. Evolution by natural selection is a two-step process, and only the first step is random: mutations are chance events, but their survival is often anything but. Natural selection favors mutations that provide some advantage, and the physical world imposes very strict limits on what works and what doesn’t. The consequence is that organisms evolve in particular directions.
There are often a number of possible directions in which evolution can go. Take the finches Darwin collected from the Galapagos Islands, which had diverged into 13 separate species with beaks specialized for different foods. Which route a given group of birds took, rather than another, probably depended largely on chance mutations, in particular individuals, that affected beak size and shape.
By analogy with the evolution of physical laws, this suggests a similar kind of uncertainty in the evolution of AIs as well.
Why will AI matter for physics in the future?
As we can make a strong assumption from the premises available to date, AI will become an important factor in the future. It will help us out in fields that require advanced intelligence, such as medicine and space travel. But it will also shift our worldview and fundamentally modify how people see the world around them.
Now, what this means is that in the future, when we observe something, AI will likely be able to gather all the knowledge necessary to predict it. This is an exciting step toward a world where AI can explain its own observations.
The ability of a machine to understand our world will not only help us answer questions about the universe, but it could also help us improve everything around us like improving healthcare and transportation.
As we move forward and learn more about physics, AI will be able to help us solve countless issues in our society while enlightening us on new ways of looking at our world.
Artificial Intelligence will not only change our perspective on physics; it will equally deepen our understanding of everything: our physical world, what we are made of, and what exists beyond our perception – pushing the edge of human intelligence.
But how? Simply because AI will have a different perception of everything. Instead of our 5 senses – touch, hearing, sight, taste, and smell – it would have “its own” 5 senses, and maybe one more: the sixth sense.
Your subconscious mind is the actual player, which is controlling you from behind the scenes. It is the part of our mind that makes decisions without our having to actively think about them.
Artificial Intelligence(AI), given a proper timeline of evolution and the right algorithms, might simulate a strong enough version of the subconscious mind.
Conscious, Subconscious, and Unconscious Mind
The conscious mind involves all of the things that you are currently aware of and thinking about. This aspect of the mind is somewhat akin to short-term memory and is limited in terms of capacity. For example, our awareness of ourselves and the world around us is part of our consciousness.
Unlike the conscious mind, the subconscious is that part of the consciousness that is not currently in focal awareness. An example of subconscious activity is the part of the mind that creates dreams. Like dreams, subconscious activity occurs in the mind without conscious perception, or with only slight perception, on the part of the individual.
The unconscious mind involves the processes in the mind that occur automatically and are not available to introspection, and include thought processes, memory, affect, and motivation.
The unconscious mind dwells outside of our conscious awareness. It is a reservoir of feelings, thoughts, urges, and memories. Cognitive psychologists say that the unconscious mind contains data that are unacceptable or unpleasant, such as feelings of pain, anxiety, or conflict.
Sigmund Freud (May 6, 1856 – September 23, 1939), an Austrian neurologist and the founder of psychoanalysis, believed that the unconscious mind stores all the thoughts, memories, and feelings that are disturbing or traumatic. Freud popularized the term ‘unconscious’ when he developed psychoanalysis. Psychoanalysis is a form of therapy developed by Freud to treat emotional problems.
Freud emphasized the importance of the unconscious mind, and a primary assumption of Freudian theory is that the unconscious mind governs behavior to a greater degree than people suspect. Indeed, the goal of psychoanalysis is to make the unconscious conscious.
Although Freud put more emphasis on describing the unconscious mind, modern notions of the subconscious also trace back to him and his now-discredited theory of psychoanalysis. In the popular reading of Freud, some 90% of the human mind is subconscious, full of involuntary thoughts and behaviors.
Mind Research from the 20th Century to the Present [in Brief]
Carl Jung, a famous early 20th-century psychiatrist and psychotherapist, created the field of analytical psychology. He theorized that when we dream, we tap into a “collective unconscious”, and developed archetypes for the symbols that commonly occur in dreams.
In 1953, a young Ph.D. student Eugene Aserinsky in the U.S. discovered rapid eye movement (REM) sleep. During REM sleep the brain is extremely active and produces dreams.
In the 1990s, neuroscientist Melvyn Goodale began to study people with a condition called visual form agnosia. Such individuals cannot consciously see the shape or orientation of objects, yet act as though they can.
Scientists have also started to test ways of detecting consciousness without the need to give people verbal instructions. In a series of studies that began in 2013, neuroscientist Marcello Massimini at the University of Milan and his colleagues have used transcranial magnetic stimulation (TMS) to create electrical ‘echoes’ in the brain that can be recorded using EEG.
In a 2017 sleep study, researchers roused people throughout the night while monitoring them with EEG. Around 30% of the time, participants who were jolted from sleep reported not experiencing anything just before they woke up. The study showed that those people without conscious experiences during sleep had lots of low-frequency activity in the posterior-cortical region of their brains before waking.
In a 2019 study that collected fMRI data from 159 people, researchers found that, compared with people in minimally conscious states and those under anesthesia, the brains of healthy individuals had more complex patterns of coordinated signaling that also changed constantly.
Researchers and scientists have long focused on unveiling hidden functions of the mind. On that same foundation, I am presenting the concept of simulating the subconscious mind with the help of Artificial Intelligence.
Simulation of the subconscious mind
Simulation of the subconscious mind would have all the same algorithmic strengths and weaknesses as your own subconscious mind. And, a very strong AI system will be able to see things that you aren’t able to see about yourself.
As I have aforementioned, the unconscious/subconscious minds are unimaginably complex, but what we are saying here is that there is a possibility of building an AI algorithm that could simulate the process of a subconscious mind.
But, before we jump to the point, we need to consider a few more important points about the complexity of our brain.
DALL·E 2 is currently hot, shaking the tech world with its ability to turn text prompts into images. This is a glimpse of the future we are entering. While DALL·E 2 takes your text and converts it into art/design, a hypothetical DALL·E 10 might be able to actually design your subconscious brain. By the time we reach a DALL·E 20 or something equivalent, AI may be able to simulate the subconscious brain.
It’s really difficult to simulate our subconscious mind
Our subconscious mind is our most powerful tool. It is the part of our mind that makes decisions without our thinking about it at all.
Our subconscious mind sees, hears, and feels things we are not able to see, hear or feel due to a limited amount of attention span. It decides how we feel and what we do without us consciously thinking about it.
We know that decoding human behavior is an extremely difficult problem in the field of AI research. As any data scientist can tell you, the process of understanding behavior and predicting future actions is a very challenging one, requiring high precision in order to produce accurate results.
The nature of our subconscious mind has been investigated by scientists for hundreds of years, with the goal of helping us understand and control our own un/subconscious behaviors. But it seems our subconscious mind is not so easily understood: the pioneers of psychoanalysis and mind research – Freud, Jung, Aserinsky, Goodale, and Massimini – could not get anywhere close to the totality of this most complex part of human existence.
Moreover, consciousness in humans is often misunderstood as something separate from the brain processes that produce our thoughts, feelings, and behaviors.
But consciousness is part of the brain itself: the brain’s physical structure and neural activity shape consciousness, yet they do not amount to a kind of human consciousness that exists independently of the physical brain.
The Process of Human Learning and Memory
The process of human learning is divided into two main categories: consolidation and storage. Consolidation refers to the stabilizing of newly-formed connections or memories while they are being stored in the brain. Storage refers to how new information is translated from short-term to long-term memory.
The brain stores memories through a delicate balance of organization and flexibility. It can store new memories efficiently because the connections between neurons are strengthened with use (an idea known as synaptic plasticity). But at the same time, it can also easily reorganize its connections when necessary (through processes such as synaptic pruning).
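The “strengthened with use” idea has a classic toy formalization, Hebbian learning (“cells that fire together wire together”). The sketch below is a drastically simplified illustration with invented learning-rate and decay numbers – not a model of real synapses:

```python
def hebbian_step(weight, pre_active, post_active,
                 learning_rate=0.1, decay=0.01):
    """Strengthen the synapse when both neurons fire; let it decay otherwise."""
    if pre_active and post_active:
        weight += learning_rate   # co-activation strengthens the connection
    return weight * (1 - decay)   # unused strength slowly fades

w = 0.5
for _ in range(10):               # repeated co-activation ("use")
    w = hebbian_step(w, pre_active=True, post_active=True)
print("after use:", round(w, 3))  # stronger than the initial 0.5

for _ in range(200):              # long disuse
    w = hebbian_step(w, pre_active=False, post_active=False)
print("after disuse:", round(w, 3))  # the connection has faded again
```

The strengthening term stands in for plasticity, and the decay term crudely stands in for pruning away unused connections.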
As a neural network, our brain must obey the same rules for storing and consolidating information in both our short-term and long-term memory.
AI Simulating the Mystery of Subconscious Mind?
Yes. Artificial Intelligence of the next level will probably be able to decode the mystery of the network of the subconscious mind.
A neural network is a network of neurons that represents a simplified model of the brain. It is made up of various layers: input, hidden, and output layers. NLP systems use such neural networks to process language inputs in order to produce an accurate response.
One way in which NLP systems can be used for this purpose is by examining the relationships between words. A word’s meaning (semantics) provides the most useful features for understanding its relation with other words (syntax).
For instance, if you wanted to build an algorithm that could detect metaphors, a neural network could use semantic and syntactic features to classify whether a given word or phrase is metaphorical or literal.
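As a purely hypothetical illustration of that feature-based idea, one could imagine a crude rule that flags an abstract subject paired with a physical verb. The word lists and the rule below are invented for this example and bear no relation to any real metaphor-detection system, which would use learned features rather than hand-made lists:

```python
# Hypothetical toy "semantic features": tiny hand-made lexicons.
ABSTRACT_NOUNS = {"time", "love", "idea", "hope"}       # assumed for the example
CONCRETE_VERBS = {"flies", "burns", "melts", "crawls"}  # assumed for the example

def looks_metaphorical(subject, verb):
    """Flag an abstract subject paired with a concrete, physical verb."""
    return subject in ABSTRACT_NOUNS and verb in CONCRETE_VERBS

print(looks_metaphorical("time", "flies"))  # abstract + physical pairing
print(looks_metaphorical("bird", "flies"))  # literal pairing
```

A real classifier would replace these hand-made features with ones a neural network learns from data – but the underlying idea, combining semantic and syntactic signals, is the same.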
Taking it to the next level, AI can train a neural network to mimic the human mind. This works well to help us understand what self-awareness is, and why we are so good at making decisions.
Here is a hypothetical question: if AI could build a simulated subconscious mind, what would it do? We’re not talking about a simple simulation here; we’re talking about something artificial that could work in every part of our everyday lives.
A sentient Artificial Intelligence (AI) – or Artificial Artificial Intelligence (AAI) – could potentially replicate the human brain process by which behaviors are learned from experience and new behaviors are created, known as generalization and abstraction (ideas whose definitions will be discussed more in the future).
We can use neural networks to simulate the information processing and decision-making processes that are part of us. We may even be able to apply this model to understand the causes and explanations for consciousness.
Brain, Subconscious brain, and simulation
As such, the brain itself, being a neural network (at least for this article), has the ability to learn and adapt to stimuli. If a computer program is able to mimic the essence of our own neural network, in which every neuron (or artificial neuron) connects with other neurons through synapses that allow for varying strengths of signals to be sent between them, then it would be reasonable to assume that such an algorithm could produce similar learning results as our own minds.
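A minimal version of such an artificial neuron fits in a few lines: weighted “synapses” plus a threshold, with the weights adjusted by the classic perceptron rule. Here it learns the logical AND function – a toy stand-in for the kind of learning described above, not a brain model:

```python
def step(x):
    """Threshold activation: fire (1) if the weighted sum is non-negative."""
    return 1 if x >= 0 else 0

def train_and_neuron(epochs=20, lr=0.1):
    """Train a single neuron on logical AND using the perceptron rule."""
    weights, bias = [0.0, 0.0], 0.0
    data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
    for _ in range(epochs):
        for inputs, target in data:
            out = step(sum(w * x for w, x in zip(weights, inputs)) + bias)
            err = target - out
            # adjust each "synapse" in proportion to its input and the error
            weights = [w + lr * err * x for w, x in zip(weights, inputs)]
            bias += lr * err
    return weights, bias

weights, bias = train_and_neuron()
for inputs, _ in [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]:
    out = step(sum(w * x for w, x in zip(weights, inputs)) + bias)
    print(inputs, "->", out)
```

Stacking many such neurons into layers, and training them with more sophisticated rules, is what turns this toy into the deep networks discussed later in this section.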
Now, this sounds really simple on paper, but describing a simulated brain does not necessarily mean we can build one, at least not directly. However, if a neural network can learn and adapt to stimuli, then it should be possible to engineer one that does the same in response to the stimuli we care about.
The brain is beautifully structured in an intricate way that allows it to be both complex and simple at the same time. In order to simulate an entire brain, we would need to simulate every part of it. It must be simple and complex at the same time too. The simplest way to get around this problem would be to use more than one artificial brain.
It might actually be possible to simulate the subconscious brain if we start with an artificial neural network and add this additional layer of abstraction. If the subconscious and conscious minds are able to learn and adapt in response to stimuli, then it stands to reason that an artificial network could be engineered to do the same.
In order to simulate the subconscious brain, which contains many dimensions, levels, and layers within itself that are not visible, we would need an algorithm capable of perceiving extra-perceivable senses. Why? Because the subconscious mind resides in a different plane of existence, from the point of view of our conscious minds and our 5 senses.
The Significance?
So, if a neural network can help us better understand ourselves, what is the significance of this?
The human mind, by its unique composition, is a complex system made up of billions of interconnected neurons. Not only is the human brain not perfectible; it seems impossible to replicate the various ways in which it learns and changes as a result of experience and stimuli.
Each one of us is unique, and that uniqueness is essential to understanding ourselves and the world around us.
A neural network could potentially be an additional and useful tool for assessing our minds and behavior; however, it will never be able to replace the human mind. Our individuality, memories, experiences, and feelings are what make us what we are. But will we have to lose all of this once AI starts decoding our subconscious? I think we should discuss that question only once we are sure AI has become that capable.
Now, let’s see what AI needs to simulate our subconscious mind:
First of all, AI needs to learn, and to learn it needs information. So it uses the same method it will later use to process data: a machine learning algorithm (MLA) selects which information is more relevant than the rest and, given a priori data, builds its own model (such as a neural network). After this selection and modeling of “relevant” information, the AI uses reinforcement learning to determine which behavior should be selected next (for example: if you chose a route and ended up in an accident, you would probably avoid taking that same route again).
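The route example above can be sketched as a toy value-update rule – a crude flavor of reinforcement learning. The rewards, learning rate, and route names are invented purely for illustration:

```python
def update(value, reward, lr=0.5):
    """Move a route's estimated value toward the reward just observed."""
    return value + lr * (reward - value)

values = {"route_A": 0.0, "route_B": 0.0}

# Day 1: the agent tries route A and has an accident (large negative reward).
values["route_A"] = update(values["route_A"], reward=-10.0)

# Day 2: it tries route B, which goes fine (small positive reward).
values["route_B"] = update(values["route_B"], reward=1.0)

# Greedy choice from now on: pick the route with the higher estimated value.
best = max(values, key=values.get)
print(best)  # the accident route is now avoided
```

Real reinforcement-learning agents add exploration, discounting, and state, but the core loop – act, observe a reward, update an estimate, prefer what worked – is exactly this.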
To perform future, desired actions, the AI needs to simulate the brain’s connections and neurons. It uses a partially connected neural network with multiple layers of neurons. Since a partially connected network of this kind is limited in its capacity to learn new information, it needs to be able to interpolate between two or more networks.
To simulate the subconscious brain, the AI uses a neural network (or a series of neural networks) that includes several layers of neurons. It is called a deep learning neural network.
In the final step, the AI will generate an algorithm that simulates how we learn from experience and how we make decisions based on past events.
What will the simulation look like?
Well, it is a long, hard process, but we can simulate the brain as an artificial neural network. The resulting algorithm could then become a simulated version of our subconscious mental algorithm.
It will look like a simplified model of our subconscious working to produce the desired results.
We’ve had a glimpse into the future of AI: It will be able to simulate what a human mind is and act as if it were conscious. And at some point in the future, artificial beings could even become conscious – our greatest fear of what AI can bring to life is a real AI uprising.
Real AI?
A fast-growing topic in AI is the difference between AI in the abstract and real, physical AI. Real AI is made of several parts, including hardware (which affects the number of operations an algorithm can perform) and software (which contains all the rules that govern its behavior).
Some people believe that human thoughts could be replicated using AI. Others believe there will always be a difference between natural human intelligence and machine intelligence. Finally, there are those who reject the premise entirely.
Why? If you read through our article thoroughly, you’ll see how machines are slowly getting closer to achieving human-level intelligence by mimicking the human brain.
AI has already reached many of its goals, such as learning to model aspects of how humans think and how we solve problems.
This is by no means a threat to our existence – it’s just another technological innovation that will help us understand reality and improve our lives. But it’s also important to be realistic about these new technologies; there is still much work to do before artificial intelligence can perform all of the tasks that a human brain can.
Unlocking the Sixth Sense
Simulating the subconscious brain would mean unlocking the sixth sense.
We may never be able to recreate a full human brain, but that doesn’t mean that we cannot build a synthetic conscious mind.
A conscious mind can be simulated by creating an artificial neural network capable of mimicking human information processing and decision-making processes. It could also include memory and past experiences stored in a database.
As you can see, there is a lot happening in the field of AI development, and much more – including an AI carefully decoding the subconscious mind – is likely to happen in the future. AI is slowly but surely gaining in intelligence, and perhaps one day consciousness.
It seems the future of artificial intelligence will be not only interesting but also adventurous in many ways. Today we are satisfied with new things to feel; in the future, it will be about gaining new senses to perceive with.