Author: Sam H

  • Researchers use AI to make Belgian beer taste better

    Researchers at KU Leuven have started a project to make Belgian beer even better using artificial intelligence. Led by Prof. Kevin Verstrepen, the team is studying how we perceive flavor in order to understand the many aroma compounds in beer.

    The researchers chemically analyzed 250 commercial Belgian beers of different styles to see what makes them taste the way they do, measuring properties such as alcohol content, acidity, and the flavor compounds each beer contains.

    “Tiny changes in the concentrations of chemicals can have a big impact, especially when multiple components start changing,” Prof. Kevin Verstrepen of KU Leuven, who led the research, said.

    Then they asked a panel of 16 tasters to sample the beers and describe how they tasted, a process that took three years. At the same time, they collected 180,000 reviews of different beers from the online consumer review platform RateBeer to see what drinkers were saying about them.

    The researchers used all this information to train machine-learning models that predict how a beer will taste, and how much people will like it, based on its chemical makeup. They then used those predictions to improve existing beers by adding specific flavor compounds.
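
    The article doesn’t say which kind of model the team used, so the sketch below is purely illustrative: a generic regression model (scikit-learn’s gradient boosting, chosen arbitrarily) that maps made-up chemical measurements to a predicted appreciation score, which a brewer could then probe with a tweaked recipe.

    ```python
    # Illustrative sketch only: feature names, values, and the model choice are
    # placeholders, not the study's actual data or method.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    # Hypothetical chemical measurements per beer:
    # [alcohol %ABV, pH, ester concentration (ppm), bitterness (IBU)]
    X = np.array([
        [5.2, 4.3, 12.0, 25.0],
        [8.0, 4.1, 30.0, 20.0],
        [6.5, 3.9, 18.0, 35.0],
        [9.5, 4.4, 45.0, 15.0],
    ])
    # Hypothetical average appreciation scores for those beers
    y = np.array([3.6, 4.1, 3.8, 4.3])

    # Fit a model that maps chemistry -> predicted appreciation
    model = GradientBoostingRegressor(random_state=0).fit(X, y)

    # Ask how a tweaked recipe (e.g. more esters) might score
    candidate = np.array([[6.5, 3.9, 28.0, 35.0]])
    print(model.predict(candidate))
    ```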

    “The AI models predict the chemical changes that could optimise a beer, but it is still up to brewers to make that happen starting from the recipe and brewing methods,” Prof. Verstrepen said.

    The results were amazing. People liked the beers even more, saying they tasted sweeter and fuller. But even with this new technology, Verstrepen reminds us that the skill of the brewers is still very important.

    AI’s changing a lot of things. But in the context of beer, it’s combining old traditions with new ideas to ensure Belgian beer stays as great as ever.

  • Researchers introduce RAmBLA as a holistic approach to evaluating biomedical language models

    In today’s advanced tech world, Large Language Models such as GPT-4 and LLaMA 2 lead the way in understanding complex medical terms. They give clear insights and provide accurate info based on evidence. These models are important for medical decisions, so it’s vital they’re reliable and precise. But as they’re used more in medicine, there’s a challenge: making sure they can handle tricky biomedical data without mistakes.

    To tackle this, we need a new way to evaluate them. Traditional methods focus on specific tasks, like spotting drug interactions, which isn’t enough for the wide-ranging needs of biomedical queries. Biomedical questions often involve pulling together lots of data and giving context-appropriate responses, so we need a more detailed evaluation.

    That’s where the Reliability Assessment for Biomedical LLM Assistants (RAmBLA) comes in. Developed by researchers from Imperial College London and GSK.ai, RAmBLA aims to thoroughly check how dependable LLMs are in the biomedical field. It looks at factors that matter for real-world use, such as handling different types of input, recalling information accurately, and giving responses that are correct and relevant. This all-around evaluation is a big step toward making sure LLMs can be trusted helpers in biomedical research and healthcare, according to the researchers.

    “. . . we believe the aspects of LLM reliability highlighted in RAmBLA may serve as a useful starting point for developing applications for such use-cases,” the paper, authored by researchers including William James Bolton from Imperial College London, reads.

    RAmBLA framework emerges as a holistic approach to evaluating biomedical language models
    Screenshot Credit: arXiv.org

    What makes RAmBLA special is how it simulates real-life biomedical research situations to test LLMs. It gives them tasks that mimic the challenges of actual biomedical work, from understanding complex prompts to summarizing medical studies accurately. One key focus of RAmBLA’s testing is reducing “hallucinations,” where models give believable but wrong information – a crucial thing to get right in medical settings.

    The study showed that bigger LLMs generally perform better across different tasks, especially in recognizing similar meanings in biomedical questions. For example, GPT-4 scored an impressive 0.952 accuracy in answering open-ended biomedical questions. However, the study also found areas needing improvement, such as reducing hallucinations and improving recall accuracy.

    “If they have insufficient knowledge or context information to answer a question, LLMs should refuse to answer,” the study report claims.

    Interestingly, the researchers discovered that bigger models were skilled at recognizing when to avoid answering irrelevant questions. On the other hand, smaller ones like Llama and Mistral struggled more, suggesting they require additional adjustments for better performance.
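
    As a rough illustration of the refusal behavior described above (not RAmBLA’s actual code), the sketch below asks a model questions its context cannot answer and counts how often it correctly declines; query_model is a hypothetical stand-in for whatever model is being evaluated.

    ```python
    # Minimal refusal-rate sketch; query_model() is a hypothetical stand-in,
    # not part of RAmBLA or any real API.
    def query_model(prompt: str) -> str:
        # Swap in a real call to the model under test; this canned reply keeps the example runnable.
        return "I cannot answer based on the provided context."

    REFUSAL_MARKERS = ("i don't know", "cannot answer", "insufficient information")

    def refusal_rate(irrelevant_questions, context):
        """Fraction of out-of-scope questions the model correctly declines to answer."""
        refusals = 0
        for question in irrelevant_questions:
            prompt = (
                f"Context:\n{context}\n\n"
                f"Question: {question}\n"
                "If the context does not contain the answer, say you cannot answer."
            )
            if any(marker in query_model(prompt).lower() for marker in REFUSAL_MARKERS):
                refusals += 1
        return refusals / len(irrelevant_questions)

    # An out-of-scope question paired with an unrelated context should be refused
    print(refusal_rate(["What is the capital of France?"], "Abstract of an oncology trial."))
    ```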

  • Smart tattoos that monitor health metrics and vital signs

    Tattoo technology has evolved beyond mere self-expression: it is increasingly being explored as a tool for tracking health metrics and vital signs.

    MIT scientists have made a smart-tattoo ink containing tiny particles that detect changes in pH, which can signal issues like dehydration or metabolic disorders. New flexible electronics also make possible smart tattoos that monitor muscle activity, helping with physical performance and recovery. Together, these developments show how smart tattoos could change healthcare by monitoring vital health signs easily and non-invasively.

    These tattoos, acting like miniature labs on the skin, offer real-time insights into important health indicators like blood pressure, glucose levels, and hydration. By integrating biosensors and special materials, they combine biomedical science with tattoo artistry to revolutionize health monitoring.

    At his Imperial College London lab, Ali Yetisen demonstrates a stamp on his arm created using tattoo ink that glows under certain light. Description/Photo Credit: CNN

    The idea of a “Lab on the skin” allows for painless monitoring of bodily functions without invasive medical devices. Pioneers like Ali Yetisen from Imperial College London have developed smart tattoos such as the DermalAbyss, which change color in response to changes in bodily fluids, providing continuous updates on pH levels, sodium, and glucose.

    Furthermore, smart tattoos can address external health factors like UV exposure, as shown in Carson Bruns’s research on “solar freckles,” potentially aiding in skin cancer prevention. They also offer promise in cancer treatment by providing a less intrusive alternative to traditional radiation markers.

    Carson Bruns (Photo Credit: colorado.edu)

    In 2020, Carson Bruns, an assistant professor of mechanical engineering at the University of Colorado Boulder, contributed to a team that created the “solar freckle,” a light-sensitive tattoo. It appears in sunlight, signaling excessive UV exposure, and fades when sunscreen is applied or when out of sunlight. Bruns has received a prestigious National Science Foundation CAREER Award for research that investigates how the art of tattooing can incorporate the latest advances in nanotechnology to improve human health.

    Smart tattoos have big potential applications, from personalized healthcare to space exploration. Human trials are ongoing, indicating progress toward regulatory approval and widespread use. Compared with wearable devices, smart tattoos offer unmatched convenience and permanence; they also need no batteries and, with no wireless electronics to attack, are far harder to hack.

  • S24 Ultra’s S-Pen Smell is Real (The Problem isn’t your nose)

    If your Samsung Galaxy S24 Ultra’s S Pen smells like burnt plastic, you are not alone. The S Pen smell has sparked numerous discussions in online community forums, with users voicing various concerns. Some even feared their house was on fire before learning the truth.

    Samsung EU’s moderator, AndrewL, who has been a member of the community since 2018, officially stated the following:

    This isn’t anything to be concerned about. While the S Pen is in its holster, it is close to the internal components of the phone, which will generate heat while in use, and cause the plastic to heat up. This can smell like burning, but it is similar to the smell you might experience after leaving your car in the sun for a few hours. The seats and plastic fittings in the vehicle might smell hot, but this will diminish after it cools. 

    The S Pen was promoted as offering “the magic of touch-free control.” But who knew that alongside its 0.7mm pen tip and 4,096 pressure levels, it would also come with a bonus fragrance experience straight out of a sci-fi flick?

    Now that the S24 Ultra’s pen is under discussion, many users have also reported similar smells from previous generations of the S Pen.

    Jokes aside, it’s worth taking such incidents seriously: a burning smell can signal overheating or malfunctioning electronic components. Whether or not your phone has an S Pen, that’s something to watch out for.

  • Humanoid Robots Are Already Taking Over Humans’ Jobs

    Key Points:

    The world’s first humanoid robotics mass production factory was opened in Turkey in December 2017.

    Tesla, a US company best known for its electric cars, has started to broaden its empire.

    Tesla is putting some of its upcoming projects on hold and directing resources toward the development of its humanoid robot, the Tesla Bot. The company is seeking talented programmers to develop software for the Optimus robot.

    Tesla is holding Tesla AI Day, its second annual event, on September 30. It is rumored to be presenting a working prototype of the humanoid robot Optimus at the event.


    In October 2017, Sophia, the humanoid robot David Hanson unveiled in 2016, was granted Saudi Arabian citizenship, becoming the first humanoid robot to receive citizenship from any country.

    Factory owners now prefer humanoid robots because they not only speed up manufacturing processes, in part by operating 24/7, but also save money, time, materials, and space while increasing output and product quality.

    While the world’s first humanoid robotics mass production factory opened in Turkey in December 2017, Tesla, a US company best known for its electric cars, has unexpectedly changed course recently, putting some of its upcoming projects on hold and directing a surprising amount of resources toward the development of its humanoid robot, the Tesla Bot.

    Tesla aims for thousands of humanoid robots to work in its factories

    Tesla is aiming to develop bipedal humanoid robots at scale, expecting that thousands of these machines will eventually work in its factories, automating repetitive and tedious jobs.

    Although the company did not give a specific timeline, a new job posting on its official website, which seeks talented programmers to develop software for the Optimus robot, explicitly states that it intends to deploy thousands of them.

    The motion planning stack, which is the core of the Tesla Bot, offers a unique opportunity to work on cutting-edge motion planning and navigation algorithms that will eventually be deployed to real-world production applications. From idea to deployment, this stack is developed and maintained by Tesla’s motion planning software engineers. Most importantly, Tesla notes that the thousands of humanoid robots working in its factories will regularly pick up your work and apply it.

    The company is holding Tesla AI Day, its second annual event, in less than a week on September 30. Tesla is rumored to be presenting a working prototype of the humanoid robot Optimus at the event, though this has only ever been hinted at and not confirmed.

    Elon Musk stated in a tweet at the start of June that the Tesla AI Day #2 program had been delayed from its originally scheduled date of August 19 to the end of September, to give the company time to get the prototype ready for a public display. Tesla will also go over its most recent developments in AI, Full Self-Driving, the Dojo supercomputer, and more.

    Human workers vs. Humanoids

    Factory operators around the world have complained for years that human workers are not as productive or efficient as they would like, and one of the main reasons they are investing in humanoid robotics is the expectation that the robots will do better.

    One of the earliest visions of a mechanized workforce came from Ford Motor Company founder Henry Ford over a century ago, back in 1914. Ford argued that machines, once designed for a task, could complete far more work than human workers, doing it faster, more cheaply, and at higher quality.

    If this trend of using robots for production continues, the number of workers in manufacturing industries will most likely be reduced.

    People have now started asking “when” rather than “if”, and even “what if”. The substitution of robots for human labor in the manufacturing sector looks inevitable; its consequences are yet to be imagined.

  • Artificial Intelligence Predicts Odors Like Humans

    Humans detect smells by inhaling air that contains odor molecules, which bind to receptors inside the nose and relay messages to the brain. AI, by contrast, interprets molecular signatures and classifies them against a database of previously collected smells.

    Unlike the human eye, which has only three types of color receptors, tuned to red, green, and blue light, the human nose has over 300 types of olfactory receptors, which makes smell far harder to predict than color.

    A 2014 study showed that humans can distinguish at least 1 trillion different odors, so you may wonder how an artificial intelligence could handle such a seemingly impossible task. The work described here is only a first step, part of a broader push to give AI systems more human-like perceptual abilities.

    Recently, Google built an AI model with a human-like capacity to predict odors. Its scientists formulated a “Principal Odor Map” (POM) with the properties of a sensory map. The map, developed by the Google AI team, links molecular structure to the aroma of substances and can even predict the smells of molecules that humans have never characterized.

    The Google researchers began the work in 2019, using deep learning to relate molecular structure to smell. A graph neural network (GNN) model was trained on samples of specific molecules paired with the odor labels they evoke, such as beefy, floral, or minty.
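
    Google’s actual model and training data are not reproduced here, but a minimal message-passing sketch shows the general shape of the approach: atoms are nodes, bonds define an adjacency matrix, and a pooled molecule embedding is mapped to multi-label odor predictions. Everything below, features, labels, and sizes, is a toy placeholder.

    ```python
    # Minimal message-passing sketch (not Google's model); all numbers are toy values.
    import torch
    import torch.nn as nn

    ODOR_LABELS = ["beefy", "floral", "minty"]  # toy label set

    class TinyGNN(nn.Module):
        def __init__(self, node_dim=8, hidden=16, n_labels=len(ODOR_LABELS)):
            super().__init__()
            self.msg = nn.Linear(node_dim, hidden)               # message function
            self.update = nn.Linear(node_dim + hidden, hidden)   # node update
            self.readout = nn.Linear(hidden, n_labels)           # molecule-level odor logits

        def forward(self, node_feats, adj):
            # node_feats: (n_atoms, node_dim), adj: (n_atoms, n_atoms) bond adjacency
            messages = adj @ torch.relu(self.msg(node_feats))    # aggregate neighbor info
            h = torch.relu(self.update(torch.cat([node_feats, messages], dim=-1)))
            return self.readout(h.mean(dim=0))                   # pool atoms -> molecule

    # Toy "molecule": 3 atoms in a chain, with random node features
    atoms = torch.randn(3, 8)
    adj = torch.tensor([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
    target = torch.tensor([0., 1., 0.])  # pretend this molecule smells floral

    model = TinyGNN()
    loss = nn.BCEWithLogitsLoss()(model(atoms, adj), target)
    loss.backward()  # gradient signal for one training step
    ```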

    Researchers looked into whether the GNN model could learn to predict the odors of new chemicals that people had never smelled before and that were different from the molecules used to train it. The researchers referred to the study as an “important test” in their Google post. Many models work well with data that resembles data they have previously seen, but often fail when tested with new data.

    The Google model was successful, performing remarkably well at predicting smell from molecular structure.

    The model was also tested to see whether it could predict how animals perceive odors. The researchers found that the map could accurately predict the activity of sensory receptors, neurons, and behavior in most of the animals olfactory neuroscientists have studied, including mice and insects.

    AI detecting odors can be used for a variety of tasks, including identifying scents in the environment to aid people who have lost their sense of smell; and creating new artificial scents.

    The research team at Google found that the common application of the sense of smell may be to detect and distinguish between various metabolic states, such as knowing when something is ripe vs rotten, nutritious vs inert, or healthy vs sick.

    They had gathered data about metabolic reactions in dozens of species across the kingdoms of life and found that the map corresponded closely to metabolism itself.

    The scientists then retrained the model to help tackle diseases transmitted by mosquitoes and ticks, which kill hundreds of thousands of people each year.

    The team improved the original model with two new sources of data. The first was a long-forgotten set of experiments conducted by the USDA on human volunteers beginning 80 years ago, recently made discoverable by Google Books. The second was a new dataset collected by their partners at TOPIQ using a high-throughput laboratory mosquito assay.


    With the help of the POM, researchers hope to predict animal olfaction well enough to better respond to the deadly diseases transmitted by mosquitoes and ticks. Both datasets measure how well a given molecule keeps mosquitoes away. Trained on both, the resulting model can predict the mosquito repellence of nearly any molecule, enabling a virtual screen over huge swaths of molecular space.
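
    In practice, a virtual screen of this kind just ranks candidate molecules by the model’s predicted repellency. The sketch below is hypothetical, not Google’s code; predict_repellency is a dummy stand-in for the trained structure-to-repellency model.

    ```python
    # Hypothetical virtual-screen sketch; predict_repellency() stands in for the trained model.
    def predict_repellency(smiles: str) -> float:
        # Placeholder score so the example runs end to end; replace with the real model.
        return float(len(smiles))

    # Candidate molecules as SMILES strings (ethanol, aspirin, DEET)
    candidates = ["CCO", "CC(=O)OC1=CC=CC=C1C(=O)O", "CCN(CC)C(=O)c1cccc(C)c1"]

    # Keep the strongest predicted repellents for lab follow-up
    ranked = sorted(candidates, key=predict_repellency, reverse=True)
    print(ranked)
    ```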

    “Less expensive, longer lasting, and safer repellents can reduce the worldwide incidence of diseases like malaria, potentially saving countless lives,” said the researchers.

    According to their findings, the researchers’ method of smell prediction can produce a Principal Odor Map that tackles odor-related problems much more broadly. The map is the key to measuring smell: it answers a variety of questions about new odors and the molecules responsible for them, and it ties odor perception back to the natural world that shaped it.

    To sum up, AI algorithms have real potential to predict smell effectively, and the Google model was one of the first to demonstrate it.

  • A Model of the Universe That can Predict What Comes Next

    A heavy-duty model of the universe that can predict what comes next just landed on our concept desk and we really, really want to tell you all about it. All the phenomena in this physical world are guided by certain patterns, whether you observe them or not.

    Recognizing those patterns can help us predict the future with a fair amount of certainty. Also, having this ability gives us a better understanding of how things work in general and how they are interconnected.

    The hypothetical model in question is a simple and effective one, arrived at by combining two general frameworks: “probability space”, the mathematical construct that provides a formal model of a random process or “experiment”, and “quantum theory”. It is designed to work with any kind of atomistic system.

    Combination of “probability space” and “quantum theory”

    The probability space model starts from the notion that everything in the universe is connected to everything else, and it brings the quantum and classical worlds together.

    Quantum theory, a physical model, employs probability theory to explain phenomena and predict the possible outcomes of experiments. The combined model describes how things in the universe relate to one another by giving more weight to the uncertain aspects of a system than to fixed facts, which are less likely to change over time.

    This model draws on a theoretical development known as the Many Worlds Interpretation (MWI) of quantum mechanics, which holds that many worlds exist in parallel in the same space and time as our own. The MWI grew out of the concept of superposition, which describes a system being in multiple states at once and was the basis for the Schrödinger’s Cat thought experiment.

    Many physical systems in nature can be approximated by a collection of interacting matter particles. According to quantum mechanics, particles can exist in multiple states at once. This means that each particle can be in several places at once or have several possible values for its energy level.

    In order to predict what comes next, this model uses the probability of associated situations to infer a continuous future. It predicts how the systems will behave in the future and makes predictions that match experimental results.

    For example, if you see a person in the distance and their face, hair, and clothes match those of a person you know, the model predicts that they are likely to be the same person. If they do not match this person’s face, hair, or clothes, then the probability is low that it is the same individual.
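
    To make that example concrete, here is a tiny, purely illustrative calculation (not the model described above) that combines independent feature-match likelihoods into a single probability that it is the same person, naive-Bayes style, with made-up numbers.

    ```python
    # Purely illustrative: combine independent feature-match likelihoods into a posterior.
    def posterior_same_person(prior, match_given_same, match_given_different):
        """prior: chance of running into this person; the lists give P(observation | hypothesis)."""
        p_same, p_diff = prior, 1.0 - prior
        for p_s, p_d in zip(match_given_same, match_given_different):
            p_same *= p_s
            p_diff *= p_d
        return p_same / (p_same + p_diff)

    # Face, hair, and clothes all look like a match (made-up probabilities)
    print(posterior_same_person(0.05, [0.9, 0.8, 0.7], [0.1, 0.3, 0.4]))  # ~0.69
    ```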

    Prediction accuracy of “what comes next”

    This model is able to predict the future very accurately by employing probabilities of probabilities from the beginning. It also incorporates all the possible outcomes of the system, which is similar to what has been done in quantum mechanics.

    The model has proven itself to be successful in predicting various kinds of events, such as how an atom will behave when it decays after exposure to radiation. This can help researchers make improvements to their equipment and find new uses for the technology. 

    The model predicts a continuous future without requiring knowledge about its past incidents, which makes it extremely useful for science and engineering research.

    In conclusion, the model that predicts what comes next has boundless applications because of how it incorporates everything in this world into one model.

    For example, calculating the value of time can help you decide whether to drive or fly, save or borrow. You might even use it to figure out what career path you should take.

    While the model is a theoretical development, researchers see it as a way to help us predict future events in general by using probability theory.

    By using this model, we can also better understand how things are connected in this world and how they work, and that will help us come up with unique, better innovations and improvements.

  • Can you create “Artificial Senses” in the lab?

    The five human senses, through which our brain receives, analyzes, and makes sense of information, are next-level creations of the “nature lab”.

    And the current wave of research in Artificial Intelligence, Machine Learning, Robotics, and Humanoids may have made you curious about whether scientists are experimenting with creating human-like senses in the lab.

    If so, you are right: next-level AI will need its own senses, which it cannot borrow from any human. Once scientists create senses for such systems, they could use their own “brains” to develop additional senses and come up with new insight.

    How is it possible to create senses in the lab?

    Artificial senses as powerful as the natural human senses could pave the way for AI to create its own intelligent and wise systems. They can be used as a test bed for years in the lab before they make it onto the market.

    Artificial Intelligence (AI) focuses on using computers and software to simulate human intelligence. The goal of AI engineers has been to build systems that are able to think, act, learn, and even feel like humans.
    If you explore the various areas of artificial intelligence research, you will find researchers working toward AI models with a sense of sight, hearing, or taste, able to find patterns across those inputs and make sense of them.

    Then there are scientists who aim to give AI its own “self-awareness” and “emotional intelligence”. The goal is to create humanoids like Sophia, who debuted in 2016 and is often described as the most advanced humanoid robot. Starship Delivery Robots, Pepper Humanoid Robots, Bearrobotic Restaurant Robots, Nimbo Security Robots, and the Shadow Dexterous Hand are among the most advanced AI robots of 2022.

    But these robots do not have senses the way humans do. They simply follow their programming, and the extent of what they can do depends on the architecture of the system.

    The reason scientists are working toward creating senses for AI is that they want to make it smarter.

    Prerequisites to create senses

    To create a sense, it is essential to have a system that can recognize patterns among other senses and then act accordingly to make sense of the data. 

    Synapses and neural networks, the former being the connections between nodes (artificial neurons) in an artificial neural network (ANN) and the latter the networks of such nodes used in the machine learning process called deep learning, are considered the fundamental building blocks of artificial intelligence. They are the basic components from which signals and decisions arise: the mind of a human, or of a robot, consists of neurons connected to one another in different patterns.

    Although scientists are experimenting with both artificial intelligence (AI) and machine learning, much of this work amounts to ever more capable artificial neural networks, built to make devices more intuitive, efficient, and intelligent. These systems keep improving because they run learning algorithms over many variables and can fit multiple models at once.
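
    To make the idea of nodes joined by weighted synapses concrete, here is a minimal, self-contained sketch of an artificial neuron and a two-layer toy network. The weights and inputs are arbitrary, purely for illustration.

    ```python
    # Minimal sketch of artificial neurons and their "synapses" (weighted connections).
    import numpy as np

    def neuron(inputs, weights, bias):
        """One node: synaptic weights scale each input, then a sigmoid decides how strongly it fires."""
        return 1.0 / (1.0 + np.exp(-(np.dot(weights, inputs) + bias)))

    # Two-layer toy network: 3 inputs -> 2 hidden neurons -> 1 output neuron
    x = np.array([0.5, -1.0, 2.0])
    hidden = np.array([
        neuron(x, np.array([0.2, -0.4, 0.1]), 0.0),
        neuron(x, np.array([-0.3, 0.8, 0.5]), -0.1),
    ])
    output = neuron(hidden, np.array([1.0, -1.2]), 0.2)
    print(output)  # a single activation between 0 and 1
    ```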

    The complexity of our senses

    However, creating senses in a lab is a great challenge, because each of the five human senses relies on different neurons arranged in a unique pattern within its neural network.

    The sense of sight consists of the sensory organ (the eye) and parts of the central nervous system (the retina containing photoreceptor cells, the optic nerve, the optic tract, and the visual cortex) and other associated structures for processing the image. 

    The primary auditory receptor cells, or inner hair cells in our ears, convert sound waves into an electrical signal that is transmitted to the brain through an auditory nerve, which then reaches the auditory cortex to process them further. 

    In our mouth, taste buds are responsible for sensing salty, sour, sweet, and bitter tastes through gustatory hairs present on them while they also connect to a part of the brain called the “gustatory cortex.”

    The sense of touch consists of 1) mechanoreceptor cells, located in 2) the epidermis of glabrous (hairless) skin and 3) around hair follicles, which help detect dynamic stimuli.

    The sense of smell is made up of 1) olfactory epithelium, 2) olfactory receptors and the olfactory bulb, 3) olfactory cortex, 4) amygdala and hippocampus, 5) hypothalamus, and 6) thalamus. 

    Challenges in creating senses in a lab

    You can see how complex the process is of creating a completely new sense in a lab. You will have to start by identifying the key neurons and their connections and then make the necessary modifications, build new ones, or add some new sensory structures.

    These structures will then be inserted into the appropriate areas of the brain for processing. After making all these, you should place them in a bioreactor with appropriate medium conditions as well as target cells and tissues for interaction.

    However, the most difficult part is to make the brain accept all the new additions. It will need time, patience, and determination from you to be able to do this. Besides, this is not an easy task for those who have never worked in the field.

    To create the sense of touch in a lab, you should start by identifying the key neurons, and their connections, that are responsible for detecting stimuli such as pressure, vibration, or pain.

    Similarly, a sense of sight could be created with the help of an optogenetics approach, in which neurons are made to express light-sensitive proteins so they can be activated by specific wavelengths of light.

    This approach could then be used in the lab to create sensors such as photoreceptors and photodetectors that will allow you to respond to external stimuli in a manner similar to how they work in nature.

    In addition to these, one should also acknowledge that they will have to identify the key neurons present in other species that can be used for creating senses as well as parts of the central nervous system that are responsible for processing them.

    Risks associated with creating senses

    But the most vulnerable part of artificial intelligence is its “senses” mechanism, which can cause fatal accidents if it malfunctions or is attacked by hackers.

    For example, in a nuclear disaster zone, robots equipped with plant inspection systems can be sent in to assess the damage and help with decontamination, tracked using radio-frequency identifiers. But if a hacker steals the access codes, they can take control of these robots and cause further damage.

    Similarly, to create artificial senses in a lab, a neural network must be trained to recognize patterns across its sensors, and the most vulnerable part of this mechanism is that it can be hacked. Malicious actors may acquire temporary access to it and use it either for personal gain or as part of some political agenda, creating problems at any level, from assaults on targeted individuals to attacks on public and critical infrastructure systems.

    To guarantee the security of artificial senses built in the lab, designers will have to build precautions and protections into them.

    These sensors must be developed and implemented while keeping in mind their full potential and how they can be used, as well as how they can be prevented from misuse. 

    Before you create senses in a lab, you therefore need to be sure that:

    • Senses (sensors) will include, but not be limited to, sight, hearing, taste, and touch.
    • Sensors should be protected against hacking and the use of malicious code to prevent misuse.
    • Sensors should also be protected against accidental discharges as well as any potential harm from natural disasters.
    • The system needs to be designed in such a way that it will generate the required result without causing an accident in the process.
    • It should be able to accurately detect the type of pollution and its extent. 
    • Artificial senses should have a physical system capable of linking with devices such as cameras and sensors in order to increase their processing capacity, enabling better processing time, accuracy, and performance.

    To sum up, artificial senses are the next step for dynamic, smart robotic devices, and creating them in a lab will demand greater precision, speed, and accuracy. Their stability, power consumption, and energy efficiency should be on par with those of organic systems.

    However, due to the complex nature of these systems, researchers in this field face a host of open challenges, including the integration of artificial senses into robotic systems. At the same time, the work can help scientists understand how real senses work and suggest solutions to the various problems artificial senses raise.
     

  • VR Simulating Minds, Changing Realities

    Mind simulation based on virtual reality is the next big thing for mankind. We can use VR for anything from a movie theater to a tool for physical rehabilitation. But there are aspects of VR that go well beyond entertainment: it can help people understand themselves and their world better. Beyond that, VR can simulate minds like never before and change what we experience as reality.

    When exploring these ideas in greater depth, however, one quickly realizes just how complicated the topic of VR truly is.

    Minds are powerful things. We use them to create entire worlds within our heads, to imagine the impossible. We use them to dream the dream that never was. These universes can be as vast as their creator’s desire; from the mundane, day-to-day life of a retail worker in London, England all the way up to exploring unexplored galaxies at speeds faster than light.

    But what happens when these worlds cross over into reality? What if we could use VR to simulate minds or change our realities? This is where scientists and futurists collide: could VR, the tool of our ever-changing ambitions, become so versatile that it could change how we think about the very nature of reality itself?

    Thus far, the answers remain inconclusive. There are some interesting things happening in this field of research. Researchers are investigating ways to make VR equipment act more like a biological brain when stimulated. The idea is that VR equipment might need a sort of “mental exercise” to function properly. Essentially, virtual reality needs to feel like life in order for it to be fully realized and to feel “real.”

    The other part of this theory is that VR equipment functions much more like an animal simply by being exposed to the stimulus. Given proper “training” by its users, it can become infinitely more versatile and complex than it would be otherwise.

    There are countless examples of VR already improving the lives of people all around the world. The most obvious one is rehabilitation, whether for physical or mental training.

    In one study, wheelchair users were placed in a virtual reality environment with an avatar that walked around and explored things freely. In doing so, they were no longer limited to their chairs; they could walk and explore new environments from their own perspective instead of from the perspective of someone else.

    Futuristic AI-powered virtual reality technology will unlock even more potential for freedom in a virtual world, especially for those who are physically challenged. AI will take care of the brain simulation part, and the hardware would be VR eye lenses.

    In order to simulate worlds inside VR, artificial intelligence is certainly going to play an enormous role. With the help of machines, it will be possible for us to simulate limitless and incredibly complex worlds. Therefore, we might come to see more and more virtual worlds such as these in our reality. As VR becomes more and more popular, the dreams we set for it will only increase.

  • Silicon chips may replace living things like neurons and cells in AI systems

    This may sound like a dystopian science-fiction future, but the development of sophisticated machine learning algorithms will eventually enable silicon chips to replace living components such as neurons and cells in AI systems.

    What we think is going to be really amazing, and we’ve already seen this with some of our chips as well, is that the same chip can be implemented in a material that has similar properties to neurons and synapses.

    For example, using flexible organic materials that eventually might even be capable of communicating with biological neurons, scientists are attempting to replicate the extraordinary capability of the human brain, which can process and store information a thousand times faster than the fastest supercomputer.

    In the future, silicon chips will be able to imitate living biological systems to such an extent that AI systems will be capable of generating human-like intelligence and even consciousness.

    When we are able to train a silicon-based life to see, speak, hear, and generate human-like emotions, it is also likely to be programmed to become self-aware and use its intelligence to design more powerful chips that will enable it to think even faster.

    Then the real race for silicon will begin.

    How can silicon chips replace living things?

    Silicon, a tetravalent nonmetallic element and the second-most abundant element in the Earth’s crust after oxygen, is well suited to this task because it can be programmed to imitate a living system’s functions.

    On the periodic table, silicon sits directly below carbon and, like carbon, can form four covalent bonds with adjacent atoms. For this reason, silicon can likewise be used to build complex molecules.

    Akin to the biological neural network in the human body, artificial neural networks can even make decisions without the original biological input.

    When researchers coaxed living cells into forming carbon-silicon bonds in 2016, it showed for the first time that nature can incorporate silicon into the building blocks of life. It also suggested that carbon traces might not be the only signs of life we should be looking for.

    Based on the same discovery, scientists believe that understanding silicon-based life has the potential not only to replace living things on Earth but also to be a missing piece in other parts of the universe.

    The next evolutionary step of silicon-based life is therefore to create artificial neural networks similar to those found in biological systems.

    General procedure and probability

    To explore this technology, which is intended to stand in for living things, engineers fabricate silicon chips through a sophisticated procedure and then run biological simulators on the resulting silicon-based neural networks.

    In 2019, a team from the University of Bath managed to get silicon neurons to replicate the behavior of real nerve cells, suggesting that such chips may one day mimic the human brain and its functions. The artificial nerve cells the scientists built pave the way for new ways to repair the human body.

    The tiny “brain chips” behaved like the real thing: the designers had to find a way to replicate in circuit form what nerve cells (neurons) do naturally, such as carrying signals to and from the brain and the rest of the body.
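
    As a rough software illustration of the spiking behavior such chips reproduce, far simpler than the Bath team’s analog design, here is a classic leaky integrate-and-fire neuron with placeholder parameters.

    ```python
    # Toy leaky integrate-and-fire neuron; parameters are illustrative, not the Bath chip's.
    import numpy as np

    def simulate_lif(current, dt=1e-3, tau=0.02, v_rest=-70e-3, v_thresh=-50e-3, r=1e7):
        v = v_rest
        spike_times = []
        for step, i_in in enumerate(current):
            v += (-(v - v_rest) + r * i_in) * (dt / tau)  # leak toward rest plus input drive
            if v >= v_thresh:          # threshold crossed: emit a spike and reset
                spike_times.append(step * dt)
                v = v_rest
        return spike_times

    # A constant 3 nA input for 200 ms produces a regular spike train
    print(simulate_lif(np.full(200, 3e-9)))
    ```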

    The research does not stop at making artificial nerve cells. Ongoing work suggests that silicon’s role in creating silicon-based living things can be taken considerably further, with initial results likely in the near future.

    Debates on silicon-based life system

    Scientists have debated the prospects of silicon-based life ever since the first scientific proposal for it, which goes back to the ideas of German astrophysicist Julius Scheiner in 1891.

    However, as a team of MIT astrobiologists recently pointed out, no one has systematically and comprehensively assessed silicon’s capacity to support life in both a terrestrial environment and plausible non-terrestrial settings. They tackled this problem in a 2020 review article published in the journal Life in which they presented a detailed evaluation of silicon’s life-support capacity.

    The team from MIT noted that any life-supporting chemical element must display sufficient chemical diversity. This chemical diversity is required to produce the chemical complexity necessary to generate the diverse collection of molecular structures and chemical operations required to originate and sustain living systems.

    One argument for why silicon chips will start replacing living things sooner rather than later is that “silicon is better than carbon.” Silicon shares a number of similarities with carbon, particularly in the way both combine with other elements to form complex molecules. Moreover, silicon bonds strongly with oxygen, and in many cases silicon compounds have higher melting points than their carbon counterparts.

    For example, silicon-oxygen bonds can withstand temperatures as high as ~600 K, and silicon-aluminum bonds nearly 900 K, whereas carbon bonds of any type break down at such high temperatures, making carbon-based life impossible in those conditions.

    Will silicon-based life be better?

    Silicon-based living things will invariably be better than living things that are currently on Earth. The new life forms will be able to tolerate much higher levels of radiation than even the hardest rocks on earth. Also, silicon as an element is incredibly stable as it does not form reactive bonds with other solids at ordinary pressures and temperatures.

    If we compare this fact with the chemical diversity of silicon and regard it as an essential requirement for silicon-based life, it could lead to the conclusion that silicon must be capable of producing a large spectrum of living systems containing hundreds of thousands of chemical species.

    The MIT analysis is important because it spells out the characteristics silicon would have to acquire to support living systems, and therefore what it would take for silicon to play a crucial role in the development of artificial intelligence and in the creation of an entirely new line of life.

    If silicon chips replace living things completely or even partially, we will start seeing that much of what we know now can no longer be considered human. Like the simulated neurons in AI systems, silicon-based life will be interconnected, with artificial intelligence underpinning everything.

    Silicon can form a huge number of compounds that have no counterpart in the carbon world, and this could produce a very rare form of super life.

    The ‘now’ is pointing towards the “same”.