Author: Tom Clarke

  • Does our brain run binary codes and host universes?

    Does our brain run binary codes? Contrary to conventional wisdom, some people believe that our brain does indeed use complex binary codes in its neurons, while others argue that the codes are not nearly as complex as some make them out to be.

    So, what is the truth? Do our brains use complex binary codes with neurons?

    There is no easy answer to this question. Scientists are still trying to figure out exactly how our brains work and how we think. However, there is some evidence that suggests that our brains may use complex binary codes with neurons.

    For example, some studies have shown that certain types of neurons fire in a particular pattern when we are performing certain tasks. This suggests that there may be some sort of code that the neurons are using to communicate with each other.

    Additionally, some computer models of the brain have shown that complex binary codes may be necessary for the brain to function properly. This is because the brain is a very complex system and it may be difficult for it to function without using some sort of code.

    Of course, this is all just speculation at this point. Scientists are still trying to figure out exactly how our brains work. However, the evidence that has been found so far suggests that our brains may use complex binary codes with neurons.

    It is thought that neurons communicate with each other by releasing chemicals called neurotransmitters. These neurotransmitters attach to receptors on other neurons and either excite or inhibit them.

    Inhibitory neurotransmitters tend to decrease the activity of the neuron they are attached to, while excitatory neurotransmitters tend to increase the activity of the neuron they are attached to.

    This chemical communication between neurons is known as synaptic transmission.

    So how does this all relate to binary codes?

    Well, it is thought that each neuron can be in one of two states: excited or inhibited. This is similar to the way that a computer bit can either be a 0 or a 1.

    It is believed that when a neuron is excited, it will fire an electrical impulse. This impulse will then travel down the neuron’s axon to the synapse.

    At the synapse, the electrical impulse will cause the release of neurotransmitters. These neurotransmitters will then attach to receptors on other neurons and either excite or inhibit them.

    In this way, it is thought that information is transmitted between neurons through a complex system of binary codes.

    Being non-deterministic, the brain cannot precisely reproduce instruction sequences, so in that respect it is certainly not “digital” in the way a computer is.

    The impulses transmitted throughout the brain are binary-like “either-or” events: a neuron either fires or it does not. These all-or-nothing pulses make up the basic language of the brain, and in this respect the brain’s computation resembles binary signaling. Rather than 1s and 0s, or “on” and “off”, the brain employs “spike” or “no spike” (referring to the firing of a neuron).
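    The all-or-nothing behavior described above can be sketched with a toy integrate-and-fire model: inputs accumulate on a membrane potential, and the neuron emits a binary spike only when a threshold is crossed. This is a minimal illustrative sketch, not a biophysical model; the threshold and leak values are arbitrary assumptions.

```python
def integrate_and_fire(inputs, threshold=1.0, leak=0.9):
    """Toy all-or-nothing neuron: returns a binary spike train.

    inputs: sequence of input currents (excitatory > 0, inhibitory < 0).
    The membrane potential leaks toward zero each step; when it crosses
    the threshold the neuron fires (1) and resets, otherwise it stays
    silent (0) -- "spike" or "no spike", never anything in between.
    """
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current
        if potential >= threshold:
            spikes.append(1)   # all-or-nothing spike
            potential = 0.0    # reset after firing
        else:
            spikes.append(0)
    return spikes

# Weak inputs never reach threshold; stronger ones produce spikes.
print(integrate_and_fire([0.2, 0.2, 0.2, 0.2]))  # [0, 0, 0, 0]
print(integrate_and_fire([0.6, 0.6, 0.6, 0.6]))  # [0, 1, 0, 1]
```

    Note how graded potential values exist inside the neuron, yet the output is strictly “spike” or “no spike”, which is the binary-like property described above.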

    Is our brain literally hosting universes by itself?

    There is one thing all can agree on with this article: the brain is a very complex organ.

    Now, let’s broaden the scope. The brain can also create new universes and realities through its ability to imagine and visualize. It will take a lot of research and time to fully understand how this works.

    The answer may surprise you. Our brains are capable of running complex binary codes, but they don’t necessarily do so on their own. In fact, they may be running these codes in tandem with other universes.

    Here’s how it works: Our brains are constantly taking in information from our senses. This information is then processed and stored in our memory. When we need to recall this information, our brain accesses it and uses it to generate new thoughts and ideas.

    In order to run complex binary codes, our brain relies on memory recall, a process similar to how a computer accesses and runs programs.

    However, our brain is not limited to storing information in memory. It is also capable of storing information in other universes. This is how we are able to remember things that we have never experienced before.

    For example, when we imagine a new world, we are actually storing information in another universe. This is how we are able to come up with new ideas and creativity.

    Our brain is constantly running simulations of universes and realities. These simulations are created by the firing of neurons in our brains. This theory is supported by the fact that we often have dreams and hallucinations that seem very real.

    The answer?

    So, does our brain run complex binary codes and host universes on its own? The answer is yes!

    Admittedly, some evidence supports the idea that our brain is not a computer. For example, our brain is capable of emotion and creativity, which computers typically are not. Additionally, our brain is not a static entity; it is constantly changing and growing, another trait not typically associated with computers.

    Think of your brain as a tool. It’s a powerful, complex tool that is capable of running complex binary codes, hosting multiple universes, and creating amazing new realities.

    We are only at the beginning stages of understanding how our brains work. It could take many years for neuroscientists to unravel all the mysteries of our minds. Who knows? Maybe one day we will find powerful quantum computers in our brains.

    If you take a closer look at how the brain works, you’ll find that its operations are far more complex than simply running binary code. In fact, we have only begun to scratch the surface of how the brain works.

  • Eye-lens software to give you a “super-human” vision?

    Introduction

    Imagine your eyes wearing lenses the size of your eyeballs, able to magnify and focus on anything in front of you, or even on someone miles away. Say it in a single word: “future.”

    This technology would have applications far beyond anything we have thought of so far. Gamers would be excited to get a 10x scope built into their eyes. But there is more to it than that.

    Compressed sensing is a process that allows the user to see additional information beyond the traditional human experience. A further fascinating possibility would be to design probes that can detect and record subtle details in light patterns. But how could one build this eye-lens software in the first place? And how could it give you “super-human” vision?

    We’ve all heard of some people being color blind and how the condition impedes their color vision, but how many of us have heard of tetrachromacy? This is a condition affecting your color vision, which enables you to see around 100 million shades of color. The average person with standard color vision sees approximately 1 million different hues, while somebody who is color blind may see as few as 10,000 colors.

    Simply put by artist Concetta Antico, “I see colors in other colors… other people might just see white light, but I see orange and yellow and pink and green… so white is not white; white is all varieties of white”.
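    The shade counts quoted above line up with a simple back-of-the-envelope estimate: if each cone type distinguishes roughly 100 intensity levels (an assumed, order-of-magnitude figure, not a measured property of vision), the number of distinguishable colors is about 100 raised to the number of cone types.

```python
def distinguishable_colors(cone_types, levels_per_cone=100):
    """Rough combinatorial estimate: levels ** cone_types.

    levels_per_cone=100 is an assumed figure used only to show how
    the 10,000 / 1 million / 100 million numbers can arise.
    """
    return levels_per_cone ** cone_types

print(distinguishable_colors(2))  # 10000: roughly the color-blind case
print(distinguishable_colors(3))  # 1000000: typical trichromat vision
print(distinguishable_colors(4))  # 100000000: a tetrachromat
```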

    That was about colors.

    What about the vision depth?

    Speaking of depth, the average person has a visual acuity of 20/20. A score of 20/5 means you can see things at 20 feet that most people can’t see until they are standing 5 feet away. This type of visual acuity is akin to an eagle’s vision.

    There have been reports of an Aboriginal man with 20/5 vision, although many researchers believe this level of acuity is not possible in humans.

    Bald eagles can have 20/4 or 20/5 vision, meaning they can see four or five times farther than the average person.

    Researchers speculate that the reason is their hunting style: eagles are adapted to hunting from a stationary position, so their eyes require less movement and can resolve finer detail.

    Nature has, on the other hand, gifted humans with intelligence and a tendency to hack limits. So, how about a 20/1 shot?
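    Snellen fractions like these are just distance ratios, so the acuity levels discussed here can be compared directly. A minimal sketch (the 20/1 case is the hypothetical “hack” under discussion, not a documented acuity):

```python
def acuity_advantage(test_distance, reference_distance):
    """How many times farther away a viewer can resolve detail
    compared with a 20/20 observer, given a Snellen score of
    test_distance/reference_distance (in feet)."""
    return test_distance / reference_distance

print(acuity_advantage(20, 20))  # 1.0: average human vision
print(acuity_advantage(20, 5))   # 4.0: the reported eagle-like acuity
print(acuity_advantage(20, 4))   # 5.0: bald eagle at 20/4
print(acuity_advantage(20, 1))   # 20.0: the hypothetical 20/1 shot
```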

    Some of you might think this is way off the mark. Maybe this technology is difficult to imagine; maybe it is an impossible dream, or simply science fiction. But nearly every new technology, scientists’ included, was once considered an impossible dream.

    Let’s go straight to the point now that we know what we are talking about.

    Eye-Lens Software + Hardware

    The eye-lenses would be just like a smartphone camera with a 100x zooming ability. The software, the most important part of the whole process, would control the lenses, adjust the image, and snap a photo from your eyes, on top of providing enhanced color perception and the zoom-in ability.

    Once the software is done (it’s a long process yet to be discussed), it will be connected to a processing unit that will process the image, and isolate the parts of the image that are relevant to you. The software would then render this information into a visual experience for you, just like watching TV or reading a book on your phone.

    When you look at something, light (electromagnetic waves) enters your eyes, and your brain interprets the resulting signals as what you are seeing, either forming an image or registering nothing at all.

    The Hardware Part

    As mentioned earlier, we already have smartphone cameras that possess a 100x zooming ability. So, there is no concern about the ability to zoom in.

    Speaking of enhanced color perception, you will need to trick your brain into seeing colors in depth. Just like the tetrachromacy example mentioned above, to see one precise shade of white rather than plain “white,” your brain comes into play, and in this case, so does the hardware.

    The hardware would create a 3D image for you and manipulate the light signals your brain receives so that you see different colors and shapes within that single white light. But nature won’t make your brain do this on its own, just as you cannot display a 4K video at full resolution on a 1080p monitor.

    The “tricking your brain” part may be the biggest challenge in using this technology. Right now, we don’t have a way to do this naturally. Fortunately, there are ways.

    AI-Powered Hardware

    The hardware would enhance your vision while its AI avoids wasting your time on irrelevant patterns in a scene.

    It would only show you relevant patterns, such as people, text, or food, not heat maps, images, or anything irrelevant. No fake news here, and no ads either. You will only be presented with information that is relevant to you.

    But how would you know if something is relevant? How could hardware process your images and make a visual interpretation?

    As we said before, your brain processes light patterns from the objects around you, and there is no straightforward way for the hardware to know which parts of those patterns are important for you to see. Everything it judges irrelevant would be dismissed and not shown at all. In other words, what you would see could differ sharply from what someone else sees of the very same scene.

    “Super-Human” Vision

    The software, as mentioned earlier, will play the most crucial role in this whole process. Additionally, the software would be able to control the lenses, manipulate the colors, and snap an image from your eyes for you to get a near 100x zooming ability.

    The software would only be able to delete irrelevant patterns and enhance the ones relevant to you instead of making them look like something else or drawing new objects from scratch. That would be way too hard for a mere software program. With the current pace of development, it may take a long time before we have such an AI-powered software program.

    Additional features of the software include its ability to absorb information about the things around you. For every pattern it identifies, it would create a corresponding “snapshot” of what your visual system perceives. The software would then combine these snapshots into a 3D scene through computer algorithms, identifying patterns as it goes, something we might call real-time pattern recognition (RTR).

    After this, the software would have an estimate of what the user’s eyes see in an image, much as the brain is known to do. The software would act more like a flash drive than an external hard drive for our actual brains, because it would not be connected to our bodies by anything except the eye lenses themselves.
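    The snapshot-and-filter loop described above can be sketched, very loosely, as a relevance-filtering pipeline. Everything in this sketch is hypothetical: the `Snapshot` structure, the category labels, and the relevance set are invented for illustration and do not correspond to any real system.

```python
from dataclasses import dataclass

@dataclass
class Snapshot:
    """Hypothetical record of one pattern recognized in the visual field."""
    label: str       # e.g. "person", "text", "heat_map"
    position: tuple  # (x, y) coordinates in the field of view

# Categories assumed to matter to the user, per the examples above.
RELEVANT = {"person", "text", "food"}

def render_scene(snapshots):
    """Filtering step: keep only relevant patterns, dismiss the rest."""
    return [s for s in snapshots if s.label in RELEVANT]

scene = [
    Snapshot("person", (10, 20)),
    Snapshot("heat_map", (5, 5)),  # irrelevant: dismissed, never shown
    Snapshot("text", (30, 40)),
]
print([s.label for s in render_scene(scene)])  # ['person', 'text']
```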

    Mojo Vision‘s AR contact lenses were hyped a lot in early 2020, but they never came to fruition.

    Even if not in the near future, you will eventually be able to have a digital copy of your eyes. Within a couple of decades, you may be able to see everything in 3D, and in different colors, as discussed above. It will be a whole new era of visual perception for all of us.

  • An infant’s brain activities predict future abilities

    The long-term process of brain development starts around two weeks after conception and lasts until early adulthood, about 20 years later. The first time a baby crawls or walks is a moment most of us may have underestimated; but these are the developmental milestones during which, as we now know, a baby’s mind is busy preparing the infant to learn the skills they will need in the future.

    Studies have shown that the environment can change infants’ brains during their early years. Researchers studying the hippocampus, a structure extending from the temporal part of the cerebral cortex, found that this brain region is involved in learning and memory. They also revealed that as infants get older and reach milestones such as crawling or walking, a part of the hippocampus called CA3 becomes increasingly active during these behaviors.

    As the hippocampus is a crucial part of the brain for learning, memory, and other cognitive tasks, an infant’s early brain activity predicts their future abilities. Findings like these could help us understand and properly address problems in children caused by inadequate stimulation at a young age.

    The Developing Baby Brain

    A baby’s brain is about one-fourth the size of the average adult brain at birth. Interestingly, it doubles in size during the first year. By age 3, it reaches about 80% of adult size, and by age 5 it is about 90% of adult size (almost fully grown).
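    The growth figures above are easy to translate into absolute numbers. Assuming an average adult brain volume of roughly 1,350 cm³ (an illustrative figure, not taken from the article):

```python
ADULT_VOLUME_CM3 = 1350  # assumed average adult brain volume

growth_fractions = {
    "birth": 0.25,  # about one-fourth of adult size
    "age 1": 0.50,  # doubles during the first year
    "age 3": 0.80,  # roughly 80% of adult size
    "age 5": 0.90,  # roughly 90%, almost fully grown
}

for stage, fraction in growth_fractions.items():
    print(f"{stage}: about {fraction * ADULT_VOLUME_CM3:.0f} cm^3")
```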

    While it may appear that infants are helpless creatures that only blink, eat, cry, and sleep, University of Missouri researchers found in 2012 that “infant brains come equipped with knowledge of ‘intuitive physics.’”

    It means that their primitive brains have a “built-in understanding” of the way objects in the world work.

    In 2016, another study, building on longitudinal studies performed a few decades earlier, showed that the age at which a child achieved major milestones like standing or walking was a predictor of the child’s later performance in memory.

    “Our findings are consistent with those of longitudinal studies performed a few decades ago, showing that the age a child achieved major milestones like standing or walking was a predictor of later child performance in memory,” said Akhgar Ghassabian, a postdoctoral fellow at the National Institutes of Health and lead author of the study.

    Those findings lend further evidence to the idea that infancy and toddlerhood are critical periods of brain development.

    Researchers from Johns Hopkins University published their study in the Proceedings of the National Academy of Sciences in 2021; it was the first such study to focus on pre-verbal babies.

    The researchers found that if magic tricks captivate your toddler and they can’t get enough of illusions and other fascinating antics, this could be a predictor of their future cognitive ability.

    All of the research above on infants’ brain activity agrees on one point: the activity of an infant’s brain predicts their future abilities.

    Newborns have an innate understanding of how things interact in the environment. Their future lives will reveal the regularities and patterns they have formed by their interactions with the world around them.

    Unable to speak or comprehend speech, they rely on what they see. They learn by watching, listening, and discovering.

    Parents who watch their babies closely may discover early on whether the child’s hearing or vision is impaired, or confirm that there is no problem at all. Findings like these show how important it is for a baby to have regular eye contact with and attention from its parents while awake and being cared for.

    An infant’s brain activities generally predict their future abilities. And this whole process keeps happening in a child’s brain from early childhood until well past the teenage years.

    Remember what kind of brain kids have? Spending more time with them can help their brains develop better. Those tiny brains need far more attention from parents and teachers than most parents think.

  • AI is earning as much trust as humans to flag hate speech

    The year 2022 has already witnessed rapid growth in the evolution and use of artificial intelligence (AI). Recent research has shown that people could trust AI as much as human editors to flag hate speech and harmful content.

    According to the researchers at Penn State, when users think about positive attributes of machines, like their accuracy and objectivity, they show more faith in AI. However, if users are reminded about the inability of machines to make subjective decisions, their trust is lower.

    S. Shyam Sundar, James P. Jimirro Professor of Media Effects in the Donald P. Bellisario College of Communications and co-director of the Media Effects Research Laboratory, states that the findings may help developers design better AI-powered content curation systems that can handle the large amounts of information currently being generated, while avoiding the perception that the material has been censored or inaccurately classified.

    However, the researchers warn that both human and AI editors have advantages and disadvantages. Humans tend to more accurately assess whether the content is harmful, such as when it is racist or potentially self-harming. People, however, are unable to process the large amounts of content that are now being generated and shared online.

    On the other hand, while AI editors can swiftly analyze content, people often distrust these algorithms to make accurate recommendations and fear that the information could be censored.

    Bringing people and AI together in the moderation process, the researchers say, may be one way to build a trusted moderation system. They added that transparency, that is, signaling to users that a machine is involved in moderation, is one approach to improving trust in AI. However, allowing users to offer suggestions to the AI, which the researchers refer to as “interactive transparency,” seems to boost user trust even more.


    To study transparency and interactive transparency, among other variables, the researchers recruited 676 participants to interact with a content classification system.

    The research showed, among other things, that whether an AI content moderator invokes positive attributes of machines, like their accuracy and objectivity, or negative attributes, like their inability to make subjective judgments about subtle nuances in human language, will determine whether users would then trust them.

    Giving users the ability to participate in whether or not internet material is harmful may improve user trust. The study participants who submitted their own terms to an AI-selected list of phrases used to categorize posts trusted the AI editor just as much as they trusted a human editor, according to the researchers.
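    The “interactive transparency” condition in the study, where participants submitted their own terms to an AI-selected list of phrases, can be sketched as a toy keyword flagger. The word lists below are invented placeholders, and real moderation systems are far more sophisticated than string matching.

```python
def make_classifier(ai_terms, user_terms=()):
    """Build a flagging function from AI-selected phrases plus any
    extra phrases the user contributes (the interactive step)."""
    terms = {t.lower() for t in ai_terms} | {t.lower() for t in user_terms}

    def classify(text):
        lowered = text.lower()
        return any(term in lowered for term in terms)

    return classify

# The AI supplies a list; a user then adds a phrase the AI missed.
flag = make_classifier(["badword"], user_terms=["made-up-slur"])
print(flag("this post contains made-up-slur"))  # True: user's term caught it
print(flag("a perfectly harmless post"))        # False
```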

    According to Sundar, relieving humans from the job of content review goes beyond simply providing workers with a break from a tiresome task. He said that by using human editors, these workers would be exposed to hours of violent and hateful content.

    “There’s an ethical need for automated content moderation,” said Sundar, who is also director of Penn State’s Center for Socially Responsible Artificial Intelligence. “There’s a need to protect human content moderators—who are performing a social benefit when they do this—from constant exposure to harmful content day in and day out.”

    Future work could look at how to help people not just trust AI but also understand it. Interactive transparency may be a key part of understanding AI as well, the researchers added.

    Note: this material has been edited for length and content. For further information, please visit the cited source.

  • Is the Universe Getting “Younger” with Time?

    In a challenge to the existing laws of physics, the universe is reversing, getting younger with time instead of aging further. The reverse effects are now clearly visible, even in the general behavior of the observable galaxies.

    The Reverse Effect 

    It clearly exists, contrary to the prediction of the redshift principle that galaxies are moving away from us at an increasing speed. The measured speeds are actually increasing in the direction from them toward us, which means there is an anti-redshift effect working on a galactic scale, caused by an inverse acceleration of these galaxies.
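    For reference, the standard redshift relation that this section argues against is simple to state: z = (λ_observed - λ_emitted) / λ_emitted, and for small z the recession velocity is approximately v ≈ cz. A minimal sketch of the conventional calculation:

```python
C_KM_S = 299_792.458  # speed of light in km/s

def redshift(lambda_observed, lambda_emitted):
    """Standard definition: z > 0 means the source appears to recede."""
    return (lambda_observed - lambda_emitted) / lambda_emitted

def recession_velocity(z):
    """Low-redshift approximation v = c * z (valid only for z << 1)."""
    return C_KM_S * z

# A hydrogen-alpha line emitted at 656.3 nm but observed at 662.9 nm:
z = redshift(662.9, 656.3)
print(round(z, 4))                   # 0.0101: redshifted
print(round(recession_velocity(z)))  # about 3015 km/s of apparent recession
```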

    Another point is that the universe is not expanding outwards as anticipated but contracting inwards. We can deduce this from the observed phenomenon of gravitational lensing, by which galaxies a few hundred million light-years away appear to be getting closer to us, and vice versa.

    As conceptualized in the Big Crunch theory, the material content of galaxies is actually converging towards a common point: the center of gravity, which now lies at an infinite distance in the future, and which physics has predicted to exist as dark matter but has not yet discovered.

    James Webb Images of “Young” Distant Galaxies

    The most convincing argument that the universe is not “aging” but instead getting younger and younger with time is evident in the blue, bright, fully shaped galaxies, only several million years old, that the James Webb Space Telescope recently captured at the edge of the universe.

    The Big Bang theory anticipated that galaxies at that point in the timeline would have only just entered the initial stage of their development: colliding, merging, and trying to gain galactic shapes and sizes. The opposite was observed, as the universe’s size and age are far greater than the Big Bang theory predicts.

    And, defying the laws of physics, the “hundreds-of-billions-of-years-old universe” (since the 13.7-billion-year hypothesis is incomplete) has been gradually returning to its past, growing younger with time. The process will not one day end in the universe’s death, as the general laws of physics would have it; instead, the universe will start getting older again once it approaches its youngest age.

    Higgs Boson and the Age of the Universe

    The Invisible Universe reveals stunning evidence that the universe is reversing and getting younger with time.

    How does this make sense? The discovery of the Higgs boson at CERN, with a mass of about 125.35 GeV, should have ruled out the prevailing idea that all particles have mass because they interact with an invisible field, which physicists call the “Higgs field” and which permeates space.

    Particles obtain mass when they interact with this field, while photons are not affected by it. But the Higgs boson has mass itself, so it interacts with its own field. Thus, according to this reasoning, the idea that all particles acquire mass by interacting with the Higgs field is wrong, because they would all be acquiring different Higgs masses at random.

    This means that there is another force in nature that most scientists have overlooked.

    The above works on a galactic scale and cannot be found at smaller scales (atoms or below), because our physical laws are not applicable there.

    Colliding Huge Galaxy Clusters and So-called Dark Matter

    Likewise, galaxy clusters, which contain hundreds to thousands of galaxies bound to one another by gravity, can also collide, smashing into one another over the course of millions of years. Hidden in these mind-blowing collisions are clues to how the universe is getting younger, contracting to a smaller, more energetic point. This also contradicts the hypothesis of an expanding universe.

    Another example is the assumption of supernatural elements like so-called dark matter and dark energy, which looks like an attempt to explain away the observed anomaly that the expansion rate of what is actually a contracting universe appears to be accelerating.

    Astronomers attribute the visible bending of light, as if by a lens, to a massive celestial body curving spacetime sufficiently. That massive body, inextricably linked to dark matter, is supposed to be the holding force. But given the lack of sufficient evidence for dark matter, it takes little effort to see that the “force” could equally be “bright” instead of “dark.”

    The never-ending age-cycle of the universe

    The way forward is towards a hypothesis that explains the observations better: that we are living in a universe that is reversing its motion and turning younger with time; not like an infant growing up to be an adult, but like an age-fluctuating, never-dying thing that exists in a never-ending cycle of ups and downs with time.

    The assumption of an expanding universe seems as implausible as the “ghost stories” our grandmothers told us in early childhood. Hiding the universe’s state of contraction while displaying an expanding picture of it will take us no farther than a liar’s hut.

    It is only by reversing the way we regard the cosmos, by taking into consideration the two contradictory phenomena of the expanding universe and its observable contraction inwards, that we can finally understand what’s really happening.

    The Necessity of Reviewing the Physical Laws

    With this new understanding, and with a reversed view of it all, the physical laws will have to be modified in order to make them work from this new perspective.

    For example, since the contracting universe is actually not expanding as we think, the law of redshift, which assumes that galaxies are moving away from us at an increasing speed, will have to be modified.

    But with this new perspective of a reversed contraction towards its childhood, we will see that the galaxies’ apparent acceleration is only outwards relative to our position, not with respect to the center of mass of everything in the universe, toward which they are in fact “receding.”

    To sum up, what we have learned about the origin, size, and features of the universe is the product of barely a century of study, a negligible timeframe on the cosmic scale. Although human intelligence has made significant advances, it is critical that we keep seeking the truth and correct outdated hypotheses as soon as possible. Let’s wipe our eyes and cast them around the universe to enjoy the view of it getting younger with time.

  • Do we now need regulations on Open-Source AI?

    Key Points:

    • Unregulated open-source software is going to have a significant impact on current political, economic, and social systems.
    • The European Union has drafted new rules aimed at regulating AI, which could eventually prevent developers from releasing open-source models on their own.
    • The proposed EU AI Act requires open-source developers to make sure their AI software is accurate, secure, and transparent about risk and data use.
    • Every individual, company, organization, and nation needs a solid understanding of exactly how regulation of open-source AI software would work.

    In a civilized human world, no public activity takes place unseen. Each and every activity taking place in public needs to remain within certain legal and regulatory frameworks, and artificial intelligence is no exception.

    So it is now considered necessary to bring AI under regulation, in a way that encourages the further development of AI while managing the risks associated with open-source AI software technology: publicly available datasets, prebuilt algorithms, and ready-to-use interfaces for commercial and non-commercial use under various open-source licenses.

    Why does open-source AI software need regulation?

    Open-source software is developed in a decentralized and collaborative way and relies on peer review and community production. Accessible publicly, anyone can see, modify, and distribute the code as they see fit.

    Every aspect of human behavior can run appropriately only under certain norms and regulations. For example, the use of cars must be regulated by law, whether they are used on an individual or commercial basis. Similarly, AI technology, which is shaping the human world, cannot be managed in an unsupervised way.

    This is not the first time there have been calls for open-source regulation. The software vulnerability known as Log4Shell, discovered in late 2021, focused the minds of enterprises and governments on how best to manage open-source software, and was followed by calls for government intervention.

    In May 2021, the US had already called out the need for a Software Bill of Materials through the Executive Order on Improving the Nation’s Cybersecurity. The Bill of Materials approach sets out the code incorporated whenever open source is used.
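    The Bill of Materials idea amounts to an inventory check: enumerate every open-source component a product incorporates, then compare pinned versions against known advisories. A minimal sketch, with hypothetical component names and advisory data:

```python
# Hypothetical SBOM: component name -> pinned version
sbom = {
    "log4j-core": "2.14.1",
    "example-lib": "1.0.0",
}

# Hypothetical advisory feed: component -> known-vulnerable versions
advisories = {
    "log4j-core": {"2.14.0", "2.14.1"},  # Log4Shell-era releases
}

def vulnerable_components(sbom, advisories):
    """Return the components whose pinned version appears in an advisory."""
    return [
        name for name, version in sbom.items()
        if version in advisories.get(name, set())
    ]

print(vulnerable_components(sbom, advisories))  # ['log4j-core']
```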

    It’s obvious that, like any powerful force, AI requires rules and regulations for its development and use, to prevent unnecessary harm from open-source vulnerabilities, that is, security flaws in open-source software. Weak or vulnerable open-source code allows attackers to conduct malicious attacks or to perform unintended, unauthorized actions, sometimes leading to cyberattacks such as denial of service (DoS).

    Besides the security risks, using open-source software may also have intellectual property issues, lack of warranty, operational insufficiencies, and poor developer practices.

    Perhaps considering these same risks, the European Union has now attempted to introduce new rules aimed at regulating AI, which could eventually prevent developers from releasing open-source models on their own.

    EU draft to regulate open-source AI?

    According to Brookings, the proposed EU AI Act, which has not yet been passed into law, requires open-source developers to make sure their AI software is accurate, secure, and transparent about risk and data use in technical documentation.

    It argues that if a private company deployed a public model, or used it in a product, and found itself in difficulties due to some unexpected or uncontrollable outcome from the model, it would likely try to blame, and sue, the open-source developers.

    Unfortunately, that would concentrate AI development in the hands of private companies and make the open-source community reconsider sharing its code.

    Oren Etzioni, the outgoing CEO of the Allen Institute for AI, reckons open-source developers should not be subject to the same stringent rules as software engineers at private companies.

    “Open-source developers should not be subject to the same burden as those developing commercial software. It should always be the case that free software can be provided ‘as is’ – consider the case of a single student developing an AI capability; they cannot afford to comply with EU regulations and may be forced not to distribute their software, thereby having a chilling effect on academic progress and on the reproducibility of scientific results,” he told TechCrunch.

    Most recent AI-related events

    The results for the annual MLPerf inference test, which benchmarks the performance of AI chips from different vendors across numerous tasks in various configurations, have been published this week.

    Although an increasing number of vendors take part in the MLPerf challenge, regulatory concerns appear to be holding some of them back from participating in the test.

    “We only managed to get one vendor, Calxeda, to agree to participate. The rest either declined, rejected the challenge altogether, or thought it might raise privacy concerns,” said Chris Williams, a research associate at Berkeley’s Computer Science Department.

    The MLPerf challenge tests AI chips on various tasks at scale using fully instrumented Mark 1.0 hardware and software. The chips run different models and have no knowledge of whether their results were provided by an open-source model or a proprietary one. But vendors who do not agree to participate in the test won’t be able to display their results publicly on ShopTalk forums like this one.

    Elsewhere, text-to-image systems continue to surprise. Many netizens have found joy and despair in experimenting with these systems, generating images by typing in text prompts. There are all sorts of hacks to adjust a model’s outputs; one of them, known as a “negative prompt,” allows users to steer the model away from the content described, in effect producing the opposite of the prompted image.
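    Mechanically, a negative prompt is usually implemented by substituting its embedding into the slot normally occupied by the empty “unconditional” prompt in classifier-free guidance. The toy NumPy sketch below (using made-up stand-in vectors rather than a real diffusion model) shows just the guidance arithmetic:

```python
import numpy as np

# Classifier-free guidance combines two denoiser outputs per step:
# one conditioned on the prompt and one on an "unconditional" (empty)
# prompt. A negative prompt replaces the empty prompt, so generation
# is pushed away from whatever the negative prompt describes.
def guided_prediction(cond, uncond, scale):
    return uncond + scale * (cond - uncond)

# Stand-in vectors for the model's noise predictions (not real data).
cond = np.array([1.0, 0.0])      # prediction for the main prompt
negative = np.array([0.0, 1.0])  # prediction for the negative prompt

out = guided_prediction(cond, negative, scale=2.0)
print(out)  # [ 2. -1.]  -- toward the prompt, away from the negative
```

    The larger the guidance scale, the harder the sampler pushes away from the negative prompt, which helps explain why odd, unintended imagery can surface when the negative slot is used creatively.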

    For instance, a famous Twitter thread by the digital artist known as Supercomposite demonstrates how strange text-to-image models can be beneath the surface.

    According to Supercomposite, negative prompts frequently yield random photos of AI-generated people. This bizarre behavior is simply another illustration of the strange properties these models may possess, which researchers are only now starting to explore.

    Recommended: Human Future with Sexist, Racist, and Brilliance Biased AIs

    In another event, former Google engineer Blake Lemoine claimed last week that he thought Google’s LaMDA chatbot was conscious and might have a soul. Sundar Pichai, the CEO of Google, countered the claims by saying, “We are far from it, and we may never get there,” but it is undeniable that AI development has progressed further than what we can currently see on the surface.

    CEO Pichai himself immediately admitted, “… I think it is the best assistant out there for conversational AI – you still see how broken it is in certain cases”.

    Why an AI regulation act on open-source software is right

    Only nature can function without regulatory acts; human society cannot. While the risks of the growing open-source trend are already visible, not only the EU but every nation needs to systematize the design, production, distribution, use, and development of all kinds of software.

    An AI regulation act on open-source software is right also because unregulated open-source software will have a significant impact on current political, economic, and social systems. With the growing use of open-source AI software, the risks of unintended effects, such as massive cyberattacks, breaches of individual and public data, and misuse of software for malicious purposes like supporting terrorism, may become inevitable.

    Cybercriminals and other bad actors are constantly looking for flaws in a product to exploit. If they succeed in cracking, for example, an open-source AI model protecting your company’s sensitive data, the consequences could be severe: from loss of reputation and property to a question mark over social, professional and, in the long run, national security as well.

    Every individual, company, organization, and nation therefore needs a solid understanding of why a regulation act on open-source AI software is a need of this time and essential for a future likely to be dominated and shaped by technological advances.