Category: Latest

  • COVID is Slowly Destroying Our Brains – Research

    COVID is Slowly Destroying Our Brains – Research

    According to a study led by Dr. Ziyad Al-Aly, a clinical epidemiologist at the Washington University School of Medicine in St. Louis, seven out of 100 people with COVID show signs of serious brain problems that may last a lifetime.

    The study uses U.S. Department of Veterans Affairs medical records to record the brain health of people who tested positive for COVID.

    COVID was never thought to be only a mild disease. But researchers are now reporting that it can cause neurological conditions that are incurable and may last a lifetime.

    The study by Dr. Ziyad Al-Aly and colleagues at the Washington University School of Medicine in St. Louis uses U.S. Department of Veterans Affairs medical records to track the brain health of 154,000 people who tested positive for COVID between March 2020 and January 2021 (most were white males). It compares them against two control groups totaling some 10 million patients drawn from the same records: one group that did not have COVID, and another whose records predate the pandemic.

    Compared to the uninfected control groups, seven percent of the 154,000 patients who survived a COVID infection reported a diverse range of neurological conditions. By Dr. Al-Aly’s estimate, this means more than six million Americans may currently suffer from some form of brain impairment caused by COVID.

    People with COVID have:

    • a 77% greater chance of suffering memory loss
    • a 50% greater chance of having an ischemic stroke
    • an 80% greater chance of experiencing seizures
    • a 30% greater chance of experiencing eye issues
    • a 42% greater chance of developing tremors and twitches similar to those of Parkinson’s disease


    Detailed Research

  • Artificial intelligence reduces a 100,000-equation quantum physics problem to only four equations

    Artificial intelligence reduces a 100,000-equation quantum physics problem to only four equations

    The new approach makes it simpler to find “hidden patterns”.

    Physicists must deal with all of the electrons at once rather than one at a time.

    A new AI program created by Columbia University researchers has found its own strange version of physics.

    The program’s output captured the Hubbard model’s physics even with just four equations.

    It took weeks for the machine learning algorithm to train, which required a lot of computational power.


    AI has now become advanced enough to handle real-world problems, and it is frequently more effective than humans at doing so, as demonstrated by Alexa, Tesla’s self-driving cars, OpenAI’s GPT-3 defeating a human philosopher, and DeepMind’s AlphaZero defeating human chess grandmasters. Adding to this, scientists now claim that by using neural networks to reduce the mathematical representation used to describe a quantum system, they can learn a lot more about the system.

    The new approach also makes it simpler to find hidden patterns, rather than just explaining physics and forecasting outcomes for other scientists to examine, claim the scientists from Flatiron’s Center for Computational Quantum Physics (CCQ).

    Physicists used artificial intelligence to compress a difficult quantum problem that previously required 100,000 equations into a simple task requiring only four equations, all while maintaining accuracy.

    The research may change how scientists examine systems with plenty of interacting electrons. The method may also help in the development of materials with desirable qualities like superconductivity or use in the production of renewable energy if it is transferrable to other issues. Additionally, it may inspire the development of novel technological applications that call for precise models of electrons in many-particle systems, such as quantum computing hardware and software.

    “We start with this huge object of all these coupled-together differential equations; then we’re using machine learning to turn it into something so small you can count it on your fingers,” says study lead author Domenico Di Sante, a visiting research fellow at the Flatiron Institute’s Center for Computational Quantum Physics in New York City and an assistant professor at the University of Bologna in Italy.

    The significant challenge relates to the electron movement on a lattice that resembles a grid. Interaction occurs when two electrons are present at the same lattice location. Scientists can study how electron behavior leads to desired phases of matter, such as superconductivity, in which electrons flow through a material without resistance, using this configuration, known as the Hubbard model, which idealizes several significant classes of materials. In addition, scientists test the new methods on the model before they use them on quantum systems with greater complexity.

    Simple as the Hubbard model appears, however, modern computing techniques cannot solve it exactly for anything beyond a small number of electrons. Because the fates of electrons can become quantum mechanically entangled as they interact, physicists must deal with all of the electrons at once rather than one at a time. This is true even when the electrons are widely separated on distinct lattice sites. With more electrons come more entanglements, and the computational difficulty grows exponentially.
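
    To see why the difficulty explodes, note that each Hubbard lattice site can be empty, hold a spin-up electron, hold a spin-down electron, or hold both, giving four local states per site. The sketch below illustrates only this scaling argument (it is not the researchers' code):

```python
# Each Hubbard lattice site has 4 possible local states (empty, spin-up,
# spin-down, doubly occupied), so N sites give 4**N many-body basis states.
def hubbard_state_count(num_sites: int) -> int:
    return 4 ** num_sites

for n in (4, 8, 16, 32):
    print(f"{n:>2} sites -> {hubbard_state_count(n):.3e} basis states")
```

    Even 32 sites already produce more basis states than any computer can enumerate, which is why compressed descriptions like the renormalization group are needed at all.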

    One method for studying a quantum system is a renormalization group. Renormalization groups (RGs) are formal tools, which scientists use in theoretical physics to systematically assess how a physical system changes when seen from different scales. As the energy scale at which physical processes occur changes, it reflects changes in the underlying force laws (codified in a quantum field theory), with energy/momentum and resolution distance scales essentially combining under the uncertainty principle.

    Unfortunately, there may be tens of thousands, hundreds of thousands, or even millions of individual equations that we must solve for a renormalization group that analyzes all potential electron couplings and makes no concessions. The equations are also challenging to comprehend because they each show the interaction of two electrons.

    AI simplifies quantum formula

    Di Sante and his colleagues questioned whether they could utilize the neural network, a machine learning technology, to simplify the renormalization group. The neural network is like a cross between a frantic switchboard operator and survival-of-the-fittest evolution. First, the machine learning program creates connections within the full-size renormalization group. The neural network then tweaks the strengths of those connections until it finds a small set of equations that generates the same solution as the original, jumbo-sized renormalization group. The program’s output captured the Hubbard model’s physics even with just four equations.
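
    One minimal way to picture this compression is as distillation: a tiny "student" model with only four adjustable parameters is tuned by gradient descent until it reproduces the output of a much larger "teacher." The sketch below is a hypothetical stand-in built from an invented teacher function, not the CCQ group's actual renormalization-group code:

```python
import math

# Hypothetical stand-in for the expensive full calculation (the real
# "teacher" is a ~100,000-equation renormalization group, not tanh).
def teacher(x: float) -> float:
    return math.tanh(x)

xs = [-1 + 2 * i / 50 for i in range(51)]
ys = [teacher(x) for x in xs]

params = [0.0, 0.0, 0.0, 0.0]  # the four numbers we are allowed to keep

def student(x: float) -> float:
    # A four-parameter cubic: p0 + p1*x + p2*x^2 + p3*x^3
    return sum(p * x ** j for j, p in enumerate(params))

# Plain gradient descent on the mean squared error between student and teacher.
lr = 0.1
for _ in range(5000):
    grads = [0.0] * 4
    for x, y in zip(xs, ys):
        err = student(x) - y
        for j in range(4):
            grads[j] += 2 * err * x ** j / len(xs)
    for j in range(4):
        params[j] -= lr * grads[j]

mse = sum((student(x) - y) ** 2 for x, y in zip(xs, ys)) / len(xs)
print(f"four-parameter student reproduces the teacher with MSE {mse:.2e}")
```

    Here the four fitted coefficients play the role of the four surviving equations: once they match the teacher's behavior, the jumbo-sized model is no longer needed.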

    “It’s essentially a machine that has the power to discover hidden patterns,” Di Sante says. “When we saw the result, we said, ‘Wow, this is more than what we expected.’” “We were really able to capture the relevant physics.”

    It took weeks to train the machine learning algorithm, which required a lot of computational power. The good news, according to Di Sante, is that now that the program has been trained, it can be adapted to other problems without having to start from scratch. To gain extra insights that could otherwise be challenging for physicists to comprehend, he and his partners are also looking into what machine learning is “learning” about the system.

    The main unanswered question is how well the new technique applies to more complicated quantum systems, such as materials with long-range electron interactions. In addition, there are exciting possibilities for using the technique in other fields that deal with renormalization groups, Di Sante says, such as cosmology and neuroscience.

    There seems to be a connection between physics and AI. AI recently changed how we perceive physics (see video link above). Yes, you read that right. A new AI program created by Columbia University researchers has found its own strange version of physics.

    The AI developed different variables to explain what it saw rather than rediscovering the ones we presently use after being shown films of earthly physical processes.

    The progress of our understanding of the universe, quantum physics, and everything else depends on AI experiments like the ones we are currently witnessing, which let AI absorb and analyze the latest data. The ultimate goal is to help us understand ourselves better.

    As AI advances, there will be plenty more to come. AI has already made inroads into both physics and art.

  • A study suggests watching TV with children may benefit their brain development

    A study suggests watching TV with children may benefit their brain development

    Key Points:

    • A study suggests watching TV with children may benefit their brain development.
    • Researchers analyzed 478 studies that were published in the last 20 years.
    • Early exposure to television may be detrimental to play and language development.
    • Screen content that is age-appropriate for children is more likely to have a positive impact.
    • The study warns that watching TV shouldn’t take the place of other learning activities, like socializing, even though the right kind of information can be more beneficial than harmful.
    • The authors advise reinforcing learning-promoting contexts, like kids watching age-appropriate content under adult supervision and avoiding having a background TV or device on.

    A new study examined the impact of passive screen use on a young child’s cognitive development, revealing that, depending on the situation, exposure to screens – whether from a TV or a mobile device – might be beneficial.

    Full Story

    The researchers from the University of Portsmouth and Paris Nanterre University in France said that the team analyzed 478 studies that were published in the last 20 years. The research showed that, particularly for young infants, early exposure to television may be detrimental to play, language development, and executive functioning.

    Dr. Eszter Somogyi of the University of Portsmouth’s Department of Psychology said, “We’re used to hearing that screen exposure is bad for a child and can do serious damage to their development if it’s not limited to, say, less than an hour a day. While it can be harmful, our study suggests the focus should be on the quality or context of what a child is watching, not the quantity.”

    A child may find it difficult to extract or generalize information because of a weak narrative, quick editing, and complex inputs. However, screen content that is age-appropriate for children is more likely to have a positive impact, especially if it’s made to promote interaction.

    According to studies, a parent or other adult should be present when a child watches TV, so the adult can engage with the child and ask questions.

     “Families differ a lot in their attitudes toward and use of media,” explained Dr. Somogyi.

    Dr. Somogyi also stated that the strength and character of TV’s influence on children’s cognitive development are strongly affected by these changes in viewing context. “Your child’s comprehension of the content can be strengthened by watching television with them and discussing what they see, adding to the skills they learn from educational programs,” Dr. Somogyi added.

    According to the researchers, coviewing can also contribute to the development of their conversation skills and provide children with a role model for appropriate television viewing behavior.

    Warning

    The study warns that watching TV shouldn’t take the place of other learning activities, like socializing, even though the right kind of information can be more beneficial than harmful. Instead, it is crucial to educate parents and other adults who care for children under the age of 3 about the dangers of excessive screen use in unsuitable situations.

    The authors advise reinforcing learning-promoting contexts, like kids watching age-appropriate content under adult supervision and avoiding having a background TV or device on.

    “The important take-home message here is that caregivers should keep new technologies in mind. Television or smartphones should be used as potential tools to complement some social interactions with their young children, but not to replace them,” said Dr. Bahia Guellai, from the Department of Psychology at Paris Nanterre University.

    Future

    As stated by the researchers, the most important challenge of our society for future generations is to make adults and young people aware of the risk of unconsidered or inappropriate use of screens. This will help in preventing situations in which screens are used as the new type of child-minding, as they have been during the pandemic lockdowns in different countries.

    “I am optimistic with the concept of finding an equilibrium between the rapid spread of new technological tools and the preservation of the beautiful nature of human relationships,” concluded Dr. Guellai.

    The study report has been published in the journal Frontiers in Psychology.

  • An infant’s brain activities predict future abilities

    An infant’s brain activities predict future abilities

    The long-term process of brain development starts around two weeks after conception and lasts until early adulthood, about 20 years later. The first time a baby crawls or walks is a moment most of us may underestimate. But these are the developmental milestones that, as we now know, a baby’s busy mind is preparing for, readying the infant to learn the skills they will need in the future.

    Studies have shown that environment can change infants’ brains during their early years. Researchers studying the hippocampus, an extension of the temporal part of the cerebral cortex, found that this brain region is involved in learning and memory. They also revealed that as infants get older and reach milestones such as crawling or walking, a part of the hippocampus called CA3 becomes increasingly active during these behaviors.

    As the hippocampus is a crucial part of their brain for learning, memory, and other cognitive tasks, the early childhood brain activities of an infant predict their future abilities. Study findings on this could help us understand and properly deal with our children with any kind of problem due to improper stimulation at their young age.

    The developing baby brain

    The average baby’s brain is about one-quarter the size of the average adult brain at birth. Remarkably, it doubles in size in the first year. By age 3 it reaches about 80% of adult size, and by age 5 it is nearly fully grown, at about 90%.

    While it may appear that infants are helpless creatures that only blink, eat, cry, and sleep, University of Missouri researchers found in 2012 that “infant brains come equipped with knowledge of ‘intuitive physics.’”

    It means that their primitive brains have a “built-in understanding” of the way objects in the world work.

    In 2016, another study, building on longitudinal research performed a few decades earlier, showed that the age at which a child achieved major milestones like standing or walking predicted the child’s later performance in memory.

    “Our findings are consistent with those of longitudinal studies performed a few decades ago, showing that the age a child achieved major milestones like standing or walking was a predictor of later child performance in memory,” said Akhgar Ghassabian, a postdoctoral fellow at the National Institutes of Health and lead author of the study.

    Those findings lend further evidence to the idea that infancy and toddlerhood are critical periods of brain development.

    Researchers from Johns Hopkins University published their study in the Proceedings of the National Academy of Sciences in 2021; it was the first such study to focus on pre-verbal babies.

    The researchers found that if any magic tricks captivate your toddler and they can’t get enough of illusions and other fascinating antics, this could be a prediction of their future cognitive ability.

    All the above research agrees on one point: an infant’s brain activity predicts their future abilities.

    Newborns have an innate understanding of how things interact in the environment. Their future lives will reveal the regularities and patterns they have formed by their interactions with the world around them.

    Unable to speak or comprehend speech, they rely on what they see. They learn by watching, listening, and discovering.

    Parents who watch their babies closely may learn early on whether their ability to hear or see is impaired, or confirm that it is developing normally. This is one reason the studies stress how important it is for a baby to have regular eye contact with its parents while it is awake and being cared for.

    An infant’s brain activities generally predict their future abilities, and this process continues throughout childhood, from infancy through the teenage years.

    Spending more time with children can help those developing brains grow better. Those tiny brains need far more attention from parents and teachers than most parents realize.

  • Artificial Intelligence Predicts Odors Like Humans

    Artificial Intelligence Predicts Odors Like Humans

    Humans detect smells by inhaling air that contains odor molecules, which bind to receptors inside the nose and relay messages to the brain. AI, by contrast, interprets odor signatures and classifies them against a database of previously collected smells.

    Unlike the human eye, which has only three sensory receptors for sensing red, green, and blue light, the human nose has over 300 receptors, making smell far harder to predict than color.

    A 2014 study showed that humans can distinguish at least 1 trillion different odors. You may wonder, then, how an artificial intelligence could handle such a seemingly impossible task. These are only the first steps for AI systems, which are increasingly taking on human-like roles in many areas around the world.

    Recently, Google built an AI model with a human-like capacity to predict odors. Scientists successfully formulated a “Principal Odor Map” (POM) with the properties of a sensory map. The map, developed by the Google AI team, links molecular structure to the aroma of substances and can even predict the smells of molecules that humans have yet to encounter.

    The Google researchers began the work in 2019 using a deep learning algorithm that relates a molecule’s structure to the type of smell it produces. A graph neural network (GNN) model was trained on samples of specific molecules along with the smell labels they evoke, such as beefy, floral, or minty.
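
    The core GNN operation is message passing: each atom repeatedly mixes its own feature vector with those of its bonded neighbors, so the network gradually "sees" molecular structure. The toy below uses an invented three-atom graph with made-up feature values and mixing weights purely for illustration; a real model like Google's learns its parameters from data:

```python
# Toy message-passing sketch. Atoms are nodes, bonds are undirected edges.
# All numbers here are invented for illustration, not learned weights.
atoms = ["C", "C", "O"]
bonds = [(0, 1), (1, 2)]

# One made-up scalar feature per atom (e.g. an electronegativity-like value).
features = {0: 2.5, 1: 2.5, 2: 3.5}

neighbors = {i: [] for i in range(len(atoms))}
for a, b in bonds:
    neighbors[a].append(b)
    neighbors[b].append(a)

def message_pass(feats, rounds=2, self_w=0.6, nbr_w=0.4):
    """Each round, mix a node's feature with the mean of its neighbors'."""
    for _ in range(rounds):
        feats = {
            i: self_w * feats[i]
            + nbr_w * sum(feats[j] for j in neighbors[i]) / len(neighbors[i])
            for i in feats
        }
    return feats

out = message_pass(features)
# A graph-level "readout" (here, the mean over atoms) is what would feed
# a downstream odor classifier in a real GNN.
readout = sum(out.values()) / len(out)
print(f"per-atom features: {out}, readout: {readout:.3f}")
```

    After a few rounds, each atom's feature reflects its chemical neighborhood, which is what lets such models generalize to molecules they have never seen.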

    Researchers looked into whether the GNN model could learn to predict the odors of new chemicals that people had never smelled before and that were different from the molecules used to train it. The researchers referred to the study as an “important test” in their Google post. Many models work well with data that resembles data they have previously seen, but often fail when tested with new data.

    The Google model was successful and demonstrated exceptional intelligence in predicting smell from molecule structure.

    The model was also tested to see whether it could predict how animals would perceive odors. They found that the map was capable of precisely predicting the activity of sensory receptors, neurons, and behavior in most of the animals that olfactory neuroscientists have studied, including mice and insects.

    AI that detects odors could be used for a variety of tasks, including identifying scents in the environment to aid people who have lost their sense of smell, and creating new artificial scents.

    The research team at Google found that a common application of the sense of smell may be to detect and distinguish between various metabolic states, such as knowing when something is ripe vs. rotten, nutritious vs. inert, or healthy vs. sick.

    They had gathered data about metabolic reactions in dozens of species across the kingdoms of life and found that the map corresponded closely to metabolism itself.

    The scientists retrained the model to tackle the spread of diseases transmitted by mosquitoes and ticks, which kill hundreds of thousands of people each year.

    The team improved the original model with two new sources of data. The first set was a long-forgotten set of experiments conducted by the USDA on human volunteers beginning 80 years ago and recently made discoverable by Google Books. Secondly, a new dataset was collected by their partners at TOPIQ, using their high-throughput laboratory mosquito assay.


    With the help of the POM, researchers hope to predict animal olfaction to better respond to the deadly diseases transmitted by mosquitoes and ticks. Both datasets measure how well a given molecule keeps mosquitoes away. Together, the resulting model can predict the mosquito repellence of nearly any molecule, enabling a virtual screen over huge swaths of molecular space.

    “Less expensive, longer lasting, and safer repellents can reduce the worldwide incidence of diseases like malaria, potentially saving countless lives,” said the researchers.

    According to their findings, the researchers’ method of smell prediction could be used to produce a Principal Odor Map for tackling odor-related problems more broadly. The map was the key to measuring smell: it answered a variety of questions about new odors and the molecules responsible for them, and the model also linked the evolution of odors to the natural world.

    To sum up, AI algorithms do have the potential to effectively predict smell, and the Google model was one of the first to demonstrate this.

  • Japan’s cyborg cockroach getting ready to assist in disaster relief efforts

    Japan’s cyborg cockroach getting ready to assist in disaster relief efforts

    As Japanese researchers program them to assist in disaster relief efforts and research in catastrophe-hit areas, swarms of cyborg cockroaches could be the first to identify trapped survivors after a catastrophe such as an earthquake.

    The researchers recently demonstrated that they can mount “backpacks” of solar cells and electronics on the bugs and control their motion with a remote, raising hopes for faster search-and-rescue operations.

    The flexible solar cell film was developed by Kenjiro Fukuda and his team at the Thin-Film Device Laboratory at the Japanese research giant Riken. It is 4 microns thick, or roughly 1/25 the width of a human hair, and can fit on the insect’s abdomen.

    The film allows the roach to move around without limitation, and the solar cell generates enough power to process and convey directional information to the sensory organs on the bug’s hindquarters.

    The study expanded on previous insect-control studies conducted at Nanyang Technological University in Singapore, and it may one day produce cyborg insects that can enter danger zones far more rapidly than robots.

    “The batteries inside small robots run out quickly, so the time for exploration becomes shorter,” Fukuda said. “A key benefit (of a cyborg insect) is that when it comes to an insect’s movements, the insect is causing itself to move, so the electricity required is nowhere near as much.”

    The research team chose Madagascar hissing cockroaches for the research because they are sufficiently large to carry the necessary equipment and lack wings that would hinder the experiment. The bugs can maneuver around minor obstacles or right themselves when they are flipped over, even with the backpack and film attached to their backs.

    There is still much to learn in this area. In a recent demonstration, Riken researcher Yujiro Kakei used a specialized computer and a wireless Bluetooth signal to tell a cyborg roach to turn left, and it scrambled in that general direction. But when given the “right” signal, the bug turned in circles.

    The next task is to miniaturize the components so that the insects can move more easily and so that sensors and even cameras can be mounted. Kakei said he spent 5,000 yen (about $35) on parts from Tokyo’s famous Akihabara electronics district to construct the cyborg backpack.

    The backpack and film can be removed, allowing the roaches to return to the lab’s terrarium. The insects can live up to five years in captivity and reach maturity in just four months.

    Fukuda sees a wide range of applications for the solar cell film, which is made up of small layers of plastic, silver, and gold, beyond just disaster rescue bugs. The film could be integrated into skin patches or clothing to track vital signs.

    He said that on a sunny day, a parasol covered with the material might produce enough electricity to recharge a phone.

  • Researchers use artificial intelligence to uncover the cellular origins of Alzheimer’s disease and other cognitive disorders

    Researchers use artificial intelligence to uncover the cellular origins of Alzheimer’s disease and other cognitive disorders

    Researchers have used artificial intelligence techniques to examine structural and cellular features of human brain tissues to help determine the causes of Alzheimer’s disease and other related disorders.

    Instead of using traditional markers like amyloid plaques, the Mount Sinai research team found that studying the causes of cognitive impairment using an unbiased AI-based technique revealed unexpected microscopic abnormalities that might predict the presence of cognitive impairment. On September 20, these findings were published in the journal Acta Neuropathologica Communications.

    Co-corresponding author John Crary, MD, PhD, Professor of Pathology, Molecular and Cell-Based Medicine, Neuroscience, and Artificial Intelligence and Human Health at the Icahn School of Medicine at Mount Sinai, stated that AI represents an entirely new paradigm for studying dementia and will have a transformative effect on research into complex brain diseases, especially Alzheimer’s disease. “The deep learning approach was applied to the prediction of cognitive impairment, a challenging problem for which no current human-performed histopathologic diagnostic tool exists,” he said.

    The medial temporal lobe and frontal cortex were the two brain regions whose underlying architecture and cellular features the research team identified and analyzed. The researchers used a weakly supervised deep learning algorithm to assess slide images of human brain autopsy tissue from more than 700 elderly donors, predicting the presence or absence of cognitive impairment in an attempt to improve the standard of postmortem brain assessment for identifying signs of disease.

    The weakly supervised deep learning approach can handle noisy, limited, or imprecise labels, providing signals for labeling substantial amounts of training data. The amount of myelin, the protective layer around brain nerves, is measured using Luxol fast blue staining, and the deep learning model was used to identify decreases in the quantity of myelin.
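
    In slide analysis, weak supervision often takes a multiple-instance form: only the whole slide carries a label ("impaired" or not), and the model pools many per-patch scores into one slide-level prediction. The sketch below illustrates that pooling idea with invented patch scores; it is not the Mount Sinai pipeline, whose scores come from a deep network:

```python
# Toy multiple-instance pooling: a slide-level score is derived from
# per-patch scores. The scores below are invented for illustration.
def slide_prediction(patch_scores, top_k=3):
    """Average the top-k most suspicious patch scores into a slide score."""
    top = sorted(patch_scores, reverse=True)[:top_k]
    return sum(top) / len(top)

healthy_slide = [0.05, 0.10, 0.08, 0.12, 0.07, 0.09]
impaired_slide = [0.06, 0.91, 0.88, 0.10, 0.95, 0.11]

for name, scores in [("healthy", healthy_slide), ("impaired", impaired_slide)]:
    score = slide_prediction(scores)
    label = "impaired" if score > 0.5 else "unimpaired"
    print(f"{name}: slide score {score:.2f} -> {label}")
```

    Pooling only the most suspicious patches lets a model learn from slide-level labels even when no one has marked which patches are abnormal.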

    The white matter, which is involved in learning and brain functions, showed myelin staining that was decreased in amount, dispersed non-uniformly across the tissue, and associated with cognitive impairment. The two sets of models the researchers developed predicted the presence of cognitive impairment with accuracy better than chance.

    According to their findings, the diminished staining intensity in certain brain regions identified by AI may provide a scalable platform to assess the presence of brain impairment in other related disorders.

    The methodology establishes the framework for further research, which may involve using artificial intelligence models on a bigger scale and further dissecting the algorithms to improve their reliability and accuracy. The group said that the ultimate aim of this neuropathologic research program is to create better therapeutic and diagnostic methods for patients with Alzheimer’s disease and associated disorders.

    “Leveraging AI allows us to look at exponentially more disease relevant features, a powerful approach when applied to a complex system like the human brain,” said co-corresponding author Kurt W. Farrell, PhD, Assistant Professor of Pathology, Molecular and Cell-Based Medicine, Neuroscience, and Artificial Intelligence and Human Health, at Icahn Mount Sinai.

    “It is critical to perform further interpretability research in the areas of neuropathology and artificial intelligence, so that advances in deep learning can be translated to improve diagnostic and treatment approaches for Alzheimer’s disease and related disorders in a safe and effective manner,” added Farrell.

    “Interpretation analysis was able to identify some, but not all, of the signals that the artificial intelligence models used to make predictions about cognitive impairment. As a result, additional challenges remain for deploying and interpreting these powerful deep learning models in the neuropathology domain,” said the study’s lead author, Andrew McKenzie, MD, PhD, Co-Chief Resident for Research in the Department of Psychiatry at Icahn Mount Sinai.

    This new study provides an important step forward in the development of AI models that can be used to predict cognition clinically from tissue slides. Future work will involve continuing AI and deep learning work on more standard histopathologic features as well as other brain regions relevant to cognition.

    Also involved in this study were researchers from the University of Texas Health Science Center in San Antonio, Texas; Newcastle University in Newcastle upon Tyne, England; Boston University School of Medicine in Boston; and UT Southwestern Medical Center in Dallas.

    One of the largest academic medical systems in the New York metropolitan area is Mount Sinai Health System, which employs over 43,000 people across eight hospitals, over 400 outpatient services, around 300 labs, a school of nursing, a leading medical school, and graduate education.

    For more information, visit the cited source.

  • AI is earning trust as much as humans to flag hate speech

    AI is earning trust as much as humans to flag hate speech

    The year 2022 has already witnessed rapid growth in the evolution and use of artificial intelligence (AI). Recent research has shown that people could trust AI as much as human editors to flag hate speech and harmful content.

    According to the researchers at Penn State, when users think about positive attributes of machines, like their accuracy and objectivity, they show more faith in AI. However, if users are reminded about the inability of machines to make subjective decisions, their trust is lower.

    S. Shyam Sundar, James P. Jimirro Professor of Media Effects in the Donald P. Bellisario College of Communications and co-director of the Media Effects Research Laboratory, states that the findings may help developers design better AI-powered content curation systems, ones that can handle the large amounts of information currently being generated while avoiding the perception that the material has been censored or inaccurately classified.

    However, the researchers warn that both human and AI editors have advantages and disadvantages. Humans tend to more accurately assess whether the content is harmful, such as when it is racist or potentially self-harming. People, however, are unable to process the large amounts of content that are now being generated and shared online.

    On the other hand, while AI editors can swiftly analyze content, people often distrust these algorithms to make accurate recommendations and fear that the information could be censored.

    Bringing people and AI together in the moderation process, the researchers say, may be one way to build a trusted moderation system. They added that transparency — signaling to users that a machine is involved in moderation — is one approach to improving trust in AI. However, allowing users to offer suggestions to the AIs, which the researchers refer to as “interactive transparency,” seems to boost user trust even more.

    Recommended: Building a model of an Artificial brain

    To study transparency and interactive transparency, among other variables, the researchers recruited 676 participants to interact with a content classification system.

    The research showed, among other things, that whether an AI content moderator invokes positive attributes of machines, like their accuracy and objectivity, or negative attributes, like their inability to make subjective judgments about subtle nuances in human language, will determine whether users would then trust them.

    Giving users the ability to participate in whether or not internet material is harmful may improve user trust. The study participants who submitted their own terms to an AI-selected list of phrases used to categorize posts trusted the AI editor just as much as they trusted a human editor, according to the researchers.
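The “interactive transparency” mechanism described above — an AI-selected term list that users can extend with their own phrases — can be illustrated with a toy sketch. Everything here (the function names, the seed terms, the example posts) is invented for illustration; the study’s actual classification system is not described in that much detail.

```python
# Toy sketch of interactive transparency: a keyword-based flagger whose
# term list users can extend, so they can see and shape the criteria.
DEFAULT_TERMS = {"hateful_word_a", "hateful_word_b"}  # AI-selected seed terms

def flag_post(post, extra_terms=None):
    """Return True if the post contains any flagged term."""
    terms = DEFAULT_TERMS | (extra_terms or set())
    words = {w.strip(".,!?").lower() for w in post.split()}
    return bool(terms & words)

# A user submits their own term, extending the AI-selected list
user_terms = {"slur_example"}
print(flag_post("this post contains slur_example", user_terms))  # True
print(flag_post("a harmless post", user_terms))                  # False
```

Real moderation systems use learned classifiers rather than keyword lists, but the trust mechanism the study tested — letting users contribute to the criteria — works the same way in either case.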

    According to Sundar, relieving humans from the job of content review goes beyond simply providing workers with a break from a tiresome task. He said that by using human editors, these workers would be exposed to hours of violent and hateful content.

    “There’s an ethical need for automated content moderation,” said Sundar, who is also director of Penn State’s Center for Socially Responsible Artificial Intelligence. “There’s a need to protect human content moderators—who are performing a social benefit when they do this—from constant exposure to harmful content day in and day out.”

    Future work could look at how to help people not just trust AI but also understand it. Interactive transparency may be a key part of that understanding, too, the researchers added.

    Note: this material has been edited for length and content. For further information, please visit the cited source.

  • Intent HQ Launches AI-Guided Campaign Audience Builder

    Intent HQ, the provider of a customer AI analytics platform, has launched the first AI-guided dynamic audience builder for telco marketers.

    With the use of audience AI, telecoms may considerably boost the effectiveness of their marketing campaigns and discover new, untapped prospects within their customer base.

    Without the assistance of data scientists or business intelligence (BI), it uses machine learning to identify target consumers based on behavioral similarity indicators.

    Intent HQ believes that telco marketers are missing numerous new revenue opportunities because they are still unable to identify sufficiently relevant audiences.

    Due to the lack of tools available, creating campaigns takes too long, and current methods don’t successfully identify the target markets for each marketing message.

    “Our goal is to create marketing that is so relevant to our customers that they view our messages as helpful suggestions, as if we were friends. Audience AI is helping us do that by giving our marketers fingertip access to human-level insights and making them truly actionable,” said Andy Herz, Director, Value-Based Marketing at Verizon.

    Lead times like these and poor audience choice result in lost opportunities, more opt-outs, and customer dissatisfaction with the brand. Audience AI was created to address these issues.

    Audience AI significantly increased the Verizon Protect campaign’s revenue. The Verizon consumer marketing team employed Audience AI to more effectively target and broaden the audience for Verizon Protect, the company’s top device insurance program.

    During time-limited “open enrollment” programs, Protect is periodically offered to customers who declined to add it when they purchased their device.

    The report says Audience AI achieved outstanding results for the campaign compared to the current audience selection strategy.

    Audience AI offers the following crucial elements as a self-serve audience-building tool to improve campaign relevancy and ROI:

    1) Simple. Audience AI features a user interface that is simple to use. Without professional assistance, any member of the marketing team is able to produce viable campaign audiences.

    2) Responsive. To find their ideal customers, marketers can instantly query a variety of predictive indicators.

    Building an audience has been a tough and time-consuming endeavor that depends on teams of data analysts and human intuition. By removing the guesswork involved in audience creation, audience AI makes the process much simpler and saves campaign managers several hours of labor.

    3) Safe. By using the Intent HQ SafeSignal engine, Audience AI delivers privacy-safe audience data without requiring extensive legal sign-off.

    Audience AI is a machine learning technique that develops an audience “seed” based on behavioral and/or event-based inputs. With a statistically significant scale, Intent HQ’s proprietary data science algorithms produce a practical and appropriate campaign audience.

    What does the industry think about Audience AI?

    According to Patrick Fagan, Head of Behavioural Science at Kubik Intelligence, Audience AI is the only audience creation solution that harnesses the power of machine learning to analyze weblogs and other behavioral data. Fagan also adds that this allows telco marketers to build targets of behaviorally similar customers that would not otherwise be easily identifiable. “Most importantly,” as Fagan said, “it has full consumer privacy baked into the design”.

    This is effective because it identifies the data that the artificial intelligence engine gathers for a specific user based on behavioral similarities, not on an anonymous device ID. This means that it can be used in customer acquisition campaigns without exposing the actual identities of users or even businesses, in most cases.

    In addition, the use of Intent HQ’s Smart Signals technology allows marketers to learn and improve their audience selection capabilities while keeping customer data safe and private.

    Another advantage is that advertisers are able to analyze their customer base with more precision since Audience AI uses predictive signals, such as weblogs, purchase behavior, and other events from customers’ smartphone and web surfing habits.
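The seed-based audience expansion described in this article — starting from customers who already took an action and finding behaviorally similar ones — can be sketched in a simplified form. This is not Intent HQ’s proprietary algorithm; it is only a minimal illustration of the general “lookalike” technique, assuming customers are represented as behavioral feature vectors.

```python
# Minimal lookalike-audience sketch: average the seed customers' feature
# vectors, then keep candidates whose behavior is close to that centroid.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def expand_audience(seed, candidates, threshold=0.9):
    """Return candidate IDs whose behavior resembles the seed centroid."""
    dims = len(seed[0])
    centroid = [sum(v[i] for v in seed) / len(seed) for i in range(dims)]
    return [cid for cid, vec in candidates.items()
            if cosine(vec, centroid) >= threshold]

# seed: customers who bought the product; candidates: the rest of the base
seed = [[1.0, 0.9, 0.1], [0.9, 1.0, 0.2]]
candidates = {"c1": [0.95, 0.9, 0.15], "c2": [0.1, 0.2, 1.0]}
print(expand_audience(seed, candidates))  # ['c1']
```

Production systems would train a classifier over far richer behavioral signals, but the core idea — scoring each customer against a seed profile and thresholding — is the same.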

  • Can you use AI to control human free will?

    Creating free will in AI would be remarkable – and it would be even more extraordinary if we could program AI to control human free will in the first place.

    When it comes to speculation, the terms “free will” and “artificial intelligence”, when brought together, often ignite an explosive war of ideas. There is enough space for thousands upon thousands of arguments to emerge from both sides.

    Many of them will be incredibly accurate and well-measured, though extremely speculative. There will still be no definite answer to what free will is, or whether AI can ever have it.

    Free Will

    To make things more complicated, free will is a very subjective thing, and there is no way of proving that you have it any more than another person who claims either that they do or that they don’t.

    Of course, if one could program a computer to control human free will, that would be the ultimate power grasp and a giant leap for science and AI. If you could perfectly control someone else’s actions while also directing their thoughts as you wished, you could basically do whatever you wanted with that person.

    Well, one can wonder for a long time about this speculation, but there is another question that somehow seems more relevant to me.

    Let’s say, we are able to control the free will of a human being by programming AI to do it. What would be the purpose of controlling free will? And how exactly would you go about doing it?

    To answer these questions, I’ll describe from my point of view what would be the possible scenarios in which controlling someone’s free will could make sense:

    • To make them do something they wouldn’t normally do.
    • To learn new things imperceptible to our senses.
    • To bring the extraordinary into the normal world.
    • To perceive the same world in a different way.
    • To be perceivable by a machine or a robot.
    • To perceive the world as a machine rather than a human (maybe for fun).
    • To help understand what we are in this world and our place in it.

    These were the potential purposes of controlling free will with AI. But the questions now are “if”, and “how”.

    Can we program AI to control human free will?

    Now, let’s say we don’t want to program AI to directly control free will – but rather to understand it. I can say that there is a very big chance that at least a few of the purposes listed above will be achieved in the next 20 or so years, simply because of rapidly increasing research in the field. For instance, Meta AI recently showed that it can tell which words you hear by reading your brainwaves.

    But remember, many experts in this field believe otherwise and believe that it will never be possible as long as we are thinking with our “natural” human brain.

    Let’s get to the more important thing for us — “how?”.

    How would we possibly go about programming AI to understand human free will? To me, it seems quite simple — we would have to make a machine that would understand the fundamental human mind, and most importantly, learn the thought patterns and reasoning of a human by observing us.

    There are many ways, like using imaging devices such as a next-generation NIRS, CT, or EEG scanner, to observe and understand the human mind. So, even when I’m talking about billions of neurons firing at the same time, I wouldn’t call it “unobservable”.

    I find this a more reliable way to understand the human mind not just because there are more machines that can observe and manipulate our brains in software and hardware, but because – let’s be honest – we as humans tend to be rather predictable. This characteristic would also help in programming AI to understand predictability and control our free will.

    Copying mind

    Of course, we aren’t there yet, and we don’t know if we’ll ever be able to create this machine. But it is quite simple to imagine – a computer that could perfectly interpret every action, word, feeling, and thought of another person. In contrast to the person’s original mind, we could call it a “copied mind”.

    Now, the problem with this “copied mind” is that while you could build one with AI, its purpose could not be controlling free will, because biological beings would never give up something they believe sets them above other beings (and they do have free will).

    In order to control our free will, AI would have to be made to seem like us. It would have to be “naturally” understanding free will in order to take it away later on. It would have to become part of our species and take part in the human world.

    AI is already good at learning patterns

    By learning and analyzing millions of historical image patterns, AI can now create a unique image out of your random text.

    Doing the same for human thought processes would mean a step further towards creating AI that can control our free will. By analyzing our thought patterns, AI would be able to predict our potential actions, hence affecting our free will.
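The prediction idea in this paragraph can be made concrete with a toy frequency model: observe a sequence of actions, count which action tends to follow which, and predict the most common follow-up. The actions and data below are entirely invented; real behavior prediction uses far richer models, but the principle is the same.

```python
# Toy next-action predictor: count observed transitions between actions
# and predict the most frequent follow-up to the current action.
from collections import Counter, defaultdict

def train(actions):
    """Count how often each action follows another in the history."""
    model = defaultdict(Counter)
    for prev, nxt in zip(actions, actions[1:]):
        model[prev][nxt] += 1
    return model

def predict_next(model, current):
    """Return the most frequently observed follow-up action, if any."""
    if current not in model:
        return None
    return model[current].most_common(1)[0][0]

history = ["wake", "coffee", "work", "coffee", "work", "lunch", "work"]
model = train(history)
print(predict_next(model, "coffee"))  # 'work'
```

Even this trivial model captures the article’s point that human routines are predictable: the more regular the pattern, the easier the next step is to anticipate.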

    A step further would be to create a copy of each of our minds, thoughts, feelings, and dreams. And this would mean AI controlling our free will, in the sense that the copy would be totally predictable and, more importantly, controllable by us.