Author: Britney Foster

  • A machine learning-assisted wearable sensing-actuation system could enable speech for individuals without vocal cords

    Great news for those lacking vocal cords!

    Researchers have developed a new throat patch that enables speech without use of the vocal cords. This machine learning-assisted wearable sensing-actuation system, described in a study published in Nature Communications on 12 March, translates throat-muscle movements into speech without requiring conventional vocal cord function.

    The new device is a flexible patch that attaches to the neck and can convert muscle movements into speech. Essentially, you can communicate without relying on your vocal cords!

    But here’s the really clever part: this patch not only senses throat movements associated with speech, but it also uses those movements to generate its own power. That means no need for batteries or charging!

    The device offers hope to individuals struggling with voice issues caused by damaged or paralyzed vocal cords, including those recovering from throat cancer surgery.

    Lead researcher Jun Chen, from the University of California, Los Angeles, got the idea after experiencing vocal strain during several hours of lecturing sessions, as reported by Live Science. He then began to imagine a way to solve this problem, to make it possible for a person to speak without using their vocal cords, also known as “vocal folds.”

    Motivated by this idea, Chen and his team worked hard to create a flexible patch that could help people who cannot speak or are recovering from a temporary vocal issue.

    The patch, which sticks onto the neck, senses throat muscle movements related to speech and turns them into electricity. What’s impressive is it works without needing batteries, making it easy to use every day.

    Made of five thin layers, including soft, flexible silicone with tiny magnets embedded inside it, the patch creates electrical signals when your throat muscles move. A machine-learning program then turns these signals into speech.

    A machine learning-assisted wearable sensing-actuation system could enable speech for individuals without vocal cords. Image Credit: Agencies/Jun Chen/University of California, Los Angeles

    In a demonstration of the innovative tech, eight individuals without speech difficulties tested an algorithm’s ability to translate electrical impulses from a patch into speech.

    The algorithm performed impressively, achieving around 95% accuracy in converting these impulses into understandable speech. Participants uttered phrases like “Merry Christmas” and “I hope your experiments are going well” while stationary, walking, and running.
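    The study’s actual pipeline isn’t reproduced in this article, but the core idea — mapping a sensed waveform onto one of a set of known phrases with a trained classifier — can be illustrated with a small sketch. Everything below is hypothetical: the signals are synthetic sine waves standing in for muscle-movement recordings, and the classifier is a generic scikit-learn model, not the authors’ algorithm.

```python
# Hypothetical sketch (not the study's code): classify throat-sensor
# waveforms into a small set of phrases. The "signals" here are synthetic
# sinusoids plus noise, standing in for real muscle-movement recordings.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
PHRASES = ["Merry Christmas", "I hope your experiments are going well"]

def synth_signal(label, n=128):
    # Each phrase gets a distinct base frequency, standing in for the
    # distinct muscle-movement pattern that phrase would produce.
    t = np.linspace(0, 1, n)
    base = np.sin(2 * np.pi * (3 + 4 * label) * t)
    return base + 0.3 * rng.standard_normal(n)

# 100 noisy examples of each "phrase"
X = np.array([synth_signal(lbl) for lbl in range(len(PHRASES)) for _ in range(100)])
y = np.array([lbl for lbl in range(len(PHRASES)) for _ in range(100)])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)
clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
acc = clf.score(X_te, y_te)
print(f"held-out accuracy: {acc:.2f}")
```

    On this deliberately easy synthetic task the classifier separates the two “phrases” almost perfectly; the real system faces far noisier, more similar signals (recall ‘make’ versus ‘Mark’), which is why the reported ~95% accuracy is notable.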

    Ziyuan Che, the lead author of the study from the University of California, Los Angeles, reported to AFP that certain words, such as ‘make’ and the name ‘Mark,’ which involve similar movements of throat muscles, could pose challenges for the patch in distinguishing between them.

    “But those two words usually appear in a long sentence like ‘I am going to make dinner,’ or ‘How you doing Mark?’,” Che added.

    Furthermore, in separate tests, participants were asked to either speak the sentences aloud or silently articulate them. Results showed that the algorithm effectively interpreted muscle movements in both scenarios, consistently generating the correct waveforms.

    Still, as Chen noted, testing was limited to eight people saying a few phrases, and the device has yet to be tried on people with speech disorders.

    Another limitation, Chen said, is that the current manufacturing process would need to be scaled up and made more efficient before large numbers of patches could be produced.

    Nearly a third of people suffer at least one voice disorder in their lifetime, according to the study.

    As a simpler, more user-friendly option than current assistive devices, the patch could change how people with voice problems communicate.

    However, Che believes that more advanced algorithms would allow the patch to translate larynx muscle movements “without the need of pre-recording the voice signals”. He also cautioned that it would be years before the prototype could potentially be used by patients.

    As of December, approximately 1 in 5 Americans surveyed had reported experiencing a voice disorder.

  • Total blame is likely to go solely to autonomous robots responsible for deaths

    A robot is any automatically operated machine that replaces human effort by following a set of instructions. It can be controlled remotely or have its own built-in control system. An autonomous robotic machine performs as a co-worker, aiding in tasks typically performed by humans, yet differing in appearance and manner of operation. Today, the global ratio of robots to humans in the manufacturing industry is 1 to 71, with over 3.4 million industrial robots worldwide. But the rapid evolution of robotic technology toward autonomy and agency, and its increasing use in industry, has raised concerns among stakeholders and scientists.

    As robots become more sophisticated, they are performing a wider range of tasks with less human involvement. Studies suggest that further advances might lead to robots being held accountable for unfortunate incidents, particularly those causing harm to civilians. Dr. Rael Dawtry, who led a study at the University of Essex’s Department of Psychology, raises crucial questions about how responsibility should be assigned when accidents occur as robots take on riskier tasks with less human control.

    Interestingly, the study, published in the Journal of Experimental Social Psychology, found that simply labeling machines as “autonomous robots” rather than “machines” increased perceptions of agency and blame.

    Assigning blame promptly is the current tendency

    Blaming robots for accidents might seem pointless, since they don’t have feelings. Robots’ actions are controlled by their programming, which is the responsibility of humans such as designers and users. Just as blame falls on the manufacturer when a car has a defect, it often falls on those who oversee the safety of autonomous vehicles.

    Even though robots might seem to make their own decisions, people still tend to blame them, especially in situations involving harm. This blame persists even when the robot’s choices aren’t clear, or when accidents stem from human error or mechanical failure.

    The tendency suggests that people attribute high levels of agency to robots, and therefore assign them more blame when things go wrong, even though robots lack subjective experience.

    Why We Tend to Blame Robots

    Understanding why we blame robots involves looking at two things: agency and experience. Basically, we tend to see robots as having some level of human-like ability to think and act on their own, and we also sometimes think they can feel things like humans do.

    We’re pretty quick to think robots have agency, especially when they move around by themselves or seem human-like. This helps us understand their actions based on what we know about how people behave. If a robot does something unexpected or harmful, we’re more likely to see it as having agency and therefore being responsible for what it did.

    When we think a robot could have made different choices to avoid causing harm, we’re more likely to blame it. This is because we see agency as involving the ability to foresee what might happen and choose different actions. So, the more agency we think something has, the more we’re likely to hold it accountable.

    As for experience, it’s a bit less clear-cut. Sometimes we think robots can feel things, especially if they look human-like, but it’s not as strong as our tendency to see them as having agency. Still, considering both agency and experience can help us decide who’s to blame for what. If we see a robot as having experience, we might be more likely to blame it, especially if we think it should feel bad about what it did.

    Are robots solely responsible for deaths?

    Of course not!

    The study found that the more advanced robot was seen as having more control compared to the less advanced one. However, when it came to blaming for mistakes, the sophistication of the robot didn’t really matter. Instead, who was being blamed depended on the situation.

    The research examined how people judge the actions of robots. It found that people tend to see robots as having more control and are therefore more likely to blame them for mistakes: the blame follows from the belief that robots have the power to make their own decisions.

    In addition, the researchers found that simply calling machines “autonomous robots” instead of just “machines” increases perceptions of both their control and their blameworthiness. This suggests that people automatically attribute human-like qualities, such as decision-making, to anything labeled an autonomous robot.

    Deciding who is responsible for accidents involving robots is a major topic in ethics and law, especially with technologies like autonomous weapons. These findings show that as robots become more independent, people may hold them more accountable for their actions, or come to see themselves as less powerful than the machines.

    Whom to blame, then?

    The research has suggested that people tend to blame robots more than machines for accidents, especially when robots are labeled as “autonomous.” Even when robots and machines had similar levels of experience, participants still leaned towards blaming robots more, with an increase in blame of 39% (p < .05). This indicates that how sophisticated and autonomous a robot appears influences how much blame it receives.

    However, despite the tendency to blame robots, humans were consistently blamed more than robots in accidents with humans being blamed 63% more than robots (p < .05). This raises questions about how responsibility should be assigned in situations involving autonomous machines.

    The essence of this research, indeed, is less about arriving at a definitive answer to the question of whom to blame in situations involving autonomous machines and more about presenting the complexity of assigning responsibility in such scenarios.

    References:

    • https://www.sciencedirect.com/science/article/pii/S0022103123001397
    • https://www.euractiv.com/section/transport/opinion/whos-to-blame-when-your-autonomous-car-kills-someone/
    • https://www.bmj.com/content/363/bmj.k4791
    • https://apnews.com/article/technology-business-traffic-government-and-politics-a16c1aba671f10a5a00ad8155867ac92
  • Oracle introduces generative financial AI

    Oracle Corporation, an American multinational computer technology company, has introduced new artificial intelligence features customized for financial operations. These features are designed to help businesses optimize operations, compete effectively, and make informed decisions in today’s dynamic market environments.

    Working together with Cohere, an AI startup led by ex-Google staff, Oracle has heavily invested in advanced technologies such as Nvidia chips to ensure exceptional performance. Unlike typical AI chatbots, Oracle’s AI is modified to meet specific business requirements.

    Steve Miranda, Oracle’s executive vice president, highlighted the company’s customer-centric approach, noting that more than 50 AI applications are integrated into Oracle Fusion Applications.

    “We are committed to delivering innovation that matters to our customers, and the combination of OCI, Fusion Applications, and the thousands of customers that use these applications daily enables us to continually improve our services and deliver best-in-class AI,” reports cited Miranda as saying.

    These applications cover finance, supply chain, HR, sales, marketing, and customer service, respecting data privacy and security.

    According to Miranda, the suite offers tools for generating reports, simplifying complex data, and aiding tasks like drafting job descriptions and negotiating with suppliers. Importantly, human oversight is built in to help ensure accuracy and reliability.

    “With additional embedded capabilities and an expanded extensibility framework, our customers can quickly and easily take advantage of the latest generative AI advancements to help increase productivity, reduce costs, expand insights, and improve the employee and customer experience,” Miranda explained.

    Oracle’s Guided Journeys framework is unique because it lets organizations customize their AI tools to fit their specific needs. This flexibility enables quick innovation and easy adaptation to changes in the market, all supported by Oracle Cloud’s strong infrastructure.



  • Authors are worried about the growing number of AI ‘scam’ books on Amazon

    The rise of AI-generated “scam” books on Amazon is causing headaches for dedicated authors. Many are reportedly finding fake versions of their own books alongside the real ones, which is confusing for readers and damaging to the authors’ reputations.

    In January, AI researcher Melanie Mitchell found a copycat version of her book, “Artificial Intelligence: A Guide for Thinking Humans,” on Amazon. It was written by someone using the name “Shumaila Majid.” Despite trying to mimic Mitchell’s ideas, the counterfeit book lacked the depth and quality of the original. Analysis confirmed that it was likely generated by AI.

    Such events are not uncommon in this Generative AI era. Renowned computer scientist Fei-Fei Li faced a similar situation when multiple summaries of her memoir flooded Amazon, according to WIRED. Despite disclaimers stating they were summaries, these books didn’t offer much value to readers.

    When Kara Swisher, a tech journalist, came out with her new book, Burn Book, there were reports of seemingly artificial intelligence-generated biographies of her suddenly coming up on Amazon. Swisher immediately responded, telling The New York Times’ Hard Fork podcast, “I sent (Amazon CEO) Andy Jassy a note and said, ‘What the f***?’ You’re costing me money.”

    Swisher was successful in getting the offending books removed from Amazon. But the problem of AI-generated scam books remains a widespread concern for authors. Most authors do not have the same direct contact with the CEO of Amazon via email.

    The problem is getting worse because AI can churn out these summaries quickly, flooding the market with low-quality, soulless content.

    Mary Rasenberger, CEO of the Authors Guild, a group advocating for writers, is reported as saying, “Scam books on Amazon have been a problem for years.” But she notes that the problem has multiplied in recent months.

    “Every new book seems to have some kind of companion book, some book that’s trying to steal sales.”

    It’s incredibly distressing, especially at a time when nations are officially considering AI as a potential threat to humanity.

    According to Lindsay Hamilton, a spokesperson for Amazon, the company has made changes regarding AI-generated content. They now require publishers using Kindle Direct Publishing to indicate if their content is AI-generated. Moreover, Amazon has put a limit on the number of titles that can be published in a day.

    Unfortunately, it’s still unclear how authors can legally fight back. While some argue that summaries are okay as long as they don’t directly copy the original text, others question whether these summaries are too similar to the original works. And within this context, there’s a separate faction deliberating on whether scraped data and articles should be permissible for training AI models.

    Authors and experts are calling on Amazon to take more prominent steps to stop these alleged scams and protect both authors and readers. For now, authors face the threat of their work being exploited, and without due caution, readers might find themselves purchasing subpar material. If you are an author and want to protect your work from AI, you may want to read the article below:

    Practical Tips for Authors to Protect Their Works from AI Use – The Authors Guild

  • AI could pose ‘extinction-level’ threat to humans, US state department reports

    Whenever artificial intelligence comes up in conversation, we often talk about the risks it poses, like the possibility of it being used as a weapon, the difficulty of controlling really advanced AI and the danger of cyberattacks powered by AI.

    The nightmares finally appear to be turning into reality: a new report commissioned by the US State Department has cautioned that failure to intervene in the advancement of AI technologies could result in ‘catastrophic consequences’.

    Based on discussions and interviews with over 200 experts, including industry leaders, cybersecurity researchers, and national security officials, the report from Gladstone AI, released this week, highlights serious national security risks related to AI. It warns that advanced AI systems have the potential to cause widespread devastation that could threaten humanity’s existence.

    Jeremie Harris, CEO and co-founder of Gladstone AI, said the ‘unprecedented level of access’ his team had to officials in the public and private sector led to the startling conclusions. Gladstone AI said it spoke to technical and leadership teams from ChatGPT owner OpenAI, Google DeepMind, Facebook parent Meta and Anthropic.

    “Along the way, we learned some sobering things,” Harris said in a video posted on Gladstone AI’s website announcing the report.

    “Behind the scenes, the safety and security situation in advanced AI seems pretty inadequate relative to the national security risks that AI may introduce fairly soon.”

    The report has identified two main dangers of AI: the risk of it being used as harmful weapons and the concern of losing control over it, which could have serious security consequences.

    “Though its effects have not yet been catastrophic owing to the limited capabilities of current AI systems, advanced AI has already been used to design malware, interfere in elections, and execute impersonation attacks,” the report reads.

    AI is already making a big economic impact, according to Harris, and could help us cure diseases, make new scientific discoveries, and tackle challenges we once thought impossible to overcome.

    The downside? “But it could also bring serious risks, including catastrophic risks, that we need to be aware of,” Harris cautioned.

    “And a growing body of evidence – including empirical research and analysis published in the world’s top AI conferences – suggests that above a certain threshold of capability, AIs could potentially become uncontrollable.”

    The report’s release has led to calls for immediate action from policymakers. The White House has emphasized President Biden’s executive order on AI as a crucial step in managing its risks.

    White House spokesperson Robyn Patterson said President Joe Biden’s executive order on AI is the “most significant action any government in the world has taken to seize the promise and manage the risks of artificial intelligence.”

    “The President and Vice President will continue to work with our international partners and urge Congress to pass bipartisan legislation to manage the risks associated with these emerging technologies,” Patterson said.

    However, experts argue that stricter measures are needed to address AI’s potential threats.

    About a year ago, Geoffrey Hinton, known as the “Godfather of AI,” left his job at Google and raised concerns about the AI technology he helped create. Hinton has suggested that there’s a 10% chance AI could lead to human extinction within the next 30 years.

    Hinton and many other leaders in the AI field, along with academics, signed a statement last June stating that preventing AI-related extinction risks should be a global priority.

    Despite pouring billions of dollars into AI investments, business leaders are increasingly worried about these risks. In a survey conducted at the Yale CEO Summit last year, 42% of CEOs said AI could potentially harm humanity within the next five to ten years.

    Gladstone AI’s report highlighted warnings from notable figures like Elon Musk, Lina Khan, Chair of the Federal Trade Commission, and a former top executive at OpenAI about the existential risks of AI. Some employees in AI companies share similar concerns privately, according to Gladstone AI.

    Gladstone AI also revealed that experts at cutting-edge AI labs were asked to share their personal estimates of the chance that an AI incident could cause “global and irreversible effects” in 2024. Estimates varied from 4% to as high as 20%, though the report noted these estimates were informal and likely biased.

    The speed of AI development, particularly Artificial General Intelligence (AGI), which could match or surpass human learning abilities, is a significant unknown. The report notes that AGI is seen as the main risk driver and mentions public statements from organizations like OpenAI, Google DeepMind, Anthropic, and Nvidia, suggesting AGI could be achieved by 2028, although some experts believe it’s much further away.

    Disagreements over AGI timelines make it challenging to create policies and safeguards. There’s a risk that if AI technology develops slower than expected, regulations could potentially be harmful.

    Gladstone AI has strictly warned that the development of AGI and similar capabilities “would introduce catastrophic risks unlike any the United States has ever faced.” This could include scenarios like AI-powered cyberattacks crippling critical infrastructure or disinformation campaigns destabilizing society.

    In addition, the report also raises concerns about weaponized robotic applications, psychological manipulation, and AI systems seeking power and becoming adversarial to humans. Advanced AI systems might even resist being turned off to achieve their goals, the report suggests.

    The report strongly suggests establishing a new AI agency and implementing emergency regulations to limit the development of overly capable AI systems. Moreover, it calls for stricter controls on the computational power used to train AI models to prevent misuse.


  • TomoDRGN empowers scientists to visualize proteins’ shape changes using algorithms

    Tomography-Derived Reconstructive Generative Network (tomoDRGN), the latest innovation in the field of molecular biology, is revolutionizing scientists’ ability to visualize proteins’ shape changes using advanced computational algorithms. MIT graduate student Barrett Powell and his colleagues have developed this new approach to understanding the structural dynamics of proteins within their native cellular environment.

    Nowadays, cryogenic electron tomography (cryo-ET) has emerged as a favored technique for studying proteins in their natural setting. By taking pictures of frozen cells from various angles, scientists can get a 3D view of protein structures. This is pretty cool because it lets researchers see exactly how and where proteins team up with each other, giving insights into their interactions within the cell.

    Looking at the current imaging technology for proteins in their natural setting, MIT graduate student Barrett Powell wondered if he could go one step further: what if we could witness molecular machines in motion?

    In a recent article published in Nature Methods, Powell introduced his invention, named tomoDRGN. The technique is designed to capture the structural variation observed in proteins within cryo-ET data, which stems from protein motions or interactions with different partners, a phenomenon known as structural heterogeneity.

    In traditional methods, proteins are typically imaged only once, in purified samples. In cryo-ET, by contrast, each protein is imaged multiple times from different angles, sometimes over 40 times. This posed a challenge for tomoDRGN because of the overwhelming amount of data. To overcome it, Powell ‘upgraded’ the cryoDRGN model to prioritize the highest-quality data.

    “By excluding some of the lower-quality data, the results were actually better than using all of the data – and the computational performance was substantially faster,” Powell says.

    When imaging the same protein multiple times, radiation damage can occur. As a result, the initial images are often clearer since they suffer less damage, according to Powell.
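    tomoDRGN’s actual implementation is not shown here, but the curation idea Powell describes — each particle is imaged dozens of times, earlier exposures carry less radiation damage, so training favors the earliest, cleanest images — can be sketched in a few lines. The data structure and the cutoff k below are illustrative assumptions, not the paper’s code.

```python
# Illustrative sketch (not tomoDRGN's code) of radiation-damage-aware
# data curation: group tilt images by particle and keep only the k
# earliest exposures, which suffered the least beam damage.
from dataclasses import dataclass, field

@dataclass
class TiltImage:
    particle_id: int
    exposure_order: int  # 0 = first image taken, least damaged
    pixels: bytes = field(default=b"", repr=False)  # placeholder for image data

def keep_least_damaged(images, k=8):
    """Return, for each particle, its k earliest (cleanest) tilt images."""
    by_particle = {}
    for img in images:
        by_particle.setdefault(img.particle_id, []).append(img)
    kept = []
    for imgs in by_particle.values():
        imgs.sort(key=lambda im: im.exposure_order)
        kept.extend(imgs[:k])
    return kept

# A particle imaged 40 times is reduced to its 8 cleanest views.
stack = [TiltImage(particle_id=1, exposure_order=i) for i in range(40)]
subset = keep_least_damaged(stack, k=8)
print(len(subset))  # 8
```

    Dropping the later, damage-degraded exposures shrinks the dataset and, as Powell reports, can improve both reconstruction quality and computational speed.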

    An interesting result of tomoDRGN came about when the researchers shared raw data showing ribosomes inside cells at near-atomic resolution. Powell applied tomoDRGN to this dataset, unveiling differences in the structure around ribosomal particles. Some of the ribosomes were found near a bacterial cell membrane, where they were participating in a process known as cotranslational translocation. This happens when a protein is being made and moved across a membrane simultaneously.

    In addition, tomoDRGN’s demonstrated capability for identifying uncommon structural states within protein populations highlights its accuracy and efficiency. For instance, in another experiment involving the protein apoferritin, tomoDRGN detected a small group of ferritin particles with iron bound to them, making up just 2 percent of the dataset. This discovery emphasizes the method’s ability to spot subtle variations that might be missed by traditional methods.

    Powell and his colleagues see many ways to use tomoDRGN to improve our understanding of how cells work. This finding, as they say, has opened the door to generating new theories about how ribosomes collaborate with crucial protein machinery responsible for moving proteins beyond the cell’s borders. They’re especially excited about exploring how it can help with studying ribosomes and other parts of cells.


  • The edited princess Kate photo probably wasn’t made with AI

    A manipulated photograph of Kate Middleton, the Princess of Wales, has stirred up controversy online. The image, displaying Kate with an elongated neck and altered facial features, ignited discussions about the potential involvement of artificial intelligence (AI) in its creation. However, experts remain doubtful about AI’s role.

    The photo emerged on various social media platforms and rapidly spread across the internet. Depicting Kate Middleton in an uncanny and distorted manner, it prompted speculations regarding the utilization of advanced AI algorithms for the alterations. Many speculated that an AI tool might have been employed to manipulate the image.

    This photograph, issued on Sunday, March 10, 2024, by Kensington Palace shows Kate, Princess of Wales, with her children, Prince Louis, left, Prince George and Princess Charlotte. The circled areas appear to show evidence of potential manipulation. Credit: Kensington Palace

    The photograph was initially released by Kensington Palace on the couple’s official Instagram account to commemorate Mother’s Day in the United Kingdom. Following standard protocol for official UK royal photographs, it was simultaneously distributed to news and photo agencies.

    The picture featured Kate surrounded by her children, Prince George, Princess Charlotte, and Prince Louis, appearing relaxed and joyful. It’s suggested that Prince William, credited with taking the photo, might have contributed to the laughter captured in the image.

    Controversies

    The release of the photograph sparked further controversy rather than quelling rumors. Several global news agencies retracted the image from circulation hours later, citing concerns of manipulation.

    The situation exacerbated existing public concerns following Kate’s surgery in January. Despite the palace’s announcement of a two to three-month recovery period, uncertainties surrounding Kate’s medical condition fueled speculation on social media, ranging from the insensitive to the outlandish.

    Questions about the authenticity of the Mother’s Day image surfaced on social media platforms, with eagle-eyed observers scrutinizing every detail. The Associated Press retracted the image, suggesting potential manipulation by the source, prompting similar actions from other international news agencies.

    This area on the jacket of the Princess of Wales appears to show a misaligned break along the zipper. Credit: Kensington Palace

    A closer examination of the photo revealed discrepancies, such as inconsistencies in Princess Charlotte’s sleeve cuff and misalignment in the zipper on Kate’s jacket. These findings raised doubts about the photo’s authenticity.

    The controversy surrounding the altered image has strained the relationship between the palace and media organizations. Transparency regarding the photo’s adjustments was lacking, damaging trust between the two parties.

    Why AI is Unlikely

    • Artifacts and Inconsistencies: Digital forensics experts have analyzed the photo and identified several inconsistencies that are unlikely to result from AI. These include pixel artifacts, irregularities in lighting, and distortions that do not align with typical AI-generated images.
    • Complexity of AI Algorithms: While AI has made significant strides in image manipulation, creating a convincing and consistent distortion like the one seen in the Kate photo would require sophisticated algorithms and extensive training data. The sudden appearance of this image without any prior examples raises doubts about its AI origin.
    • Human Expertise: Human photo editors and artists have long been adept at creating surreal and fantastical images. The techniques used in the Kate photo align more closely with traditional editing methods than with AI-generated content.

    While the princess issued a personal apology, attributing the alterations to her experimentation with editing as an amateur photographer, skepticism remains widespread. The incident has also cast a shadow over the royal family’s annual Commonwealth Day celebration, further complicating matters for the monarchy.

    Today, it’s harder to tell what’s real and what’s not in digital spaces as AI is gradually getting better and smarter. We know AI has become capable of altering pictures, so it’s important not to believe everything you see without proper verification.


  • xAI will open-source Grok this week, Elon Musk says

    Elon Musk tweeted on Monday that xAI, his artificial-intelligence startup, will be sharing its powerful chatbot, Grok, with the public this week.

    This development comes after the billionaire’s recent legal battle with OpenAI. Earlier this month, Musk sued ChatGPT-maker OpenAI and its CEO Sam Altman, saying they had given up the startup’s original mission to develop artificial intelligence for the benefit of humanity and not for profit.

    Musk showed his interest in open-source AI during a podcast chat with computer scientist Lex Fridman in November. Shortly after, his company released the AI model to a small user base.

    In December, xAI launched Grok, its rival to ChatGPT, for Premium+ subscribers of the X social media platform. Musk, who is also Tesla’s CEO, has said he aims for Grok to be a maximally truth-seeking AI.

    By open-sourcing Grok, xAI joins a growing trend among companies, including Meta and Mistral, in sharing their AI technologies with the public.

  • Authors sued Nvidia over AI use of copyrighted works

    Three authors have filed a lawsuit against Nvidia, alleging that the company used their copyrighted books without permission to train its NeMo AI platform.

    Nvidia used a dataset of approximately 196,640 books, which included their own works, to train NeMo in replicating everyday written language, according to claims by Brian Keene, Abdi Nazemian, and Stewart O’Nan.

    This dataset was removed in October due to copyright infringement concerns.

    In the proposed class action, filed on Friday night in San Francisco federal court, the authors argue that by training NeMo on this dataset, Nvidia infringed their copyrights.

    They are seeking unspecified damages on behalf of individuals in the United States whose works were used to train NeMo over the past three years.

    Among works included in the lawsuit are Keene’s “Ghost Walk,” Nazemian’s “Like a Love Story,” and O’Nan’s “Last Night at the Lobster.”

    Nvidia has not yet publicly commented on the matter, and the authors’ legal representatives have not yet responded.

    This legal action adds Nvidia to the list of companies facing challenges over generative AI technology.

    The New York Times (NYT.N) also filed a lawsuit against OpenAI and Microsoft (MSFT.O) in December, alleging unauthorized use of millions of its articles. The lawsuit claims that the companies utilized the articles to train chatbots aimed at delivering information to users without proper permission.

    With the rise of AI, Nvidia remains a popular choice for investors, as its stock price has surged and its market value has grown substantially.

  • MIT scientists enhance AI’s peripheral vision gaining insights from humans

    Researchers at MIT have made remarkable progress in equipping artificial intelligence (AI) with a form of peripheral vision similar to humans’. Peripheral vision, the ability to detect objects outside the direct line of sight, albeit with less detail than central vision, is a fundamental aspect of human sight. Now, scientists aim to replicate this capability in AI systems to enhance their understanding of visual scenes and potentially improve safety in various applications.

    To that end, the MIT team, led by Anne Harrington, developed an innovative image dataset designed to help AI models simulate peripheral vision. Models trained on this dataset showed improved object detection.

    A surprising finding was that humans still outperformed the AI models at detecting objects in the periphery. Despite various attempts to improve machine vision, including training models from scratch and fine-tuning pre-existing ones, the models consistently lagged behind human capabilities, particularly at detecting objects in the far periphery.

    “There is something fundamental going on here. We tested so many different models, and even when we train them, they get a little bit better but they are not quite like humans. So, the question is: What is missing in these models?” says Vasha DuTell, a co-author of the paper.

    This research points to the complexity of human vision and the challenge of replicating it artificially. Harrington and her team plan to investigate these disparities further, aiming to develop AI models that accurately predict human performance in peripheral vision tasks.

    “Modeling peripheral vision, if we can really capture the essence of what is represented in the periphery, can help us understand the features in a visual scene that make our eyes move to collect more information,” DuTell explains.

    Moreover, this work emphasizes the importance of interdisciplinary collaboration between neuroscience and AI research. By drawing insights from human vision mechanisms, scientists can refine AI systems to better mimic human perception, leading to more reliable applications.

    The team initially used a method called the texture tiling model, commonly employed in studying human peripheral vision, to model it more accurately. They then tailored the model so it could alter images more flexibly, without requiring prior knowledge of where the observer is looking.

    “That let us faithfully model peripheral vision the same way it is being done in human vision research,” says Harrington.
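    As a rough intuition for this kind of image transform, the sketch below blurs each pixel more aggressively the farther it sits from a chosen fixation point. This is a toy illustration only, not the texture tiling model the researchers used; the function name, the `scale` parameter, and the simple box blur are all illustrative assumptions.

```python
# Toy illustration of eccentricity-dependent detail loss, loosely inspired
# by peripheral-vision modeling. NOT the MIT team's texture tiling model:
# just a box blur whose radius grows with distance from a fixation point.

def peripheral_blur(image, fx, fy, scale=0.25):
    """Blur each pixel of a 2D grayscale image with a box filter whose
    radius grows with eccentricity (distance from fixation (fx, fy))."""
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Radius proportional to eccentricity: sharp at fixation,
            # increasingly coarse toward the periphery.
            ecc = ((x - fx) ** 2 + (y - fy) ** 2) ** 0.5
            r = int(ecc * scale)
            ys = range(max(0, y - r), min(h, y + r + 1))
            xs = range(max(0, x - r), min(w, x + r + 1))
            vals = [image[yy][xx] for yy in ys for xx in xs]
            out[y][x] = sum(vals) / len(vals)
    return out

# Two bright pixels: one at fixation (preserved), one far away (smeared).
img = [[0.0] * 16 for _ in range(16)]
img[8][8] = 1.0    # at fixation -> kept intact
img[0][15] = 1.0   # far periphery -> averaged over a 3x3 neighborhood
result = peripheral_blur(img, fx=8, fy=8)
print(result[8][8], result[0][15])  # 1.0 vs roughly 0.111
```

    A real peripheral-vision model pools texture statistics over regions that grow with eccentricity rather than simply averaging, but the overall pattern, full detail at the fixation point and progressively less toward the edges, is the same.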

    In brief, this research marks significant progress in bridging the gap between human and artificial vision. Such advancements could improve safety systems, particularly in contexts like driver assistance technology, where detecting hazards in the periphery is crucial.

    What’s even more interesting is that, while neural network models have advanced, they still cannot match human performance in this area. This underscores the need for AI research to draw insights from human vision neuroscience. The image dataset the authors provide will greatly aid that future research.

    This work, set to be presented at the International Conference on Learning Representations, is supported, in part, by the Toyota Research Institute and the MIT CSAIL METEOR Fellowship.
