Category: AI

  • Cancer detection through NHS AI surpasses human capabilities, identifying tiny cancers missed by doctors


    Artificial intelligence has presented remarkable opportunities to reduce mistakes, aid medical staff and offer patient services around the clock. As AI tools improve, there’s increasing potential to use them extensively in interpreting medical images, X-rays, and scans, diagnosing medical issues, and planning treatments. A new development has emerged in cancer detection: using AI in the National Health Service has shown how technology can find very small signs of breast cancer that doctors might miss.

    Mia, an AI tool tested with NHS doctors, looked at mammograms from over 10,000 women and found 11 cases of breast cancer that doctors hadn’t spotted. These cancers were caught very early, when they were hard to see, showing how AI can help find cancer sooner.

    Barbara was one of the eleven patients who benefited from Mia’s advanced detection capabilities, and her case clearly demonstrates how AI can be instrumental in saving lives. Even though human radiologists didn’t catch it, Mia spotted Barbara’s 6mm tumor early on. This meant Barbara could get surgery quickly and needed only five days of radiotherapy. According to radiologists, patients with breast tumors smaller than 15mm have a good chance of survival, with a five-year survival rate of around 90%.


    BBC reported Barbara as saying that she was pleased the treatment was much less invasive than that of her sister and mother, who had previously also battled the disease.

    As Barbara had not experienced any noticeable symptoms, her cancer may not have been detected until her next routine mammogram three years later without the assistance of the AI tool.

    Mia and similar tools are expected to speed up the process of getting test results, potentially reducing the wait from 14 days to just three, as claimed by Kheiron, the developer. In the trial, Mia’s findings were always reviewed by humans. Currently, two radiologists examine each scan; the hope is that eventually one of them could be replaced by the AI tool, easing the reading workload.
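    For illustration only, here is a minimal Python sketch of that “AI as second reader” workflow, with made-up data structures (nothing here comes from Kheiron or the NHS): each scan gets one human read and one AI read, and either positive flag triggers a recall.

    ```python
    from dataclasses import dataclass

    @dataclass
    class ScanReading:
        scan_id: str
        radiologist_flag: bool  # human reader suspects cancer
        ai_flag: bool           # AI second reader (a Mia-like tool) suspects cancer

    def needs_recall(reading: ScanReading) -> bool:
        """Double-reading rule: recall the patient if either reader flags the scan.

        Mirrors the two-reader screening practice, with the AI standing in
        for the second human reader."""
        return reading.radiologist_flag or reading.ai_flag

    # Example: the human misses a subtle lesion but the AI flags it.
    reading = ScanReading("scan-001", radiologist_flag=False, ai_flag=True)
    print(needs_recall(reading))  # True -> patient recalled for assessment
    ```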

    Of the 10,889 women in the trial, only 81 chose not to have their scans reviewed by the AI tool, according to Dr. Gerald Lip, the clinical director of breast screening in northeast Scotland, who led the project.

    This shows that AI tools like Mia can reliably detect the signs of specific diseases if they are trained on enough diverse data. This involves feeding the program many different anonymized images of these signs, drawn from a wide range of people.
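    A minimal sketch of what such supervised training looks like in practice, assuming PyTorch and using random tensors as stand-ins for anonymized image patches and labels (illustrative only; it says nothing about Mia’s actual architecture):

    ```python
    import torch
    import torch.nn as nn

    # Stand-ins for anonymized grayscale image patches (1 x 64 x 64),
    # labelled 1 = sign of disease present, 0 = absent.
    images = torch.randn(32, 1, 64, 64)
    labels = torch.randint(0, 2, (32,)).float()

    model = nn.Sequential(
        nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Flatten(),
        nn.Linear(8 * 32 * 32, 1),  # one logit: disease sign present or not
    )
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.BCEWithLogitsLoss()

    for epoch in range(5):
        optimizer.zero_grad()
        logits = model(images).squeeze(1)
        loss = loss_fn(logits, labels)
        loss.backward()
        optimizer.step()
        print(f"epoch {epoch}: loss={loss.item():.3f}")
    ```

    Real systems train on millions of labelled scans and are validated against human readers, but the loop has this same shape: show diverse examples, compare predictions to labels, adjust the network.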

    Mia took six years to develop and train, according to Sarah Kerruish, Chief Strategy Officer of Kheiron Medical. It operates using cloud computing power from Microsoft and was trained on “millions” of mammograms sourced from “women all over the world”.

    Kerruish emphasized the importance of inclusivity in developing AI for healthcare, reportedly saying, “I think the most important thing I’ve learned is that when you’re developing AI for a healthcare situation, you have to build in inclusivity from day one.”

    But wait a moment! Let’s not overlook Mia’s imperfections. Remarkable as it is, the tool has flaws. One major limitation is its lack of access to patients’ medical histories, which means it might flag cysts that were already deemed harmless in previous scans.

    In addition, Mia’s machine learning feature is disabled due to current health regulations that focus on the risks and biases of machine-learning algorithms at the level of input data, algorithm testing, and decision models. So, it can’t learn from its mistakes or improve over time. Each update requires a fresh review, adding to the workload.

    It’s also worth noting that the Mia trial is just an initial test in one location. Although the University of Aberdeen validated the research independently, the results haven’t yet undergone peer review.

    Still, the Royal College of Radiologists acknowledges the potential of this technology. “These results are encouraging and help to highlight the exciting potential AI presents for diagnostics. There is no question that real-life clinical radiologists are essential and irreplaceable, but a clinical radiologist using insights from validated AI tools will increasingly be a formidable force in patient care,” said Dr Katharine Halliday, President of the Royal College of Radiologists.

    Dr. Julie Sharp, from Cancer Research UK, stresses the crucial role of technological innovation in healthcare, especially with the growing number of cancer cases.

    “More research will be needed to find the best ways to use this technology to improve outcomes for cancer patients,” she added.

    Furthermore, various other healthcare-related AI trials are underway across the UK. For example, Presymptom Health is developing an AI tool to analyze blood samples for early signs of sepsis before symptoms manifest.

    Mia has sparked hope among potential cancer patients; however, many such trials are still in their infancy, awaiting published results.

  • Two AIs talk to each other for the first time in a purely linguistic way


    It was a long-standing challenge to teach artificial intelligence to comprehend and execute tasks solely through verbal or written instructions. Researchers at the University of Geneva have now made a breakthrough, publishing their findings in Nature Neuroscience on Monday. The paper details an AI model that not only excels at tasks but can also communicate with another AI in a purely linguistic manner, enabling the latter to replicate the tasks.

    Humans have a special talent for learning new things just by hearing or reading about them, and then explaining them to others. This ability sets us apart from animals, which usually need lots of practice and can’t pass on what they’ve learned.

    In the world of computers, there’s a field called Natural Language Processing that tries to copy this human skill. It aims to make machines understand and respond to spoken or written words. This technology uses artificial neural networks, which are like simplified versions of the connections between neurons in our brains.

    But, even though we’ve made progress, we still don’t fully understand all the complicated brain processes involved. So while computers can understand language to some extent, they’re not quite as good as humans at grasping all the intricacies.

    Teaching AI to turn human language into action was therefore a hard problem. The UNIGE team has now created a model built from artificial neural networks that learned simple sensorimotor jobs, such as locating things or reacting to what it sees, and then explained those tasks in words to another AI.


    a,b, Illustrations of example trials as they might appear in a laboratory setting. The trial is instructed, then stimuli are presented with different angles and strengths of contrast. The agent must then respond with the proper angle during the response period. a, An example AntiDM trial where the agent must respond to the angle presented with the least intensity. b, An example COMP1 trial where the agent must respond to the first angle if it is presented with higher intensity than the second angle otherwise repress response. c, Diagram of model inputs and outputs. Sensory inputs (fixation unit, modality 1, modality 2) are shown in red and model outputs (fixation output, motor output) are shown in green. Models also receive a rule vector (blue) or the embedding that results from passing task instructions through a pretrained language model (gray). A list of models tested is provided in the inset. Description/Image Credit: nature.com
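    The decision rules in the caption are simple enough to state directly. Here is a toy Python rendering of the AntiDM and COMP1 rules as the caption defines them, with angles and intensities as plain numbers (a reading aid, not the study’s model):

    ```python
    def anti_dm(stimuli):
        """AntiDM: respond with the angle presented at the *lowest* intensity."""
        return min(stimuli, key=lambda s: s["intensity"])["angle"]

    def comp1(first, second):
        """COMP1: respond with the first angle only if it is presented with
        higher intensity than the second; otherwise repress the response."""
        return first["angle"] if first["intensity"] > second["intensity"] else None

    stimuli = [{"angle": 45, "intensity": 0.3}, {"angle": 135, "intensity": 0.8}]
    print(anti_dm(stimuli))               # 45 (the weaker stimulus)
    print(comp1(stimuli[0], stimuli[1]))  # None (response repressed)
    ```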

    Dr. Alexandre Pouget, a professor at UNIGE’s Faculty of Medicine, said this was a big deal because while AI can understand and make text or images, it’s not good at turning words into actions.

    “Currently, conversational agents using AI are capable of integrating linguistic information to produce text or an image. But, as far as we know, they are not yet capable of translating a verbal or written instruction into a sensorimotor action, and even less explaining it to another artificial intelligence so that it can reproduce it,” Pouget said.

    The AI model they built has a complex network of artificial neurons that mimic parts of the brain responsible for language.

    “We started with an existing model of artificial neurons, S-Bert, which has 300 million neurons and is pre-trained to understand language. We ‘connected’ it to another, simpler network of a few thousand neurons,” explains Reidar Riveland, a PhD student in the Department of Basic Neurosciences at the UNIGE Faculty of Medicine, and first author of the study.
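    The open-source sentence-transformers library provides S-BERT-style sentence embeddings, so the wiring Riveland describes can be gestured at in a few lines of Python. The checkpoint below is a small public stand-in rather than the 300-million-parameter model from the study, and the “few thousand neurons” head is left untrained:

    ```python
    import torch
    import torch.nn as nn
    from sentence_transformers import SentenceTransformer

    # Pretrained sentence encoder (small public stand-in for S-BERT).
    encoder = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dim embeddings

    # A small sensorimotor head mapping the instruction embedding to a
    # motor output (here, a single response angle).
    motor_head = nn.Sequential(
        nn.Linear(384, 256), nn.ReLU(),
        nn.Linear(256, 1),
    )

    instruction = ["respond to the angle presented with the least intensity"]
    embedding = torch.tensor(encoder.encode(instruction))  # shape (1, 384)
    print(motor_head(embedding))  # untrained output; training would shape this
    ```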

    In the experiment’s initial phase, the researchers trained the AI to mimic Wernicke’s area, responsible for language comprehension. Then, they moved to the next stage, where the AI was taught to replicate Broca’s area, aiding in speech production.

    Remarkably, all of this was done using standard laptop computers. The AI received written instructions in English, such as indicating directions or identifying brighter objects.

    Once the AI mastered these tasks, it could explain them to another AI, effectively teaching it the tasks. This was the first instance of two AIs conversing solely through language, according to the researchers.

    “Once these tasks had been learned, the network was able to describe them to a second network (a copy of the first) so that it could reproduce them. To our knowledge, this is the first time that two AIs have been able to talk to each other in a purely linguistic way,” says Alexandre Pouget, who led the research.

    This breakthrough could have a huge impact on robotics, according to Dr. Pouget. Imagine robots that can understand and talk to each other, making them incredibly useful in factories or hospitals.

    Dr. Pouget believes this could lead to a future where machines work together with humans in ways we’ve never seen before, making things faster and more efficient.

  • GPT-powered humanoid Figure 01 masters speaking and reasoning on the job


    A new breakthrough in artificial intelligence has been achieved through the collaboration of Figure and OpenAI. They’ve demonstrated the impressive abilities of their humanoid robot, Figure 01, in a groundbreaking video released on March 13.

    The progress made by Figure in building humanoid robots is truly impressive. Led by entrepreneur Brett Adcock, the company quickly gathered experts from top companies like Boston Dynamics, Tesla, Google DeepMind, and Archer Aviation. Their goal? To create the first general-purpose humanoid robot that’s commercially viable.

    The journey from idea to reality has been fast. By October, Figure 01 was already up and running, doing basic tasks on its own. By the end of the year, it could learn from watching and was ready to start working at BMW by mid-January.

    During a recent warehouse demonstration, we got a peek into what the future of robotics might look like. This demonstration happened at the same time as Figure announced some big news: they’ve successfully secured Series B funding and teamed up with OpenAI.

    Together, they’re working on creating advanced AI models designed specifically for humanoid robots.

    Adcock, Figure AI’s founder and CEO, wrote on social media that the collaboration aims to accelerate Figure’s commercial timeline by enhancing the capabilities of humanoid robots to process and reason from language.

    Adcock shared important details in the post, explaining that Figure 01’s cameras send data to a smart system trained by OpenAI.

    At the same time, Figure’s own networks process images quickly. OpenAI’s work contributes to the robot’s ability to understand spoken commands. This capability ensures that the robot can act precisely in response to verbal instructions.
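    Based only on that high-level description, the perception-to-action loop might be organized like the sketch below. Every function here is a hypothetical stub; Figure and OpenAI have not published this interface:

    ```python
    def capture_frame():
        """Hypothetical stub for the robot's onboard cameras."""
        return b"jpeg-bytes"

    def vlm_plan(frame, spoken_command):
        """Hypothetical stub for the OpenAI-trained vision-language model:
        an image plus a verbal instruction in, a high-level action plan out."""
        return ["locate apple", "grasp apple", "hand apple to person"]

    def execute(action):
        """Hypothetical stub for Figure's own fast visuomotor networks,
        which turn each high-level step into joint commands."""
        print(f"executing: {action}")

    def control_loop(spoken_command):
        frame = capture_frame()
        for action in vlm_plan(frame, spoken_command):
            execute(action)

    control_loop("Can I have something to eat?")
    ```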

    Adcock also made it clear that the demo wasn’t controlled remotely; the demonstration showed the robot working on its own.

    The progress is really impressive, with Adcock aiming for global-scale operations where humanoid robots play a big role.

    They’ll be utilizing this investment to fast-track Figure’s plans for deploying humanoid robots commercially, Adcock writes in the post. These funds will be directed towards various aspects, including AI training, manufacturing, deploying more robots, expanding the engineering team, and pushing forward with commercial deployment efforts.

    Correction: An earlier version of this article misspelled the title.

  • Blame is likely to fall solely on autonomous robots responsible for deaths


    A robot is any automatically operated machine that replaces human effort, following a set of instructions. It can be controlled remotely or have its own built-in control system. An autonomous robotic machine performs as a co-worker, aiding in tasks typically performed by humans, yet differing in appearance and manner of operation. Today, the global ratio of robots to humans in the manufacturing industry is 1 to 71, with over 3.4 million industrial robots worldwide. But the rapid evolution of robots’ autonomy and agency, and their increasing use in industry, has raised concerns among stakeholders and scientists.

    As robots become more sophisticated, they are performing a wider range of tasks with less human involvement. Studies have suggested that further advancements might lead to robots being held accountable for unfortunate incidents, particularly those causing harm to civilians. Dr. Rael Dawtry, who led a study at the University of Essex’s Department of Psychology, raises crucial questions about how responsibility should be assigned when accidents occur as robots take on riskier tasks with less human control.

    Interestingly, the study, published in The Journal of Experimental Social Psychology, found that simply labeling machines as “autonomous robots” rather than “machines” increased perceptions of agency and blame.

    Assigning blame promptly is the current tendency

    Blaming a robot for accidents might seem pointless, since robots don’t have feelings. A robot’s actions are controlled by its programming, which is the responsibility of humans such as designers and users. Just as blame falls on the manufacturer when a car has a defect, it often falls on those who oversee the safety of autonomous vehicles.

    Even though robots might seem to make their own decisions, people still tend to blame them, especially in situations involving harm. This blame extends even when the robot’s choices aren’t clear or when accidents occur due to human error or mechanical issues.

    People also assign that blame quickly, even when the robot’s actions weren’t intentional. This suggests that people attribute high levels of agency to robots, leading to increased blame when things go wrong, even though robots lack subjective experience.

    Why We Tend to Blame Robots

    Understanding why we blame robots involves looking at two things: agency and experience. Basically, we tend to see robots as having some level of human-like ability to think and act on their own, and we also sometimes think they can feel things like humans do.

    We’re pretty quick to think robots have agency, especially when they move around by themselves or seem human-like. This helps us understand their actions based on what we know about how people behave. If a robot does something unexpected or harmful, we’re more likely to see it as having agency and therefore being responsible for what it did.

    When we think a robot could have made different choices to avoid causing harm, we’re more likely to blame it. This is because we see agency as involving the ability to foresee what might happen and choose different actions. So, the more agency we think something has, the more we’re likely to hold it accountable.

    As for experience, it’s a bit less clear-cut. Sometimes we think robots can feel things, especially if they look human-like, but it’s not as strong as our tendency to see them as having agency. Still, considering both agency and experience can help us decide who’s to blame for what. If we see a robot as having experience, we might be more likely to blame it, especially if we think it should feel bad about what it did.

    Are robots solely responsible for deaths?

    Of course not!

    The study found that the more advanced robot was seen as having more control compared to the less advanced one. However, when it came to blaming for mistakes, the sophistication of the robot didn’t really matter. Instead, who was being blamed depended on the situation.

    The researchers examined how people judge the actions of robots and found that people tend to see robots as having control and are more likely to blame them for mistakes, because they believe robots have more power to make decisions.

    In addition, the researchers found that simply calling machines “autonomous robots” instead of just “machines” increases the perception of their control and the blame assigned to them. This suggests that people automatically attribute more human-like qualities, such as decision-making, to anything labeled an autonomous robot.

    Deciding who is responsible for accidents involving robots is a major topic in ethics and law, especially with technologies like autonomous weapons. These findings show that as robots become more independent, people may hold them more accountable for their actions, and may correspondingly see the humans involved as less in control.

    Whom to blame, then?

    The research suggests that people tend to blame robots more than machines for accidents, especially when robots are labeled as “autonomous.” Even when robots and machines had similar levels of experience, participants still leaned towards blaming robots more, with an increase in blame of 39% (p < .05). This indicates that how sophisticated and autonomous a robot appears influences how much blame it receives.

    However, despite the tendency to blame robots, humans were consistently blamed more: across accidents, humans received 63% more blame than robots (p < .05). This raises questions about how responsibility should be assigned in situations involving autonomous machines.

    The essence of this research, indeed, is less about arriving at a definitive answer to the question of whom to blame in situations involving autonomous machines and more about presenting the complexity of assigning responsibility in such scenarios.

    References:

    • https://www.sciencedirect.com/science/article/pii/S0022103123001397
    • https://www.euractiv.com/section/transport/opinion/whos-to-blame-when-your-autonomous-car-kills-someone/
    • https://www.bmj.com/content/363/bmj.k4791
    • https://apnews.com/article/technology-business-traffic-government-and-politics-a16c1aba671f10a5a00ad8155867ac92
  • Oracle introduces generative financial AI


    Oracle Corporation, an American multinational computer technology company, has introduced new artificial intelligence features customized for financial operations. These features are designed to help businesses optimize operations, compete effectively, and make informed decisions in today’s dynamic market environments.

    Working together with Cohere, an AI startup led by ex-Google staff, Oracle has invested heavily in advanced technologies such as Nvidia chips to ensure exceptional performance. Unlike typical AI chatbots, Oracle’s AI is tailored to meet specific business requirements.

    Steve Miranda, Oracle’s executive vice president, pointed to the company’s customer-centric approach, with over 50 AI applications integrated into Oracle Fusion Applications.

    “We are committed to delivering innovation that matters to our customers, and the combination of OCI, Fusion Applications, and the thousands of customers that use these applications daily enables us to continually improve our services and deliver best-in-class AI,” reports cited Miranda as saying.

    These applications cover finance, supply chain, HR, sales, marketing, and customer service, respecting data privacy and security.

    According to Miranda, the suite offers tools for generating reports, simplifying complex data, and aiding tasks like drafting job descriptions and negotiating with suppliers. Importantly, human oversight is incorporated to guarantee accuracy and reliability.

    “With additional embedded capabilities and an expanded extensibility framework, our customers can quickly and easily take advantage of the latest generative AI advancements to help increase productivity, reduce costs, expand insights, and improve the employee and customer experience,” Miranda explained.

    Oracle’s Guided Journeys framework is unique because it lets organizations customize their AI tools to fit their specific needs. This flexibility enables quick innovation and easy adaptation to changes in the market, all supported by Oracle Cloud’s strong infrastructure.




  • Authors are worried about the growing number of AI ‘scam’ books on Amazon


    The rise of AI-generated “scam” books on Amazon is causing headaches for dedicated authors. Many are reportedly finding fake versions of their own books alongside the real ones, which is confusing for readers and damaging to the authors’ reputations.

    In January, AI researcher Melanie Mitchell found a copycat version of her book, “Artificial Intelligence: A Guide for Thinking Humans,” on Amazon. It was written by someone using the name “Shumaila Majid.” Despite trying to mimic Mitchell’s ideas, the counterfeit book lacked the depth and quality of the original. Analysis confirmed that it was likely generated by AI.

    Such events are not uncommon in this Generative AI era. Renowned computer scientist Fei-Fei Li faced a similar situation when multiple summaries of her memoir flooded Amazon, according to WIRED. Despite disclaimers stating they were summaries, these books didn’t offer much value to readers.


    When Kara Swisher, a tech journalist, came out with her new book, Burn Book, there were reports of seemingly artificial intelligence-generated biographies of her suddenly coming up on Amazon. Swisher immediately responded, telling The New York Times’ Hard Fork podcast, “I sent (Amazon CEO) Andy Jassy a note and said, ‘What the f***?’ You’re costing me money.”

    Swisher was successful in getting the offending books removed from Amazon. But the problem of AI-generated scam books remains a widespread concern for authors. Most authors do not have the same direct contact with the CEO of Amazon via email.

    The problem is getting worse because AI can churn out these summaries quickly, flooding the market with low-quality, soulless content.

    Mary Rasenberger, CEO of the Authors Guild, a group advocating for writers, is reported as saying, “Scam books on Amazon have been a problem for years.” But she says the problem has multiplied in recent months.

    “Every new book seems to have some kind of companion book, some book that’s trying to steal sales.”

    It’s incredibly distressing, especially at a time when nations are officially considering AI as a potential threat to humanity.

    According to Lindsay Hamilton, a spokesperson for Amazon, the company has made changes regarding AI-generated content. They now require publishers using Kindle Direct Publishing to indicate if their content is AI-generated. Moreover, Amazon has put a limit on the number of titles that can be published in a day.

    Unfortunately, it’s still unclear how authors can legally fight back. While some argue that summaries are okay as long as they don’t directly copy the original text, others question whether these summaries are too similar to the original works. And within this context, there’s a separate faction deliberating on whether scraped data and articles should be permissible for training AI models.

    Authors and experts are calling on Amazon to take more prominent steps to stop these alleged scams and protect both authors and readers. For now, authors face the threat of their work being exploited, and without due caution, readers might find themselves purchasing subpar material. If you are an author and want to protect your work from AI, you may want to read the article below:

    Practical Tips for Authors to Protect Their Works from AI Use – The Authors Guild

  • AI could pose ‘extinction-level’ threat to humans, US State Department-commissioned report warns


    Whenever artificial intelligence comes up in conversation, we often talk about the risks it poses, like the possibility of it being used as a weapon, the difficulty of controlling really advanced AI and the danger of cyberattacks powered by AI.

    The nightmares finally appear to be turning into reality: a new report commissioned by the US State Department has cautioned that failure to intervene in the advancement of AI technologies could result in ‘catastrophic consequences’.

    Based on discussions and interviews with over 200 experts, including industry leaders, cybersecurity researchers, and national security officials, the report from Gladstone AI, released this week, highlights serious national security risks related to AI. It warns that advanced AI systems have the potential to cause widespread devastation that could threaten humanity’s existence.

    Jeremie Harris, CEO and co-founder of Gladstone AI, said the ‘unprecedented level of access’ his team had to officials in the public and private sector led to the startling conclusions. Gladstone AI said it spoke to technical and leadership teams from ChatGPT owner OpenAI, Google DeepMind, Facebook parent Meta and Anthropic.

    “Along the way, we learned some sobering things,” Harris said in a video posted on Gladstone AI’s website announcing the report.

    “Behind the scenes, the safety and security situation in advanced AI seems pretty inadequate relative to the national security risks that AI may introduce fairly soon.”

    The report identifies two main dangers of AI: the risk of its being weaponized, and the risk of losing control over it; both could have serious security consequences.

    “Though its effects have not yet been catastrophic owing to the limited capabilities of current AI systems, advanced AI has already been used to design malware, interfere in elections, and execute impersonation attacks,” the report reads.

    AI is already making a big economic impact, according to Harris, who said it could help us cure diseases, make new scientific discoveries, and tackle challenges we once thought impossible to overcome.

    The downside? “But it could also bring serious risks, including catastrophic risks, that we need to be aware of,” Harris cautioned.

    “And a growing body of evidence – including empirical research and analysis published in the world’s top AI conferences – suggests that above a certain threshold of capability, AIs could potentially become uncontrollable.”

    The report’s release has led to calls for immediate action from policymakers. The White House has emphasized President Biden’s executive order on AI as a crucial step in managing its risks.

    White House spokesperson Robyn Patterson said President Joe Biden’s executive order on AI is the “most significant action any government in the world has taken to seize the promise and manage the risks of artificial intelligence.”

    “The President and Vice President will continue to work with our international partners and urge Congress to pass bipartisan legislation to manage the risks associated with these emerging technologies,” Patterson said.

    However, experts agree that stricter measures are needed to address AI’s potential threats.

    About a year ago, Geoffrey Hinton, known as the “Godfather of AI,” left his job at Google and raised concerns about the AI technology he helped create. Hinton has suggested that there’s a 10% chance AI could lead to human extinction within the next 30 years.

    Hinton and many other leaders in the AI field, along with academics, signed a statement last June stating that preventing AI-related extinction risks should be a global priority.

    Despite pouring billions of dollars into AI investments, business leaders are increasingly worried about these risks. In a survey conducted at the Yale CEO Summit last year, 42% of CEOs said AI could potentially harm humanity within the next five to ten years.

    Gladstone AI’s report highlighted warnings from notable figures like Elon Musk, Lina Khan, Chair of the Federal Trade Commission, and a former top executive at OpenAI about the existential risks of AI. Some employees in AI companies share similar concerns privately, according to Gladstone AI.

    Gladstone AI also revealed that experts at cutting-edge AI labs were asked to share their personal estimates of the chance that an AI incident could cause “global and irreversible effects” in 2024. Estimates varied from 4% to as high as 20%, though the report noted these estimates were informal and likely biased.

    The speed of AI development, particularly Artificial General Intelligence (AGI), which could match or surpass human learning abilities, is a significant unknown. The report notes that AGI is seen as the main risk driver and mentions public statements from organizations like OpenAI, Google DeepMind, Anthropic, and Nvidia, suggesting AGI could be achieved by 2028, although some experts believe it’s much further away.

    Disagreements over AGI timelines make it challenging to create policies and safeguards. There’s a risk that if AI technology develops slower than expected, regulations could potentially be harmful.

    Gladstone AI warns starkly that the development of AGI and similar capabilities “would introduce catastrophic risks unlike any the United States has ever faced.” This could include scenarios like AI-powered cyberattacks crippling critical infrastructure or disinformation campaigns destabilizing society.

    The report also raises concerns about weaponized robotic applications, psychological manipulation, and AI systems seeking power and becoming adversarial to humans. Advanced AI systems might even resist being turned off in order to achieve their goals, it suggests.

    The report strongly suggests establishing a new AI agency and implementing emergency regulations to limit the development of overly capable AI systems. Moreover, it calls for stricter controls on the computational power used to train AI models to prevent misuse.


  • xAI will open-source Grok this week, Elon Musk says

    Elon Musk tweeted on Monday that xAI, his artificial-intelligence startup, will be sharing its powerful chatbot, Grok, with the public this week.

    This development comes after the billionaire’s recent legal battle with OpenAI. Earlier this month, Musk sued ChatGPT-maker OpenAI and its CEO Sam Altman, saying they had given up the startup’s original mission to develop artificial intelligence for the benefit of humanity and not for profit.

    Musk showed his interest in open-source AI during a podcast chat with computer scientist Lex Fridman in November. Shortly after, his company released the AI model to a small user base.

    In December, xAI launched Grok, its rival to ChatGPT, for Premium+ subscribers of the X social media platform. With Grok, the Tesla CEO aims to create an AI focused on maximum truth-seeking.

    By open-sourcing Grok, xAI joins a growing trend among companies, including Meta and Mistral, in sharing their AI technologies with the public.


  • Authors sued Nvidia over AI use of copyrighted works

    Three authors have filed a lawsuit against Nvidia, alleging that the company used their copyrighted books without permission to train its NeMo AI platform.

    Nvidia used a dataset of approximately 196,640 books, which included their own works, to train NeMo in replicating everyday written language, according to claims by Brian Keene, Abdi Nazemian, and Stewart O’Nan.

    This dataset was removed in October due to copyright infringement concerns.

    In the proposed class action, filed on Friday night in San Francisco federal court, the authors argue that by admitting to training NeMo on this dataset, Nvidia infringed upon their copyrights.

    They are seeking unspecified damages on behalf of individuals in the United States whose works were used to train NeMo over the past three years.

    Among works included in the lawsuit are Keene’s “Ghost Walk,” Nazemian’s “Like a Love Story,” and O’Nan’s “Last Night at the Lobster.”

    Nvidia has not yet publicly commented on the matter, and the authors’ legal representatives have yet to respond.

    This legal action adds Nvidia to the list of companies facing challenges over generative AI technology.

    The New York Times (NYT.N) also filed a lawsuit against OpenAI and Microsoft (MSFT.O) in December, alleging unauthorized use of millions of its articles. The lawsuit claims that the companies utilized the articles to train chatbots aimed at delivering information to users without proper permission.

    With the rise of AI, Nvidia remains a favorite among investors, thanks to its surging stock price and large market value.



  • MIT scientists enhance AI’s peripheral vision using insights from humans


    Researchers at MIT have made remarkable progress in equipping artificial intelligence (AI) with a form of peripheral vision similar to that of humans. The ability to detect objects outside the direct line of sight, albeit in less detail than at the center of gaze, is a fundamental aspect of human vision. Now, scientists aim to replicate this capability in AI systems to enhance their understanding of visual scenes and potentially improve safety measures in various applications.

    For this, the MIT team, led by Anne Harrington, developed an innovative image dataset to help AI models simulate peripheral vision. Through training with this dataset, they observed improvements in object detection capabilities.

    A surprising finding was that humans still outperformed the AI models at detecting objects in the periphery. Despite various attempts to enhance AI vision, including training from scratch and fine-tuning pre-existing models, machines consistently lagged behind human capabilities, particularly in detecting objects in the far periphery.

    “There is something fundamental going on here. We tested so many different models, and even when we train them, they get a little bit better but they are not quite like humans. So, the question is: What is missing in these models?” says Vasha DuTell, a co-author of the paper.

    This research points to the complexity of human vision and the challenge of replicating it artificially. Harrington and her team plan further research into these disparities, aiming to develop AI models that accurately predict human performance on peripheral-vision tasks.

    “Modeling peripheral vision, if we can really capture the essence of what is represented in the periphery, can help us understand the features in a visual scene that make our eyes move to collect more information,” DuTell explains.

    Moreover, this work emphasizes the importance of interdisciplinary collaboration between neuroscience and AI research. By drawing insights from human vision mechanisms, scientists can refine AI systems to better mimic human perception, leading to more reliable applications.

    The team initially utilized a method called the texture tiling model, commonly employed in studying human peripheral vision, to enhance accuracy. They then tailored this model to offer greater flexibility in altering images without requiring prior knowledge of the observer’s focal point.

    “That let us faithfully model peripheral vision the same way it is being done in human vision research,” says Harrington.
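    The real texture tiling model is considerably more involved, but its core intuition, that detail falls off with distance from the point of fixation, can be sketched with a simple eccentricity-dependent blur. This simplification is only an illustration of the idea, not the authors’ method, and assumes NumPy and SciPy:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def peripheral_blur(image, fovea_xy, max_sigma=8.0, bands=4):
        """Blur a grayscale image progressively with distance from a fixation
        point: a crude stand-in for texture tiling, where peripheral regions
        keep less detail than the fovea."""
        h, w = image.shape
        ys, xs = np.mgrid[0:h, 0:w]
        ecc = np.hypot(ys - fovea_xy[1], xs - fovea_xy[0])
        ecc /= ecc.max()  # normalized eccentricity in [0, 1]
        out = image.copy()
        for i in range(1, bands + 1):
            blurred = gaussian_filter(image, sigma=max_sigma * i / bands)
            mask = ecc > i / (bands + 1)  # farther bands get stronger blur
            out[mask] = blurred[mask]
        return out

    img = np.random.rand(128, 128)
    print(peripheral_blur(img, fovea_xy=(64, 64)).shape)  # (128, 128)
    ```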

    In brief, this research marks a significant progress in bridging the gap between human and artificial vision. Such advancements could revolutionize safety systems, particularly in contexts like driver assistance technology, where detecting hazards in the periphery is crucial.

    What’s even more interesting is that, while neural network models have advanced, they still cannot match human performance in this area. This underscores the need for AI research to draw more insights from the neuroscience of human vision. The database of images provided by the authors will greatly aid that future research.

    This work, set to be presented at the International Conference on Learning Representations, is supported, in part, by the Toyota Research Institute and the MIT CSAIL METEOR Fellowship.
