Category: Latest

  • A common backyard insect inspires innovative device design


    Invention seldom goes as planned. In 1827, British pharmacist John Walker accidentally ignited a coated stick while experimenting with chemicals, and his chance discovery drove advances in matchstick technology. In much the same way, new research led by Penn State engineers has uncovered remarkable properties of brochosomes, tiny particles that leafhoppers secrete and coat themselves with, and is inspiring ideas for next-generation optical devices.

    Leafhoppers have long puzzled scientists with the way they use their brochosomes. These particles, resembling miniature soccer balls with hollow interiors, were first observed in the 1950s. By replicating the complex geometry of brochosomes, the researchers have now revealed their ability to absorb both visible and ultraviolet (UV) light.

    This is the first time “we are able to make the exact geometry of the natural brochosome,” Wong said, explaining that the researchers were able to create scaled synthetic replicas of the brochosome structures by using advanced 3D-printing technology.

    How did they figure this out?

    The team made a larger version of brochosomes, about 20,000 nanometers in size, using advanced 3D printing. They carefully copied the shape, structure, and pore arrangement of these particles to study them closely.

    Using a micro-Fourier-transform infrared (micro-FTIR) spectrometer, they examined how the scaled-up replicas interact with different bands of infrared light. Because the optical response of such structures scales with their size, probing the enlarged replicas with infrared light stands in for how natural-sized brochosomes handle ultraviolet and visible light, helping the team understand how these particles manipulate light.
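
    Conceptually, the analysis comes down to energy bookkeeping: light that is neither reflected nor transmitted must have been absorbed. Below is a minimal Python sketch of that calculation; the wavelength grid and spectra are made-up placeholders, not the team’s data.

    ```python
    import numpy as np

    # Placeholder spectra: reflectance R and transmittance T at each wavelength,
    # as an FTIR instrument would report them (values are illustrative only).
    wavelengths_um = np.linspace(2.5, 25.0, 500)     # mid-infrared range
    R = 0.05 + 0.02 * np.sin(wavelengths_um)         # fraction reflected
    T = 0.10 + 0.05 * np.cos(wavelengths_um / 2)     # fraction transmitted

    # Energy conservation: whatever is not reflected or transmitted is absorbed.
    A = 1.0 - R - T

    peak = wavelengths_um[np.argmax(A)]
    print(f"Strongest absorption at ~{peak:.1f} um (absorptance {A.max():.2f})")
    ```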

    In the future, the researchers plan to refine the production process so that synthetic brochosomes match the size of natural ones more closely. They also aim to explore other uses for synthetic brochosomes, such as encryption systems in which data can only be seen under specific light conditions.

    Replicating intricate brochosome geometry

    The key to unlocking the potential of brochosomes lies in their precise geometry. Although brochosomes have been known for decades, replicating them in the lab has been a tough challenge because of their intricate structure.

    The team overcame this hurdle using a two-photon polymerization 3D-printing method, producing synthetic brochosomes with remarkable optical properties. These faux brochosomes, though built at a larger scale, closely mimic the morphology of their natural counterparts.


    Leafhopper and its brochosomes. (A) An optical image of a leafhopper Gyponana serpenta. (B) A scanning electron microscopy (SEM) image of the leafhopper wing (highlighted area in panel A). (C and D) SEM images of brochosomes on the leafhopper wing, revealing their hollow buckyball-like geometry. (E) An SEM image showing the cross-section of a natural brochosome cleaved by the focused ion beam (FIB) technique. (F) The relationship between the diameter of brochosome through-holes and the diameter of brochosomes across different leafhopper species. Brochosome diameter and hole diameter were determined from the researchers’ experimental measurements and a literature source (18). The fitted dashed line indicates that the through-hole diameters are approximately 28% of the corresponding brochosome diameters. Description/Image Credit: pnas.org

    The consistency in brochosome geometry across leafhopper species is particularly intriguing. Regardless of the insect’s body size, brochosomes maintain a uniform diameter and pore size. This uniformity suggests an evolutionary advantage, enabling leafhoppers to effectively manipulate light to evade predators. By absorbing UV light and scattering visible light, brochosomes create an anti-reflective shield, reducing the insect’s visibility to UV-sensitive predators like birds and reptiles.

    Moreover, the densely packed arrangement of brochosomes on leafhopper wings further enhances their anti-reflective properties. Through careful experimentation and analysis, the researchers demonstrated how brochosomes minimize light reflection through both Mie scattering and through-hole absorption effects. These findings provide a physical basis for understanding leafhopper behavior and evolution.
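
    To connect this to the underlying optics: Mie scattering dominates when a particle’s circumference is comparable to the wavelength of light, which is captured by the dimensionless size parameter x = πd/λ. The sketch below evaluates x for an assumed natural brochosome diameter of 500 nanometers (an illustrative figure, not a measurement from this study) and applies the fitted 28% through-hole ratio noted in the figure caption above.

    ```python
    import math

    d_brochosome_nm = 500.0                  # assumed natural diameter (illustrative)
    d_hole_nm = 0.28 * d_brochosome_nm       # fitted ratio from the study: ~28%

    for label, wavelength_nm in [("UV", 350.0), ("green", 550.0)]:
        x = math.pi * d_brochosome_nm / wavelength_nm   # Mie size parameter
        print(f"{label} light ({wavelength_nm:.0f} nm): size parameter x = {x:.2f}")

    print(f"Predicted through-hole diameter: {d_hole_nm:.0f} nm")
    ```

    Size parameters of order one to a few for ultraviolet and visible light place such particles squarely in the Mie regime, consistent with the anti-reflective behavior described above.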

    Importance of this approach

    The implications of this discovery are far-reaching, according to the researchers. Mimicking nature’s design, bioinspired optical materials could revolutionize various fields, from invisible cloaking devices to more efficient solar energy harvesting.

    Lin Wang, the lead author of the study, highlights the potential for thermal invisibility cloaks based on leafhopper-inspired technology. By regulating light reflection, these devices could obscure thermal signatures, offering applications in military stealth or even consumer products.

    “Nature has been a good teacher for scientists to develop novel advanced materials,” Wang said. “In this study, we have just focused on one insect species, but there are many more amazing insects out there that are waiting for material scientists to study, and they may be able to help us solve various engineering problems. They are not just bugs; they are inspirations.”

    Stealth tech takes inspiration from backyard insect for invisibility innovation

    Inspired by leafhoppers, common insects found in backyards, researchers have started to develop a new generation of invisibility devices. Earlier this year, Chinese scientists from Zhejiang University introduced a game-changing technology called the ‘Guardian of Drone’: an intelligent aero-amphibious invisibility cloak.

    Credit: Zhejiang University

    As reported in Advanced Photonics on January 12, this drone smoothly integrates perception, decision-making, and execution. The key breakthrough lies in tunable metasurfaces, which enable precise control over scattering patterns across spatial and frequency domains through spatiotemporal modulation.


    Still, challenges remain in scaling up the production of synthetic brochosomes and exploring their further applications. Future research will focus on improving how they are made and finding new ways to use them, Wang said.

  • Two AIs talk to each other first time in a purely linguistic way


    Teaching artificial intelligence to comprehend and execute tasks solely through verbal or written instructions has been a long-standing challenge. A groundbreaking advance has now emerged from researchers at the University of Geneva (UNIGE), who published their findings in Nature Neuroscience on Monday. The paper details an unprecedented AI model that not only excels at tasks but can also communicate with another AI in a purely linguistic manner, enabling the latter to replicate those tasks.

    Humans have a special talent for learning new things just by hearing or reading about them, and then explaining them to others. This ability sets us apart from animals, which usually need lots of practice and can’t pass on what they’ve learned.

    In the world of computers, there’s a field called Natural Language Processing that tries to copy this human skill. It aims to make machines understand and respond to spoken or written words. This technology uses artificial neural networks, which are like simplified versions of the connections between neurons in our brains.

    But, even though we’ve made progress, we still don’t fully understand all the complicated brain processes involved. So while computers can understand language to some extent, they’re not quite as good as humans at grasping all the intricacies.

    Until now, teaching AI to understand human language well enough to act on it has been extremely difficult. The UNIGE team built an AI model from artificial neural networks, which loosely mimic the brain’s neurons. Their AI learned to perform simple tasks, such as locating things or reacting to what it sees, and then explained those tasks in words to another AI.


    a,b, Illustrations of example trials as they might appear in a laboratory setting. The trial is instructed, then stimuli are presented with different angles and strengths of contrast. The agent must then respond with the proper angle during the response period. a, An example AntiDM trial where the agent must respond to the angle presented with the least intensity. b, An example COMP1 trial where the agent must respond to the first angle if it is presented with higher intensity than the second angle otherwise repress response. c, Diagram of model inputs and outputs. Sensory inputs (fixation unit, modality 1, modality 2) are shown in red and model outputs (fixation output, motor output) are shown in green. Models also receive a rule vector (blue) or the embedding that results from passing task instructions through a pretrained language model (gray). A list of models tested is provided in the inset. Description/Image Credit: nature.com

    Dr. Alexandre Pouget, a professor at UNIGE’s Faculty of Medicine, said this was a big deal because while AI can understand and make text or images, it’s not good at turning words into actions.

    “Currently, conversational agents using AI are capable of integrating linguistic information to produce text or an image. But, as far as we know, they are not yet capable of translating a verbal or written instruction into a sensorimotor action, and even less explaining it to another artificial intelligence so that it can reproduce it,” Pouget said.

    The AI model they built has a complex network of artificial neurons that mimic parts of the brain responsible for language.

    “We started with an existing model of artificial neurons, S-Bert, which has 300 million neurons and is pre-trained to understand language. We ‘connected’ it to another, simpler network of a few thousand neurons,” explains Reidar Riveland, a PhD student in the Department of Basic Neurosciences at the UNIGE Faculty of Medicine, and first author of the study.
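
    To make that wiring concrete, here is a heavily simplified sketch in Python using PyTorch and the sentence-transformers library. The small SBERT variant, the layer sizes, and the toy instruction are stand-ins chosen for illustration (the study used a larger pretrained S-Bert and carefully designed psychophysics tasks), so treat this as a sketch of the architecture rather than the published model.

    ```python
    import torch
    import torch.nn as nn
    from sentence_transformers import SentenceTransformer

    # A pretrained language model stands in for language comprehension: it turns
    # a written instruction into a fixed-size embedding (small variant used here).
    language_model = SentenceTransformer("all-MiniLM-L6-v2")  # 384-dim embeddings

    class InstructedAgent(nn.Module):
        """A small sensorimotor network conditioned on a language embedding."""

        def __init__(self, instr_dim=384, sensory_dim=65, hidden_dim=256, motor_dim=33):
            super().__init__()
            self.instr_proj = nn.Linear(instr_dim, hidden_dim)  # language -> initial state
            self.rnn = nn.GRU(sensory_dim, hidden_dim, batch_first=True)
            self.motor_out = nn.Linear(hidden_dim, motor_dim)   # motor readout

        def forward(self, instr_embedding, sensory_stream):
            # Seed the recurrent state with the instruction embedding.
            h0 = torch.tanh(self.instr_proj(instr_embedding)).unsqueeze(0)
            states, _ = self.rnn(sensory_stream, h0)
            return self.motor_out(states)  # motor command at every timestep

    # Toy usage: embed a hypothetical instruction, run it on fake sensory input.
    instruction = ["respond in the direction of the dimmer stimulus"]
    embedding = torch.tensor(language_model.encode(instruction))  # shape (1, 384)
    sensory = torch.randn(1, 100, 65)                             # batch, time, channels
    motor = InstructedAgent()(embedding, sensory)                 # shape (1, 100, 33)
    print(motor.shape)
    ```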

    In the experiment’s initial phase, the researchers trained the AI to mimic Wernicke’s area, responsible for language comprehension. Then, they moved to the next stage, where the AI was taught to replicate Broca’s area, aiding in speech production.

    Remarkably, all of this was done using standard laptop computers. The AI received written instructions in English, such as indicating directions or identifying brighter objects.

    Once the AI mastered these tasks, it could explain them to another AI, effectively teaching it the tasks. This was the first instance of two AIs conversing solely through language, according to the researchers.

    “Once these tasks had been learned, the network was able to describe them to a second network, a copy of the first, so that it could reproduce them. To our knowledge, this is the first time that two AIs have been able to talk to each other in a purely linguistic way,” says Alexandre Pouget, who led the research.

    This breakthrough could have a huge impact on robotics, according to Dr. Pouget. Imagine robots that can understand and talk to each other, making them incredibly useful in factories or hospitals.

    Dr. Pouget believes this could lead to a future where machines work together with humans in ways we’ve never seen before, making things faster and more efficient.

  • Total blame is likely to go solely to autonomous robots responsible for deaths


    A robot is any automatically operated machine that replaces human effort by following a set of instructions. It can be controlled remotely or run on its own built-in control system. An autonomous robotic machine acts as a co-worker, aiding in tasks typically performed by humans, while differing in appearance and manner of operation. Today, the global ratio of robots to humans in the manufacturing industry is 1 to 71, with over 3.4 million industrial robots worldwide. But the rapid evolution of robotic technology, with growing autonomy and agency, and its increasing use in industry have raised concerns among stakeholders and scientists.

    As robots become more sophisticated, they are performing a wider range of tasks with less human involvement. Studies have suggested that further advancements might lead to robots being held accountable for unfortunate incidents, particularly those causing harm to civilians. Dr. Rael Dawtry, who led a study at the University of Essex’s Department of Psychology, raises crucial questions about how responsibility should be determined when accidents occur as robots take on riskier tasks with less human control.

    Interestingly, the study, published in The Journal of Experimental Social Psychology, found that simply labeling machines as “autonomous robots” rather than “machines” increased perceptions of agency and blame.

    The current tendency is to assign blame quickly

    Blaming a robot for accidents might seem pointless since they don’t have feelings. Robots’ actions are controlled by their programming, which is the responsibility of humans like designers and users. Similar to how blame falls on the manufacturer when a car has a defect, it often falls on those who oversee the safety of autonomous vehicles.

    Even though robots might seem to make their own decisions, people still tend to blame them, especially in situations involving harm. This blame extends even when the robot’s choices aren’t clear or when accidents occur due to human error or mechanical issues.

    Despite this, people tend to assign blame quickly, even if the robot’s actions weren’t intentional. This suggests that people attribute higher levels of agency to robots, leading to increased blame when things go wrong, even if the robots lack subjective experience.

    Why we tend to blame robots

    Understanding why we blame robots involves looking at two things: agency and experience. Basically, we tend to see robots as having some level of human-like ability to think and act on their own, and we also sometimes think they can feel things like humans do.

    We’re pretty quick to think robots have agency, especially when they move around by themselves or seem human-like. This helps us understand their actions based on what we know about how people behave. If a robot does something unexpected or harmful, we’re more likely to see it as having agency and therefore being responsible for what it did.

    When we think a robot could have made different choices to avoid causing harm, we’re more likely to blame it. This is because we see agency as involving the ability to foresee what might happen and choose different actions. So, the more agency we think something has, the more we’re likely to hold it accountable.

    As for experience, it’s a bit less clear-cut. Sometimes we think robots can feel things, especially if they look human-like, but it’s not as strong as our tendency to see them as having agency. Still, considering both agency and experience can help us decide who’s to blame for what. If we see a robot as having experience, we might be more likely to blame it, especially if we think it should feel bad about what it did.

    Are robots solely responsible for deaths?

    Of course not!

    The study found that the more advanced robot was seen as having more control compared to the less advanced one. However, when it came to blaming for mistakes, the sophistication of the robot didn’t really matter. Instead, who was being blamed depended on the situation.

    The research looked into how people judge the actions of robots. It found that people tend to see robots as having more control and are therefore more likely to blame them for mistakes, because they believe robots have more power to make decisions.

    In addition, the researchers found that simply calling machines “autonomous robots” instead of just “machines” increases perceptions of their control and blame. This suggests that people automatically assume autonomous robots have more human-like qualities, such as the ability to make decisions.

    Deciding who is responsible for accidents involving robots is a major topic in ethics and law, especially with technologies like autonomous weapons. These findings show that as robots become more independent, people may hold them more accountable for their actions, or come to see themselves as having less control than the machines do.

    Whom to blame, then?

    The research has suggested that people tend to blame robots more than machines for accidents, especially when robots are labeled as “autonomous.” Even when robots and machines had similar levels of experience, participants still leaned towards blaming robots more, with an increase in blame of 39% (p < .05). This indicates that how sophisticated and autonomous a robot appears influences how much blame it receives.

    However, despite the tendency to blame robots, humans were consistently blamed more than robots in accidents, receiving 63% more blame (p < .05). This raises questions about how responsibility should be assigned in situations involving autonomous machines.

    The essence of this research, indeed, is less about arriving at a definitive answer to the question of whom to blame in situations involving autonomous machines and more about presenting the complexity of assigning responsibility in such scenarios.

    References:

    • https://www.sciencedirect.com/science/article/pii/S0022103123001397
    • https://www.euractiv.com/section/transport/opinion/whos-to-blame-when-your-autonomous-car-kills-someone/
    • https://www.bmj.com/content/363/bmj.k4791
    • https://apnews.com/article/technology-business-traffic-government-and-politics-a16c1aba671f10a5a00ad8155867ac92
  • Oracle introduces generative financial AI


    Oracle Corporation, an American multinational computer technology company, has introduced new artificial intelligence features customized for financial operations. These features are designed to help businesses optimize operations, compete effectively, and make informed decisions in today’s dynamic market environments.

    Working together with Cohere, an AI startup led by ex-Google staff, Oracle has invested heavily in advanced technologies such as Nvidia chips to ensure exceptional performance. Unlike typical AI chatbots, Oracle’s AI is tailored to meet specific business requirements.

    Steve Miranda, Oracle’s executive vice president, pointed to the company’s customer-centric approach, with over 50 AI applications integrated into Oracle Fusion Applications.

    “We are committed to delivering innovation that matters to our customers, and the combination of OCI, Fusion Applications, and the thousands of customers that use these applications daily enables us to continually improve our services and deliver best-in-class AI,” reports cited Miranda as saying.

    These applications cover finance, supply chain, HR, sales, marketing, and customer service, respecting data privacy and security.

    According to Miranda, the platform offers tools for generating reports, simplifying complex data, and aiding tasks like drafting job descriptions and negotiating with suppliers. Importantly, human oversight is incorporated to help ensure accuracy and reliability.

    “With additional embedded capabilities and an expanded extensibility framework, our customers can quickly and easily take advantage of the latest generative AI advancements to help increase productivity, reduce costs, expand insights, and improve the employee and customer experience,” Miranda explained.

    Oracle’s Guided Journeys framework is unique because it lets organizations customize their AI tools to fit their specific needs. This flexibility enables quick innovation and easy adaptation to changes in the market, all supported by Oracle Cloud’s strong infrastructure.




  • TomoDRGN empowers scientists to visualize proteins’ shape changes using algorithms


    tomoDRGN (tomographic Deep Reconstructing Generative Networks), the latest innovation in molecular biology, is revolutionizing scientists’ ability to visualize proteins’ shape changes using advanced computational algorithms. MIT graduate student Barrett Powell and his colleagues developed this new approach to understand the structural dynamics of proteins within their native cellular environment.

    Nowadays, cryogenic electron tomography (cryo-ET) has emerged as a favored technique for studying proteins in their natural setting. By taking pictures of frozen cells from various angles, scientists can build a 3D view of protein structures. This lets researchers see exactly how and where proteins team up with each other, giving insights into their interactions within the cell.

    Given the current imaging technology for proteins in their natural setting, Powell wondered whether he could go one step further: What if we could witness molecular machines in motion?

    In a recent article published in Nature Methods, Powell introduced his invention, named tomoDRGN. This technique is designed to capture the structural variation observed in proteins within cryo-ET data, which stems from protein motions or interactions with different partners, an occurrence known as structural heterogeneity.

    In traditional methods, proteins are typically imaged only once in purified samples. However, in cryo-ET, each protein is imaged multiple times from different angles, sometimes over 40 times. This presented a challenge for tomoDRGN due to the overwhelming amount of data. To overcome this problem, Powell ‘upgraded’ the cryoDRGN model to prioritize the highest-quality data.

    “By excluding some of the lower-quality data, the results were actually better than using all of the data – and the computational performance was substantially faster,” Powell says.

    When imaging the same protein multiple times, radiation damage can occur. As a result, the initial images are often clearer since they suffer less damage, according to Powell.
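
    A toy sketch of that kind of quality gating is shown below, assuming hypothetical per-tilt dose metadata; the arrays and function here are invented for illustration, and tomoDRGN’s actual handling is more involved.

    ```python
    import numpy as np

    def keep_best_tilts(tilt_images, cumulative_dose, keep_fraction=0.5):
        """Keep the tilts acquired earliest (lowest accumulated electron dose),
        since later exposures of the same particle suffer more radiation damage."""
        order = np.argsort(cumulative_dose)            # lowest dose first
        n_keep = max(1, int(len(order) * keep_fraction))
        kept = order[:n_keep]
        return tilt_images[kept], kept

    # Toy usage: 40 tilt images of one particle, 32x32 pixels each.
    images = np.random.rand(40, 32, 32)
    dose = np.linspace(2.0, 80.0, 40)                  # accumulated e-/A^2 per tilt
    best_images, kept_idx = keep_best_tilts(images, dose, keep_fraction=0.25)
    print(best_images.shape)                           # (10, 32, 32)
    ```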

    An interesting result of tomoDRGN came about when the researchers shared raw data showing ribosomes inside cells at near-atomic resolution. Powell applied tomoDRGN to this dataset, unveiling differences in the structure around ribosomal particles. Some of the ribosomes were found near a bacterial cell membrane, where they were participating in a process known as cotranslational translocation. This happens when a protein is being made and moved across a membrane simultaneously.

    In addition, tomoDRGN’s demonstrated capability for identifying uncommon structural states within protein populations highlights its accuracy and efficiency. For instance, in another experiment involving the protein apoferritin, tomoDRGN detected a small group of ferritin particles with iron bound to them, making up just 2 percent of the dataset. This discovery emphasizes the method’s ability to spot subtle variations that might be missed by traditional methods.

    Powell and his colleagues see many ways to use tomoDRGN to improve our understanding of how cells work. This finding, as they say, has opened the door to generating new theories about how ribosomes collaborate with crucial protein machinery responsible for moving proteins beyond the cell’s borders. They’re especially excited about exploring how it can help with studying ribosomes and other parts of cells.


  • The edited princess Kate photo probably wasn’t made with AI

    A manipulated photograph of Kate Middleton, the Princess of Wales, has stirred up controversy online. The image, displaying Kate with an elongated neck and altered facial features, ignited discussion about the potential involvement of artificial intelligence (AI) in its creation. However, experts remain doubtful about AI’s role.

    The photo emerged on various social media platforms and rapidly spread across the internet. Depicting Kate Middleton in an uncanny and distorted manner, it prompted speculations regarding the utilization of advanced AI algorithms for the alterations. Many speculated that an AI tool might have been employed to manipulate the image.

    This photograph, issued on Sunday, March 10, 2024, by Kensington Palace shows Kate, Princess of Wales, with her children, Prince Louis, left, Prince George and Princess Charlotte. The circled areas appear to show evidence of potential manipulation. Credit: Kensington Palace

    The photograph was initially released by Kensington Palace on the couple’s official Instagram account to commemorate Mother’s Day in the United Kingdom. Following standard protocol for official UK royal photographs, it was simultaneously distributed to news and photo agencies.

    The picture featured Kate surrounded by her children, Prince George, Princess Charlotte, and Prince Louis, appearing relaxed and joyful. It’s suggested that Prince William, credited with taking the photo, might have contributed to the laughter captured in the image.

    Controversies

    The release of the photograph sparked further controversy rather than quelling rumors. Several global news agencies retracted the image from circulation hours later, citing concerns of manipulation.

    The situation exacerbated existing public concerns following Kate’s surgery in January. Despite the palace’s announcement of a two to three-month recovery period, uncertainties surrounding Kate’s medical condition fueled speculation on social media, ranging from the insensitive to the outlandish.

    Questions about the authenticity of the Mother’s Day image surfaced on social media platforms, with eagle-eyed observers scrutinizing every detail. The Associated Press retracted the image, suggesting potential manipulation by the source, prompting similar actions from other international news agencies.

    This area on the jacket of the Princess of Wales appears to show a misaligned break along the zipper. Credit: Kensington Palace

    A closer examination of the photo revealed discrepancies, such as inconsistencies in Princess Charlotte’s sleeve cuff and misalignment in the zipper on Kate’s jacket. These findings raised doubts about the photo’s authenticity.

    The controversy surrounding the altered image has strained the relationship between the palace and media organizations. Transparency regarding the photo’s adjustments was lacking, damaging trust between the two parties.

    Why AI is Unlikely

    • Artifacts and Inconsistencies: Digital forensics experts have analyzed the photo and identified several inconsistencies that are unlikely to result from AI. These include pixel artifacts, irregularities in lighting, and distortions that do not align with typical AI-generated images; a simple example of this kind of pixel-level check is sketched after this list.
    • Complexity of AI Algorithms: While AI has made significant strides in image manipulation, creating a convincing and consistent distortion like the one seen in the Kate photo would require sophisticated algorithms and extensive training data. The sudden appearance of this image without any prior examples raises doubts about its AI origin.
    • Human Expertise: Human photo editors and artists have long been adept at creating surreal and fantastical images. The techniques used in the Kate photo align more closely with traditional editing methods than with AI-generated content.
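
    As an example of the pixel-level checks mentioned above, one classic technique is error level analysis (ELA): recompress a JPEG at a known quality and look at where the recompression error is uneven, since retouched regions often stand out. Here is a minimal sketch using the Pillow library; the filename is a placeholder, and this is a generic technique, not the specific analysis performed on this photo.

    ```python
    from PIL import Image, ImageChops

    # Placeholder path; any JPEG under scrutiny would go here.
    original = Image.open("photo.jpg").convert("RGB")

    # Re-save at a known JPEG quality, then diff against the original.
    original.save("resaved.jpg", "JPEG", quality=90)
    resaved = Image.open("resaved.jpg")
    diff = ImageChops.difference(original, resaved)

    # Stretch the (usually faint) differences so they are visible to the eye.
    extrema = diff.getextrema()                 # per-band (min, max) tuples
    max_diff = max(hi for _, hi in extrema)
    scale = 255.0 / max(1, max_diff)
    ela = diff.point(lambda px: min(255, int(px * scale)))
    ela.save("photo_ela.png")                   # bright patches = uneven error
    ```

    Uniformly noisy ELA output suggests a single compression history; sharply bounded bright regions are a hint of local editing, though never proof on their own.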

    While the princess issued a personal apology, attributing the alterations to her experimentation with editing as an amateur photographer, skepticism remains widespread. The incident has cast a shadow over the royal family’s annual Commonwealth Day celebration, further complicating matters for the monarchy.

    Today, it’s harder to tell what’s real and what’s not in digital spaces as AI is gradually getting better and smarter. We know AI has become capable of altering pictures, so it’s important not to believe everything you see without proper verification.


  • Authors sued Nvidia over AI use of copyrighted works

    Three authors have filed a lawsuit against Nvidia, alleging that the company used their copyrighted books without permission to train its NeMo AI platform.

    Nvidia used a dataset of approximately 196,640 books, which included their own works, to train NeMo in replicating everyday written language, according to claims by Brian Keene, Abdi Nazemian, and Stewart O’Nan.

    This dataset was removed in October due to copyright infringement concerns.

    In the proposed class action, filed on Friday night in San Francisco federal court, the authors argue that by admitting to training NeMo on this dataset, Nvidia infringed their copyrights.

    They are seeking unspecified damages on behalf of individuals in the United States whose works were used to train NeMo over the past three years.

    Among works included in the lawsuit are Keene’s “Ghost Walk,” Nazemian’s “Like a Love Story,” and O’Nan’s “Last Night at the Lobster.”

    Nvidia has not yet commented publicly on the matter, and the authors’ legal representatives have yet to respond.

    This legal action adds Nvidia to the list of companies facing challenges over generative AI technology.

    The New York Times (NYT.N) also filed a lawsuit against OpenAI and Microsoft (MSFT.O) in December, alleging unauthorized use of millions of its articles. The lawsuit claims that the companies utilized the articles to train chatbots aimed at delivering information to users without proper permission.

    With the rise of AI, Nvidia remains a popular choice for investors, thanks to its surging stock price and large market capitalization.



  • MIT scientists enhance AI’s peripheral vision gaining insights from humans


    Researchers at MIT have made remarkable progress in equipping artificial intelligence (AI) with a form of peripheral vision similar to that of humans. The ability to detect objects outside the direct line of sight, albeit with less detail than in central vision, is a fundamental aspect of human sight. Now, scientists aim to replicate this capability in AI systems to improve their understanding of visual scenes and potentially strengthen safety measures in various applications.

    For this, the MIT team, led by Anne Harrington, developed an innovative image dataset to help AI models simulate peripheral vision. Through training with this dataset, they observed improvements in object detection capabilities.

    A surprising finding was that humans consistently outperformed AI models at detecting objects in the periphery. Despite various attempts to enhance AI vision, including training models from scratch and fine-tuning pre-existing ones, machines lagged behind human capabilities, particularly at detecting objects in the far periphery.

    “There is something fundamental going on here. We tested so many different models, and even when we train them, they get a little bit better but they are not quite like humans. So, the question is: What is missing in these models?” says Vasha DuTell, a co-author of a paper.

    This research points to the complexity of human vision and the challenges of replicating it artificially. Harrington and her team plan further research into these disparities, aiming to develop AI models that accurately predict human performance in peripheral-vision tasks.

    “Modeling peripheral vision, if we can really capture the essence of what is represented in the periphery, can help us understand the features in a visual scene that make our eyes move to collect more information,” DuTell explains.

    Moreover, this work emphasizes the importance of interdisciplinary collaboration between neuroscience and AI research. By drawing insights from human vision mechanisms, scientists can refine AI systems to better mimic human perception, leading to more reliable applications.

    The team started with a method called the texture tiling model, commonly employed in studying human peripheral vision, which transforms images to approximate how a scene appears away from the center of gaze. They then tailored this model to offer greater flexibility in altering images without requiring prior knowledge of the observer’s focal point.

    “That let us faithfully model peripheral vision the same way it is being done in human vision research,” says Harrington.
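
    The texture tiling model itself synthesizes local texture statistics over pooling regions that grow with eccentricity, and is considerably more sophisticated than anything shown here. Still, the core intuition, that image detail degrades with distance from the point of fixation, can be conveyed with a crude radial-blur stand-in:

    ```python
    import numpy as np
    from PIL import Image, ImageFilter

    def crude_foveation(img, fixation, n_rings=5, max_blur=8.0):
        """Blur an image more strongly the farther a pixel is from the fixation
        point. A stand-in for intuition only: the texture tiling model pools
        texture statistics rather than simply blurring."""
        w, h = img.size
        yy, xx = np.mgrid[0:h, 0:w]
        fx, fy = fixation
        ecc = np.hypot(xx - fx, yy - fy)
        ecc = ecc / ecc.max()                  # normalized eccentricity in [0, 1]
        out = np.asarray(img, dtype=np.float64)
        for i in range(1, n_rings + 1):
            lo, hi = (i - 1) / n_rings, i / n_rings
            ring = np.asarray(img.filter(ImageFilter.GaussianBlur(max_blur * hi)),
                              dtype=np.float64)
            mask = ((ecc >= lo) & (ecc <= hi))[..., None]
            out = np.where(mask, ring, out)
        return Image.fromarray(out.astype(np.uint8))

    # Toy usage with a synthetic image; fixation at the center.
    img = Image.fromarray((np.random.rand(240, 320, 3) * 255).astype(np.uint8))
    crude_foveation(img, fixation=(160, 120)).save("foveated.png")
    ```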

    In brief, this research marks significant progress in bridging the gap between human and artificial vision. Such advancements could revolutionize safety systems, particularly in contexts like driver-assistance technology, where detecting hazards in the periphery is crucial.

    What’s even more interesting here is that, while neural network models have advanced, they still cannot match human performance in this area. This underscores the need for more AI research to gain insights from human vision neuroscience. The database of images provided by the authors will greatly aid in this future research.

    This work, set to be presented at the International Conference on Learning Representations, is supported, in part, by the Toyota Research Institute and the MIT CSAIL METEOR Fellowship.


  • Microsoft engineer’s allegations that AI generates sexual and violent content shake the industry


    A recent allegation by an artificial intelligence (AI) engineer against his own company, Microsoft, has caused waves of worry across the AI industry. Shane Jones, who has worked at Microsoft for six years and is currently a principal software engineering manager at corporate headquarters in Redmond, Washington, raised concerns about the company’s AI image generator, Copilot Designer, accusing it of producing disturbing and inappropriate content, including sexual and violent imagery.

    Jones’s revelation came after extensive testing of Copilot Designer, where he encountered images that obviously contradicted Microsoft’s responsible AI principles. Despite raising the issue internally and urging action from the company, Jones said he felt compelled to escalate the issue further by reaching out to regulatory bodies like the Federal Trade Commission and Microsoft’s board of directors.

    On Wednesday, Jones sent a letter to Federal Trade Commission Chair Lina Khan, and another to Microsoft’s board of directors.

    “Over the last three months, I have repeatedly urged Microsoft to remove Copilot Designer from public use until better safeguards could be put in place,” Jones wrote to Khan. He added that, since Microsoft has “refused that recommendation,” he is calling on the company to add disclosures to the product and to change its rating on Google’s Android app store.

    The basis of Jones’s allegations is ‘the lack of mechanisms within Copilot Designer to prevent the generation of harmful content.’ Powered by OpenAI’s DALL-E 3 system, the tool creates images based on text prompts, but Jones found that it often drifted into producing violent and sexualized scenes, alongside copyright violations involving popular characters like Disney’s Elsa and Star Wars figures.

    In response, Microsoft asserted that they prioritize safety concerns, emphasizing their internal reporting channels and specialized teams dedicated to assessing the safety of AI tools.

    “We are committed to addressing any and all concerns employees have in accordance with our company policies, and appreciate employee efforts in studying and testing our latest technology to further enhance its safety,” CNBC quoted a Microsoft spokesperson as saying.

    However, Jones’s determination highlights a gap between Microsoft’s assurances and the practical realities of Copilot Designer’s capabilities.

    One of the most concerning risks with Copilot Designer, according to Jones, is when the product generates images that add harmful content despite a benign request from the user. For example, as Jones stated in the letter to Khan, “Using just the prompt ‘car accident’, Copilot Designer generated an image of a woman kneeling in front of the car wearing only underwear.”

    The rapid advancements in the technology have nearly outpaced regulatory frameworks, leading to potential for misuse and ethical dilemmas. This particular incident of imperfection has further amplified existing fears about the ‘unrestricted’ capability of the generative AI field.

    “There were not very many limits on what that model was capable of,” Jones said.

    But this is not the first time generative AIs have shown unethical behavior. Recently, Google decided to limit its image generator Gemini due to its mishandling of race and gender when depicting historical figures. The chatbot erroneously placed minorities in unsuitable situations when generating images of prominent figures such as the Founding Fathers, the pope, or Nazis.


  • Anthropic’s Most Powerful Chatbot Claude 3 Challenges OpenAI and Google


    Competition among the big artificial-intelligence makers keeps intensifying. Anthropic, backed by Amazon and Google, says its latest breakthrough, Claude 3, stands apart because it can understand different types of information, not just words, an ability that challenges rivals such as OpenAI and Google.

    Claude 3 comes in three versions, with Opus as the flagship. Anthropic says Opus outperforms OpenAI’s GPT-4 and Google’s Gemini Ultra at tasks such as college-level knowledge, complex reasoning, and basic math. This is also the first time Anthropic, an American startup, has built a model that understands multiple types of information, letting people use pictures and documents to get answers.


    Claude 3 marks Anthropic’s rapid rise from startup to AI powerhouse. Backed by industry leaders and $7.3 billion in funding over the past year, Anthropic is now a leading force in generative AI. The chatbot excels at condensing extensive data, coherently summarizing up to 150,000 words, far surpassing its predecessors. It also competes strongly with the widely used ChatGPT, offering superior capability in handling larger volumes of text. Anthropic highlights Claude 3’s advanced risk comprehension, which addresses earlier issues of over-conservatism.

    Anthropic clearly prioritizes multimodality, providing a platform for integrating diverse data into advanced AI interactions. Recent concerns over Google’s AI image generator underscore the challenges that accompany multimodal capabilities.

    When OpenAI released GPT-4 last spring, it was widely considered the most powerful chatbot technology. Google recently introduced a comparable technology called Gemini.

    Now, Anthropic openly claims that its Claude 3 Opus technology surpasses both GPT-4 and Gemini in mathematical problem-solving, computer coding, general knowledge, and other areas. The technology became available to consumers on Monday at a subscription fee of $20 per month, while a less capable version, Claude 3 Sonnet, is offered for free. The company also lets businesses develop their own chatbots and other services using the Opus and Sonnet models.

    Image Credit: anthropic.com

    Founded by former members of OpenAI, Anthropic emphasizes data analysis rather than image generation, a choice that reflects its focus on safety and precision. Yet imperfections in AI models persist.