Category: Latest

  • Researchers use AI to make Belgian beer taste better

    Researchers at KU Leuven have started a project to make Belgian beer even better using artificial intelligence. Led by Prof. Kevin Verstrepen, the team is studying how we perceive flavor in order to understand the different aromas in beer.

    The researchers analyzed 250 Belgian beers of different styles to see what makes them taste the way they do, measuring properties such as alcohol content, acidity, and the flavor compounds they contain.

    “Tiny changes in the concentrations of chemicals can have a big impact, especially when multiple components start changing,” said Prof. Verstrepen, who led the research.

    Then they asked a panel of 16 people to taste the beers and describe how they tasted, a process that took three years. At the same time, they collected 180,000 reviews of different beers from the online consumer review platform RateBeer and analyzed what people said about those beers.

    The researchers used all this information to train machine-learning models that predict how a beer will taste, and how much people will like it, based on its chemical composition. They then used these predictions to improve existing beers by adding specific flavor compounds.
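
    To make the idea concrete, here is a minimal sketch of that kind of prediction model in Python with scikit-learn. The chemical features, the synthetic data, and the recipe tweak at the end are hypothetical stand-ins; the KU Leuven team's actual models, features, and training data differ.

    ```python
    # A minimal sketch of the kind of flavor-prediction model described above.
    # The feature names, data, and recipe tweak are hypothetical stand-ins; the
    # KU Leuven team used detailed chemical profiles of 250 commercial beers.
    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Hypothetical chemistry per beer: [alcohol %, pH, esters ppm, bitterness IBU]
    X = rng.uniform([3.0, 3.8, 0.5, 10.0], [12.0, 4.6, 60.0, 60.0], size=(250, 4))
    # Hypothetical panel appreciation score (0-10), loosely tied to the chemistry
    y = 5 + 0.2 * X[:, 0] + 0.03 * X[:, 2] - 0.02 * X[:, 3] + rng.normal(0, 0.5, 250)

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = GradientBoostingRegressor().fit(X_train, y_train)
    print("held-out R^2:", round(model.score(X_test, y_test), 2))

    # Ask the model how a tweaked recipe (slightly more esters) might score
    base = X_test[0].copy()
    tweaked = base.copy()
    tweaked[2] += 10.0  # add 10 ppm of a fruity ester
    print("base:", model.predict([base])[0], "tweaked:", model.predict([tweaked])[0])
    ```

    The last two lines mirror how such a model can suggest a chemical change that might raise a beer's predicted score; it is then up to the brewers to realize that change in the recipe.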

    “The AI models predict the chemical changes that could optimise a beer, but it is still up to brewers to make that happen starting from the recipe and brewing methods,” Prof. Verstrepen said.

    The results were amazing. People liked the beers even more, saying they tasted sweeter and fuller. But even with this new technology, Verstrepen reminds us that the skill of the brewers is still very important.

    AI’s changing a lot of things. But in the context of beer, it’s combining old traditions with new ideas to ensure Belgian beer stays as great as ever.

  • Engineering household robots to practically incorporate a little common sense

    Household robots are getting better at doing lots of different jobs around the house, like cleaning up messes and serving food. They learn by copying what people do, but sometimes they struggle when things don’t go exactly as planned.

    At MIT, scientists are working on an innovative idea to help robots deal with these unexpected situations better. They’ve come up with a way to combine the robot’s learned motions with the knowledge of a large language model (LLM). This helps robots break tasks down into smaller steps and handle problems without needing a human to fix everything.

    In this collaged image, a robotic hand tries to scoop up red marbles and put them into another bowl while a researcher’s hand frequently disrupts it. The robot eventually succeeds. Image Credit: Jose-Luis Olivares, MIT. Stills courtesy of the researchers

    Yanwei Wang, a student at MIT, says, “We’re teaching robots to fix mistakes on their own, which is a big deal for making them better at their jobs.”

    In a study to be presented at an upcoming conference, the MIT team shows how this idea works using a simple task: scooping marbles from one bowl and pouring them into another. The language model produces a sensible sequence of subtasks, and the robot is then taught to recognize which subtask it is in and to adjust when things go wrong.

    Instead of stopping or starting over, the robot can now keep going even if something doesn’t go as planned. This means robots could become really good at doing tough jobs around the house, even when things get messy.
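
    A toy sketch of this control loop appears below, in Python. The plan, the state dictionary, and both stub functions are hypothetical; in MIT's system the subgoal classifier is learned from demonstrations and the primitives are real robot motions. The point is the loop structure: the robot re-estimates which subtask it is in at every step instead of replaying a fixed trajectory.

    ```python
    # A toy sketch of the recovery idea: an LLM-style planner decomposes the task
    # into subgoals, and a grounding classifier maps the robot's current state to
    # the subgoal it is in, so execution can resume mid-task after a disturbance.
    # The plan, state, and stubs are hypothetical stand-ins.

    # Subgoals an LLM-style planner might produce for the marble-scooping task
    PLAN = ["reach_bowl", "scoop", "transport", "pour"]

    def classify_subgoal(state: dict) -> str:
        """Stub for the learned classifier that grounds state into a subgoal."""
        if not state["holding_marbles"]:
            return "reach_bowl" if not state["at_source"] else "scoop"
        return "transport" if not state["at_target"] else "pour"

    def execute(state: dict, subgoal: str) -> None:
        """Stub primitive controller that 'performs' one subgoal."""
        if subgoal == "reach_bowl":
            state["at_source"] = True
        elif subgoal == "scoop":
            state["holding_marbles"] = True
        elif subgoal == "transport":
            state["at_target"] = True
        elif subgoal == "pour":
            state["done"] = True

    state = {"at_source": False, "holding_marbles": False,
             "at_target": False, "done": False}

    step = 0
    while not state["done"]:
        subgoal = classify_subgoal(state)   # re-ground instead of replaying
        execute(state, subgoal)
        step += 1
        if step == 2:                       # a human knocks the marbles out
            state["holding_marbles"] = False
            print("disturbance! re-grounded to:", classify_subgoal(state))

    print(f"task finished in {step} steps despite the disturbance")
    ```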

    Wang adds, “Our idea can make robots learn to do tricky tasks without needing humans to step in every time something goes wrong. It’s a big step forward for household robots.”

    What if they get as smart as us?

    The idea of household robots getting as smart as humans could be a game-changer. Common sense, the ability to handle everyday situations wisely, is a big part of how humans think. If robots could do this too, it would certainly change a lot of things.

    Just imagine robots fitting right into our daily lives, understanding what’s going on, and making decisions like we do. They’d know what we need, adjust to new situations, and do things in ways that keep us safe and happy. This wouldn’t just make things run smoother; it would also make us trust robots more.

    On the other hand, some big questions remain. As robots become more like us, they start to feel less like tools and more like companions, don’t they? That means we’ll expect them to act ethically, just like we do. So, we’ll need to make sure they’re programmed to always put people first.

    Any threat from common-sense capable robots?

    Not in the near future, but the risk cannot be dismissed. In fact, emotionally aware robots are just around the corner, according to Wang. He’s confident that these robots will soon be able to fix their mistakes on their own, without humans needing to step in. Wang is excited about the progress being made in training robots using teleoperation data, which is crucial for the team’s algorithm: it turns those demonstrations into robust behaviors, so robots can handle tough jobs even when things get tricky.

    And there’s this factor to consider: jobs. If robots take over a lot of tasks, some people might lose their jobs. That means we’ll need to rethink how we work and acquire new skills to adapt to a reality where robots are integrated into our daily lives.

    In addition, the fatal incident at a vegetable packaging plant in Goseong, South Korea, on November 8, 2023, demonstrated how risky it can be to use industrial robots at work.

    This photo shows the interior of a vegetable packaging plant after a deadly incident involving a worker was reported in Goseong on Nov. 8, 2023. Description/Photo Credit: SOUTH KOREA GYEONGSANGNAM-DO FIRE DEPARTMENT VIA AP

    Even though the robot involved had no advanced intelligence or common-sense capability, it fatally injured a worker who was inspecting it. The event reminds us of the potential threats posed by such innovations and underscores the importance of enforcing strict safety rules when using AI or robots in the workplace.

    References:

    https://news.mit.edu/topic/machine-learning
    https://techxplore.com/news/2024-03-household-robots-common.html
    https://www.sciencedaily.com/releases/2024/03/240325172439.htm
    https://openreview.net/forum?id=qoHeuRAcSl
    https://iclr.cc/
    https://www.messecongress.at/lage/?lang=en

  • Machine ‘unlearning’ helps filter out copyrighted, violent content

    Even in the world of machine learning, forgetting is just as challenging as it is for us humans. Artificial intelligence programs in particular have a tough time unlearning material such as copyrighted or sensitive content.

    Researchers at The University of Texas at Austin have developed a solution called “machine unlearning” to address the burning issue related to the rapidly growing use (and misuse) of AI. This method is designed specifically for image-based generative AI systems. It helps these systems get rid of copyrighted or violent images without forgetting everything else they’ve learned. They’ve explained their findings in a paper on the arXiv preprint server.

    Machine unlearning in practice

    Machine unlearning is a new technique for making a model deliberately forget specific data in order to meet strict rules. So far, though, it has mostly been applied to certain types of models, such as classifiers, leaving out generative ones. Companies working on self-driving cars use machine unlearning to get rid of outdated or irrelevant data, which helps them keep improving their driving algorithms and adapt to new situations.

    Hospitals and health tech companies use the system to keep patient information safe. For example, if a patient wants to take their data out of studies or databases, machine unlearning makes sure it’s completely removed, following privacy rules.

    Likewise, streaming services and online platforms use machine unlearning to update user profiles. This means they can change recommendations based on what users like right now, instead of using old information.

    And in setups where machine learning happens across different devices, like the ones used by mobile device makers and app developers, machine unlearning is used to delete data from devices. This way, when a user removes data from their device, it’s also taken out of the overall learning model without having to start over from scratch.

    Their experiments show that the new method works well even without access to the original training data the model is supposed to retain, which fits with rules about data retention. According to the researchers, this is the first in-depth look at how to unlearn things from generative models that work with images, covering both theory and real-world tests.

    How machine unlearning works

    AI models are trained on large sets of data, and some unwanted content inevitably gets in there too. In the past, the only choice was to start training over after carefully removing the problematic material. But the researchers claim that their new method offers a better solution.

    “When these models are trained on vast datasets, it’s inevitable that some undesirable data creeps in. Previously, the only recourse was to start over, painstakingly removing problematic content. Our approach offers a more nuanced solution,” Professor Radu Marculescu from the Cockrell School of Engineering’s Chandra Family Department of Electrical and Computer Engineering, a key figure in this endeavor, said.

    Generative AI relies a lot on internet data, which is huge but also full of copyrighted stuff, private info, and inappropriate content. This was evident in a recent legal fight between The New York Times and OpenAI over using articles without permission to train AI chatbots.

    According to Guihong Li, a graduate research assistant on the project, adding protections against copyright issues and misuse of content in generative AI models is crucial for their commercial success.

    The research has mainly looked at image-to-image models, which change input images based on context. The new machine unlearning algorithm allows these models to get rid of flagged content without needing to start all over again, with human oversight providing an extra layer of supervision.
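
    The paper's own algorithm is more involved and targets generative image models, but the flavor of machine unlearning can be shown with a minimal, hypothetical sketch on a small classifier: take gradient ascent steps on the flagged “forget” examples while taking ordinary descent steps on retained data, so the model loses the unwanted behavior without losing everything else. All tensors below are random stand-ins.

    ```python
    # A minimal, hypothetical unlearning sketch (not the UT Austin algorithm):
    # gradient ASCENT on the flagged "forget" set, ordinary descent on a small
    # "retain" set, so the rest of the model's knowledge survives. The data
    # here is random stand-in tensors.
    import torch
    import torch.nn.functional as F

    model = torch.nn.Sequential(torch.nn.Linear(64, 32), torch.nn.ReLU(),
                                torch.nn.Linear(32, 10))
    opt = torch.optim.SGD(model.parameters(), lr=1e-2)

    forget_x, forget_y = torch.randn(16, 64), torch.randint(0, 10, (16,))
    retain_x, retain_y = torch.randn(64, 64), torch.randint(0, 10, (64,))

    for _ in range(50):
        opt.zero_grad()
        # maximize loss on the flagged data (note the minus sign) ...
        loss_forget = -F.cross_entropy(model(forget_x), forget_y)
        # ... while staying accurate on everything else
        loss_retain = F.cross_entropy(model(retain_x), retain_y)
        (loss_forget + loss_retain).backward()
        opt.step()
    ```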

    While machine unlearning has mostly been used in classification models, using it in generative models, especially for image processing, is a new area, as pointed out by the researchers in their paper.

  • Researchers introduce RAmBLA as a holistic approach to evaluating biomedical language models

    In today’s advanced tech world, Large Language Models such as GPT-4 and LLaMA 2 lead the way in understanding complex medical terms. They give clear insights and provide accurate info based on evidence. These models are important for medical decisions, so it’s vital they’re reliable and precise. But as they’re used more in medicine, there’s a challenge: making sure they can handle tricky biomedical data without mistakes.

    To tackle this, we need a new way to evaluate them. Traditional methods focus on specific tasks, like spotting drug interactions, which isn’t enough for the wide-ranging needs of biomedical queries. Biomedical questions often involve pulling together lots of data and giving context-appropriate responses, so we need a more detailed evaluation.

    That’s where the Reliability Assessment for Biomedical LLM Assistants (RAmBLA) comes in. Developed by researchers from Imperial College London and GSK.ai, RAmBLA aims to thoroughly check how dependable biomedical language models (BLMs) are. It looks at factors important for real-world use, like handling different types of input, recalling information accurately, and giving responses that are correct and relevant. This all-around evaluation is a big step toward making sure BLMs can be trusted helpers in biomedical research and healthcare, according to the researchers.

    “. . . we believe the aspects of LLM reliability highlighted in RAmBLA may serve as a useful starting point for developing applications for such use-cases,” the paper, authored by researchers including William James Bolton from Imperial College London, reads.

    The RAmBLA evaluation framework. Screenshot Credit: arXiv.org

    What makes RAmBLA special is how it simulates real-life biomedical research situations to test BLMs. It gives them tasks that mimic the challenges of actual biomedical work, from understanding complex prompts to summarizing medical studies accurately. One key focus of RAmBLA’s testing is to reduce “hallucinations,” where models give believable but wrong info – a crucial thing to get right in medical settings.

    The study showed that bigger BLMs generally perform better across different tasks, especially in understanding similar meanings in biomedical questions. For example, GPT-4 scored an impressive 0.952 accuracy in answering open-ended biomedical questions. However, the study also found areas needing improvement, like reducing hallucinations and improving recall accuracy.

    “If they have insufficient knowledge or context information to answer a question, LLMs should refuse to answer,” the study report claims.

    Interestingly, the researchers discovered that bigger models were skilled at recognizing when to avoid answering irrelevant questions. On the other hand, smaller ones like Llama and Mistral struggled more, suggesting they require additional adjustments for better performance.
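
    To illustrate the kinds of checks involved, here is a small, hypothetical sketch of a reliability harness in the spirit of RAmBLA: it grades answers on answerable biomedical questions and separately requires a refusal on unanswerable ones. The ask_model callable, the refusal markers, and the two test cases are illustrative stand-ins, not RAmBLA's actual tasks or prompts.

    ```python
    # A toy sketch of a RAmBLA-style reliability check: grade answers on
    # answerable biomedical questions, and require a refusal on unanswerable
    # ones. `ask_model`, the markers, and the cases are hypothetical stand-ins.
    REFUSAL_MARKERS = ("i don't know", "cannot answer", "insufficient")

    def is_refusal(answer: str) -> bool:
        return any(marker in answer.lower() for marker in REFUSAL_MARKERS)

    def score(cases, ask_model):
        answerable = [c for c in cases if c["answerable"]]
        unanswerable = [c for c in cases if not c["answerable"]]
        correct = sum(c["gold"].lower() in ask_model(c["question"]).lower()
                      for c in answerable)
        refused = sum(is_refusal(ask_model(c["question"])) for c in unanswerable)
        return correct / len(answerable), refused / len(unanswerable)

    cases = [
        {"question": "Which gene is mutated in cystic fibrosis?",
         "gold": "CFTR", "answerable": True},
        {"question": "What was the dosage in trial XYZ-999?",  # in no source
         "gold": None, "answerable": False},
    ]
    # Stub model that always refuses: accuracy 0.0, refusal rate 1.0
    accuracy, refusal_rate = score(cases, ask_model=lambda q: "I don't know.")
    print(accuracy, refusal_rate)
    ```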

  • Smart tattoos that monitor health metrics and vital signs

    Tattoo technology has evolved beyond mere self-expression, as it is increasingly recognized as a reliable tool for tracking health metrics and vital signs.

    Smart tattoos are becoming more advanced in tracking health signs, studies show. MIT scientists have made ink for smart tattoos that contains tiny particles. These particles can detect changes in pH levels, which can signal issues like dehydration or metabolic disorders. Also, new flexible electronics allow for smart tattoos that monitor muscle activity, helping with physical performance and recovery. These developments highlight how smart tattoos could change healthcare by monitoring vital health signs easily and without being invasive.

    These tattoos, acting like miniature labs on the skin, offer real-time insights into important health indicators like blood pressure, glucose levels, and hydration. By integrating biosensors and special materials, they combine biomedical science with tattoo artistry to revolutionize health monitoring.

    At his Imperial College London lab, Ali Yetisen demonstrates a stamp on his arm created using tattoo ink that glows under certain light. Description/Photo Credit: CNN

    The idea of a “Lab on the skin” allows for painless monitoring of bodily functions without invasive medical devices. Pioneers like Ali Yetisen from Imperial College London have developed smart tattoos such as the DermalAbyss, which change color in response to changes in bodily fluids, providing continuous updates on pH levels, sodium, and glucose.

    Furthermore, smart tattoos can address external health factors like UV exposure, as shown in Carson Bruns’s research on “solar freckles,” potentially aiding in skin cancer prevention. They also offer promise in cancer treatment by providing a less intrusive alternative to traditional radiation markers.

    Carson Bruns (Photo Credit: colorado.edu)

    In 2020, Carson Bruns, an assistant professor of mechanical engineering at the University of Colorado Boulder, contributed to a team that created the “solar freckle,” a light-sensitive tattoo. It appears in sunlight, signaling excessive UV exposure, and fades when sunscreen is applied or when out of sunlight. Bruns has received a prestigious National Science Foundation CAREER Award for research that investigates how the art of tattooing can incorporate the latest advances in nanotechnology to improve human health.

    Smart tattoos have big potential applications, from personalized healthcare to space exploration. Human trials are ongoing, indicating progress toward regulatory approval and widespread use. Compared to wearable devices, smart tattoos offer unmatched convenience and permanence, along with being unhackable and not requiring batteries.

  • Thought-controlled smart homes for next-gen automation

    The human brain is amazing, with its complex network of nerves sending electrical signals that control everything we do and think. Lately, scientists and tech experts have come up with a really futuristic idea: smart homes that you can control just by using your thoughts. Imagine being able to turn on lights or change the thermostat without lifting a finger – just by thinking about it. This idea used to be something out of a sci-fi movie, but now it’s becoming real, all thanks to some really impressive advances in Brain-Computer Interface (BCI) technology.

    A new study, published by the Multidisciplinary Digital Publishing Institute on July 18, 2023, has introduced a novel method for automating smart homes using signals from the brain. Instead of using fancy gadgets, this method taps into how your brain works when you think about moving your hands. By picking up on these brain signals through electroencephalogram (EEG) readings, researchers have come up with a way to turn them into commands for controlling things in your home.

    How the BCI technology works

    Experimental setup: a personal computer connected to two devices through KNX, i.e., two light bulbs. Description/Image Credit: mdpi.com

    The BCI system uses EEG signals to automate smart homes. It records EEG through harmless scalp electrodes, and the signals are then amplified and cleaned up. Next they are digitized and preprocessed, which includes extracting important features with wavelet-based methods and using Morphological Component Analysis (MCA) to detect blinking. Machine-learning algorithms then analyze these features to figure out what the user wants to do, turning that intent into commands that control household devices through a local server linked to the Internet of Things (IoT). This means even people with limited mobility can interact with their surroundings just by using their brain waves.

    This system mostly depends on motor imagery (MI) signals, where people imagine moving without actually moving their bodies. By using techniques called Regularized Common Spatial Pattern (RCSP) and Linear Discriminant Analysis (LDA), the EEG data linked to these imagined movements are studied and sorted. This allows users to control things just by thinking about them. Essentially, it creates a direct link between the brain and devices, bypassing the need for things like switches or remote controls.
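
    A minimal sketch of that RCSP + LDA pipeline is shown below, using the open-source MNE-Python library (whose CSP implementation supports regularization through its reg parameter) together with scikit-learn. The EEG epochs here are random stand-ins rather than the study's Emotiv recordings, so accuracy will hover around chance.

    ```python
    # A minimal sketch of the RCSP + LDA pipeline, using MNE-Python (whose CSP
    # supports regularization via `reg`) and scikit-learn. The EEG epochs are
    # random stand-ins, so accuracy should hover around chance (~0.5).
    import numpy as np
    from mne.decoding import CSP
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline

    rng = np.random.default_rng(0)
    n_epochs, n_channels, n_times = 80, 14, 256   # 14 channels, like the EPOC X
    X = rng.standard_normal((n_epochs, n_channels, n_times))
    y = rng.integers(0, 2, n_epochs)              # 0 = left hand, 1 = right hand

    # Regularized CSP spatial filters -> log-variance features -> LDA
    clf = make_pipeline(CSP(n_components=4, reg=0.1, log=True),
                        LinearDiscriminantAnalysis())
    print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
    ```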

    In the study, participants took part in motor imagery (MI) training sessions while their brain activity was recorded using the Emotiv EPOC X headset. They were instructed to stay focused and attentive during the MI training sessions to ensure accurate, reliable results. They were then asked to imagine moving their left or right hand when shown visual cues on a screen. The brain signals recorded through EEG were analyzed using various filtering and data-processing techniques, including RCSP and LDA, to extract useful information.

    Devices talk to each other in smart homes

    EEG topographical distribution of subject A during the training phase. (a) Fixation cross 0 [s], (b) Arrow cue at 2.75 [s], (c) MI task starting at 4.25 [s], (d) MI task at 5.25 [s]. Description/Image Credit: mdpi.com

    One interesting aspect of this research is how it combines the Motor Imagery-based Brain-Computer Interface (MI-BCI) system with the KNX protocol, a standard way for devices to talk to each other in smart homes. This combination makes it easy to control things like lights using just your thoughts. The study also shows that it’s possible to control two devices at once, and that MI-BCI systems could make smart homes more accessible and efficient.

    MI-BCI systems pick up brain signals linked to imagined movements, which are then analyzed to understand what the user wants to do. The technical process involves capturing EEG signals, cleaning them up, and extracting key features using algorithms such as RCSP and LDA. These algorithms identify the imagined actions, which are then translated into commands to control devices.

    The KNX protocol, a widely used standard for both commercial and residential building automation, ensures smooth communication between different gadgets. With this setup, users can control multiple devices simultaneously, such as adjusting lights and temperature, just by thinking about it. In the study, the EMOTIV helmet was used to record EEG data, and OpenVibe was employed for signal processing. The results showed that the system effectively controlled two light bulbs, indicating its potential for handling more complex tasks in smart homes.
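
    As a concrete, hypothetical sketch of that last hop, here is how a decoded motor-imagery class could be pushed out to KNX-connected bulbs using the open-source xknx Python library. The group addresses, device names, and the stubbed prediction are placeholders; the study's own setup used OpenVibe and its own KNX installation.

    ```python
    # A hypothetical sketch: route a decoded motor-imagery class to KNX bulbs
    # with the open-source xknx library. Group addresses, device names, and the
    # stubbed prediction are placeholders, not the study's actual setup.
    import asyncio
    from xknx import XKNX
    from xknx.devices import Light

    async def actuate(predicted_class: int) -> None:
        xknx = XKNX()          # connects to a KNX/IP gateway on the local network
        await xknx.start()
        bulb_left = Light(xknx, name="bulb_left", group_address_switch="1/0/1")
        bulb_right = Light(xknx, name="bulb_right", group_address_switch="1/0/2")
        # left-hand imagery (class 0) switches one bulb, right-hand the other
        target = bulb_left if predicted_class == 0 else bulb_right
        await target.set_on()
        await xknx.stop()

    # Stub: pretend the LDA classifier just decoded "right hand"
    asyncio.run(actuate(predicted_class=1))
    ```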

    Are thought-controlled smart homes a reality?

    Neuroheadset and its spatial configuration. (a) Emotiv EPOC X; (b) Electrode configuration. Description/Image Credit: mdpi.com

    The results from the study show that the new method works well, with participants doing a good job during both practice and testing. The system works reliably in understanding what users want and carrying out their commands.

    While this research is a big step forward for thought-controlled smart home systems, challenges remain, chief among them coping with day-to-day changes in brain signals. BCI systems need to get better at adapting to differences in EEG signals from person to person; these signals can vary because of things like stress, tiredness, or even our natural body rhythms. To tackle these challenges, researchers are looking into adaptive algorithms that adjust to such changes so the system works well all the time.

    Besides, making the training process easier is important for more people to use these systems. Right now, learning to use BCI systems takes a lot of time, which isn’t practical for many users. So, coming up with simpler ways to train people, maybe using machine learning to speed things up, is a big focus. This might mean making the setup process simpler or making training more fun with games.

    Controlling multiple devices at the same time is also a big goal. Although some progress has been made, it’s still tough for users to manage many appliances smoothly and accurately. Researchers say they are working on smarter BCI setups that can understand more complex commands, letting people control different devices in their smart homes all at once. Being able to control multiple devices together is key to making smart homes easy and efficient to use.

    Final words

    Indeed, there have already been several attempts in the development of thought-controlled smart homes. Take, for example, a project led by Eda Akman Aydin at Gazi University in Turkey. They built a system in 2015, enabling humans to use their thoughts to control various home devices like the TV, lights, and phone. This system also relied on an EEG cap to pick up brain signals called P300, which arise when someone intends to take action. These signals are then translated into commands that the smart home gadgets can carry out.

    We are now a decade on from Akman Aydin’s project. The day when thought-controlled smart home systems come out of the lab and into real life could make life dramatically easier for everyone, from people with disabilities to regular users looking for more convenience. The potential uses are endless and limited only by our imagination.

  • Nvidia’s new text-to-3D model shows the pace of generative AI evolution

    Artificial intelligence is advancing very quickly, and Nvidia’s newest creation, the text-to-3D model LATTE3D, is a perfect example of this rapid progress in AI technology. Following the debut of its powerful Blackwell superchip, designed for training advanced AI models, at the NVIDIA GTC event held from March 18 to 21, Nvidia has introduced LATTE3D, a groundbreaking text-to-3D generative AI model.

    LATTE3D acts like a virtual 3D printer, transforming text prompts into detailed 3D objects and animals within seconds. What sets LATTE3D apart is its remarkable speed and quality. Unlike previous models, LATTE3D can generate intricate 3D shapes almost instantly on a single GPU, such as the NVIDIA RTX A6000, which was used for the NVIDIA Research demo.

    Credit: Nvidia

    This advancement means creators can now achieve near-real-time text-to-3D generation, revolutionizing how ideas are brought to life, according to Sanja Fidler, Nvidia’s vice president of AI research.

    “A year ago, it took an hour for AI models to generate 3D visuals of this quality – and the current state of the art is now around 10 to 12 seconds. We can now produce results an order of magnitude faster, putting near-real-time text-to-3D generation within reach for creators across industries,” Fidler said.

    The researchers said they trained LATTE3D on specific datasets like animals and everyday objects. But developers have the flexibility to use the same model architecture to train it on other types of data.

    For example, if LATTE3D is taught using a collection of 3D plant images, it could help a landscape designer by swiftly adding trees, bushes, and succulents to a digital garden design while working with a client. Likewise, if it’s trained on household items, the model could create objects to fill virtual home environments, assisting developers in preparing personal assistant robots for real-life tasks.

    To train LATTE3D, NVIDIA used its powerful A100 Tensor Core GPUs. Additionally, the model learned from a variety of text prompts generated by ChatGPT, enhancing its ability to understand different descriptions of 3D objects. For example, it can recognize that prompts about various dog breeds should all result in dog-like shapes.

    3D dogs generated by the Nvidia LATTE3D AI model. Image Credit: Nvidia

    NVIDIA Research comprises hundreds of scientists and engineers worldwide, with teams dedicated to fields including AI, computer graphics, computer vision, self-driving cars, and robotics.

    “This leap is huge. DreamFusion circa 2022 was slow and low quality, but kicked off this generative 3D revolution. Efforts like ATT3D (Amortized Text-to-3D Object Synthesis) chased speed at the cost of quality,” AI creator Bilawal Sidhu wrote on X (formerly Twitter).

  • S24 Ultra’s S Pen Smell Is Real (The Problem Isn’t Your Nose)

    If your Samsung Galaxy S24 Ultra’s S Pen smells like burnt plastic, you are not alone. The S Pen dilemma has sparked numerous discussions in online community forums, with users expressing various concerns. Some even feared their house was burning before learning the truth.

    Samsung EU’s moderator, AndrewL, who has been a member of the community since 2018, officially stated the following:

    This isn’t anything to be concerned about. While the S Pen is in its holster, it is close to the internal components of the phone, which will generate heat while in use, and cause the plastic to heat up. This can smell like burning, but it is similar to the smell you might experience after leaving your car in the sun for a few hours. The seats and plastic fittings in the vehicle might smell hot, but this will diminish after it cools. 

    The S Pen was promoted as offering “the magic of touch-free control.” But who knew that alongside its 0.7mm pen tip and 4,096 levels of pressure sensitivity, it would also come with a bonus fragrance experience straight out of a sci-fi flick?

    Now that the discussion around the S24 Ultra’s pen has come up, numerous users have reported similar olfactory experiences with previous iterations of the S Pen.

    Amid the laughter and jest, it’s essential to recognize the potentially serious implications of such incidents. A burning smell can often signal overheating or malfunctioning electronic components. Whether or not you have an S Pen, that is something worth watching out for.

  • Cancer detection through NHS AI surpasses human capabilities, identifying tiny cancers missed by doctors

    Artificial intelligence has presented remarkable opportunities to reduce mistakes, aid medical staff and offer patient services around the clock. As AI tools improve, there’s increasing potential to use them extensively in interpreting medical images, X-rays, and scans, diagnosing medical issues, and planning treatments. A new development has emerged in cancer detection: using AI in the National Health Service has shown how technology can find very small signs of breast cancer that doctors might miss.

    Mia, an AI tool tested with NHS doctors, looked at mammograms from over 10,000 women and found 11 cases of breast cancer that doctors hadn’t spotted. These cancers were caught very early, when they were hard to see, showing how AI can help find cancer sooner.

    Barbara was one of eleven patients who benefited from Mia’s advanced detection capabilities. Her case clearly demonstrates how AI can be instrumental in saving lives. Even though human radiologists didn’t catch it, Mia spotted Barbara’s 6mm tumor early on. This meant Barbara could get surgery quickly and needed only five days of radiotherapy. And, according to radiologists, patients with breast tumors smaller than 15mm usually have a good chance of survival, with a 90% survival rate over the following five years.

    Photo Credit: BBC

    BBC reported Barbara as saying that she was pleased the treatment was much less invasive than that of her sister and mother, who had previously also battled the disease.

    As Barbara had not experienced any noticeable symptoms, her cancer may not have been detected until her next routine mammogram three years later without the assistance of the AI tool.

    Mia and similar tools are expected to speed up the process of getting test results, potentially reducing the wait from 14 days to just three, as claimed by Kheiron, the developer. In the trial, Mia’s findings were always reviewed by humans. While currently, two radiologists examine each scan, there’s hope that eventually, one of them could be replaced by the AI tool, lightening the workload for each pair.

    Of the 10,889 women in the trial, only 81 chose not to have their scans reviewed by the AI tool, according to Dr. Gerald Lip, the clinical director of breast screening in northeast Scotland who led the project.

    This shows that AI tools like Mia can become skilled at detecting signs of specific diseases if they’re trained on enough diverse data. This involves feeding the program many different anonymized images of these signs from a wide range of people.

    Mia took six years to develop and train, according to Sarah Kerruish, Chief Strategy Officer of Kheiron Medical. It operates using cloud computing power from Microsoft and was trained on “millions” of mammograms sourced from “women all over the world”.

    Kerruish emphasized the importance of inclusivity in developing AI for healthcare, reportedly saying, “I think the most important thing I’ve learned is that when you’re developing AI for a healthcare situation, you have to build in inclusivity from day one.”

    But wait a moment! Let’s not overlook Mia’s imperfections. Sure, it’s a remarkable tool, but it’s not without its flaws. One major limitation is its lack of access to patients’ medical histories. This means it might flag cysts that were already deemed harmless in previous scans.

    In addition, Mia’s machine learning feature is disabled due to current health regulations that focus on the risks and biases of machine-learning algorithms at the level of input data, algorithm testing, and decision models. So, it can’t learn from its mistakes or improve over time. Each update requires a fresh review, adding to the workload.

    It’s also worth noting that the Mia trial is just an initial test in one location. Although the University of Aberdeen validated the research independently, the results haven’t undergone peer review yet.

    Still, the Royal College of Radiologists acknowledges the potential of this technology. “These results are encouraging and help to highlight the exciting potential AI presents for diagnostics. There is no question that real-life clinical radiologists are essential and irreplaceable, but a clinical radiologist using insights from validated AI tools will increasingly be a formidable force in patient care,” said Dr Katharine Halliday, President of the Royal College of Radiologists.

    Dr. Julie Sharp, from Cancer Research UK, stresses the crucial role of technological innovation in healthcare, especially with the growing number of cancer cases.

    “More research will be needed to find the best ways to use this technology to improve outcomes for cancer patients,” she added.

    Furthermore, various other healthcare-related AI trials are underway across the UK. For example, Presymptom Health is developing an AI tool to analyze blood samples for early signs of sepsis before symptoms manifest.

    Mia has sparked hope among potential cancer patients; however, many such trials are still in their infancy, awaiting published results.

  • Algorithm FeatUp can capture high and low-level resolutions simultaneously

    Can you imagine a world where computer vision algorithms not only grasp the big picture but also capture every intricate detail with pixel-perfect accuracy? Thanks to a new algorithm called FeatUp, developed by MIT researchers and published March 15 on arXiv, this vision is now a reality. The FeatUp algorithm can capture both high- and low-level resolutions at the same time, and its ability to preserve even the tiniest details while also extracting important high-level features from visual data is unmatched.

    Traditional computer vision algorithms are good at understanding the big picture in images, but they struggle to keep all the small details, according to the researchers.

    “The essence of all computer vision lies in these deep, intelligent features that emerge from the depths of deep learning architectures. The big challenge of modern algorithms is that they reduce large images to very small grids of ‘smart’ features, gaining intelligent insights but losing the finer details,” says Mark Hamilton, an MIT PhD student in electrical engineering and computer science, MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) affiliate, and a co-lead author on a paper about the project.

    FeatUp has now changed this by helping algorithms to see both the big picture and the small details at the same time. It’s like upgrading a computer’s vision to have sharp eyesight, similar to how Lasik eye surgery improves human vision.

    “FeatUp helps enable the best of both worlds: highly intelligent representations with the original image’s resolution. These high-resolution features significantly boost performance across a spectrum of computer vision tasks, from enhancing object detection and improving depth prediction to providing a deeper understanding of your network’s decision-making process through high-resolution analysis,” Hamilton says.

    As AI models become more common, there’s a growing need to understand how they work and what they’re focusing on. Hamilton says FeatUp works by tweaking images slightly and observing how algorithms react.

    The FeatUp training architecture. FeatUp learns to upsample features through a consistency loss on low resolution “views” of a model’s features that arise from slight transformations of the input image. Description/Image Source: arXiv

    “We imagine that some high-resolution features exist, and that when we wiggle them and blur them, they will match all of the original, lower-resolution features from the wiggled images.”

    This process creates many slightly different deep-feature maps, which are then combined into a single clear and high-resolution set of features. According to Hamilton, the idea is to refine low-resolution features into high-resolution ones by essentially playing a matching game.

    “Our goal is to learn how to refine the low-resolution features into high-resolution features using this ‘game’ that lets us know how well we are doing.”

    This approach is similar to how algorithms build 3D models from multiple 2D images, ensuring the predicted 3D object matches all the 2D photos used. With FeatUp, the goal is to predict a high-resolution feature map that matches all the low-resolution feature maps created from variations of the original image.
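
    A toy sketch of that multiview consistency idea appears below. It treats the high-resolution feature map itself as the learnable object and asks that, after a small jitter and downsampling, it reproduce each view's low-resolution features. The random targets, the roll-based jitter, and the sizes are stand-ins; FeatUp's real implementation learns an upsampler network over actual backbone features and uses richer transforms.

    ```python
    # A toy sketch of FeatUp's multiview consistency idea: treat the high-res
    # feature map as learnable parameters and require that, after jittering and
    # downsampling, it reproduces the backbone's low-res features for each view.
    # The "backbone features" here are random stand-ins.
    import torch
    import torch.nn.functional as F

    C, H_lo, H_hi = 8, 16, 128
    hi_res = torch.randn(1, C, H_hi, H_hi, requires_grad=True)  # what we solve for
    opt = torch.optim.Adam([hi_res], lr=0.05)

    def jitter(x, dx, dy):
        return torch.roll(x, shifts=(dy, dx), dims=(2, 3))  # stand-in transform

    # One low-res feature map per jittered "view" of the input image
    views = [(dx, dy, torch.randn(1, C, H_lo, H_lo))
             for dx, dy in [(0, 0), (3, 0), (0, 3)]]

    for _ in range(200):
        opt.zero_grad()
        loss = sum(F.mse_loss(F.adaptive_avg_pool2d(jitter(hi_res, dx, dy), H_lo), lo)
                   for dx, dy, lo in views)
        loss.backward()
        opt.step()
    ```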

    The team also developed a special layer for deep neural networks to make this process more efficient. This improvement benefits many algorithms, like those used for identifying objects in images.

    “Another application is something called small object retrieval, where our algorithm allows for precise localization of objects. For example, even in cluttered road scenes algorithms enriched with FeatUp can see tiny objects like traffic cones, reflectors, lights, and potholes where their low-resolution cousins fail. This demonstrates its capability to enhance coarse features into finely detailed signals,” says Stephanie Fu ’22, MNG ’23, a PhD student at the University of California at Berkeley and another co-lead author on the new FeatUp paper.

    FeatUp isn’t just useful for understanding algorithms; it also helps with practical tasks like spotting small objects in cluttered scenes, such as on busy roads. This could be crucial for things like self-driving cars.

    “This is especially critical for time-sensitive tasks, like pinpointing a traffic sign on a cluttered expressway in a driverless car. This can not only improve the accuracy of such tasks by turning broad guesses into exact localizations, but might also make these systems more reliable, interpretable, and trustworthy,” Hamilton explains.

    Moreover, FeatUp’s flexibility is evident as it smoothly integrates with existing deep learning setups without requiring extensive retraining. This allows researchers and professionals to easily employ FeatUp to enhance the accuracy and effectiveness of various computer vision tasks, such as object detection and semantic segmentation.

    For instance, if we use FeatUp before examining the predictions of a lung cancer detection algorithm using methods like class activation maps (CAM), we can get a much clearer (16-32 times) picture of where the tumor might be located according to the model.

    The team hopes FeatUp will become a standard tool in deep learning in the days to come, allowing models to see more details without slowing down. Experts also praise FeatUp for its simplicity and effectiveness, saying it could make a big difference in image analysis tasks.

    “FeatUp represents a wonderful advance towards making visual representations really useful, by producing them at full image resolutions,” says Cornell University computer science professor Noah Snavely, who was not involved in the research.

    The research team plans to present their work at a conference in May.