Day: March 21, 2024

  • Cancer detection through NHS AI surpasses human capabilities, identifying tiny cancers missed by doctors

    Artificial intelligence has presented remarkable opportunities to reduce mistakes, aid medical staff, and offer patient services around the clock. As AI tools improve, there’s increasing potential to use them extensively in interpreting medical images, X-rays, and scans, diagnosing medical issues, and planning treatments. A new development has emerged in cancer detection: an AI trial in the National Health Service has shown how technology can find very small signs of breast cancer that doctors might miss.

    Mia, an AI tool tested with NHS doctors, looked at mammograms from over 10,000 women and found 11 cases of breast cancer that doctors hadn’t spotted. These cancers were caught very early, when they were hard to see, showing how AI can help find cancer sooner.

    Barbara was one of the eleven patients who benefited from Mia’s advanced detection capabilities, and her case clearly demonstrates how AI can be instrumental in saving lives. Even though human radiologists didn’t catch it, Mia spotted Barbara’s 6mm tumor early on. This meant Barbara could get surgery quickly and needed only five days of radiotherapy. According to radiologists, patients with breast tumors smaller than 15mm usually have a good chance of survival, with a 90% rate over the next five years.

    Photo Credit: BBC

    BBC reported Barbara as saying that she was pleased the treatment was much less invasive than that of her sister and mother, who had previously also battled the disease.

    As Barbara had not experienced any noticeable symptoms, her cancer may not have been detected until her next routine mammogram three years later without the assistance of the AI tool.

    Mia and similar tools are expected to speed up the delivery of test results, potentially reducing the wait from 14 days to just three, according to Kheiron, the tool’s developer. In the trial, Mia’s findings were always reviewed by humans. Currently, two radiologists examine each scan; the hope is that the AI tool could eventually take the place of one of them, lightening the workload for each pair.

    Of the 10,889 women in the trial, only 81 chose not to have their scans reviewed by the AI tool, according to Dr. Gerald Lip, the clinical director of breast screening in north-east Scotland, who led the project.

    This shows that AI tools like Mia are skilled at detecting the signs of specific diseases when trained on enough diverse data, which involves feeding the program many different anonymized images of those signs from a wide range of people.

    Mia took six years to develop and train, according to Sarah Kerruish, Chief Strategy Officer of Kheiron Medical. It operates using cloud computing power from Microsoft and was trained on “millions” of mammograms sourced from “women all over the world”.

    Kerruish emphasized the importance of inclusivity in developing AI for healthcare, reportedly saying, “I think the most important thing I’ve learned is that when you’re developing AI for a healthcare situation, you have to build in inclusivity from day one.”

    But wait a moment! Let’s not overlook Mia’s imperfections. Sure, it’s a remarkable tool, but it’s not without its flaws. One major limitation is its lack of access to patients’ medical histories. This means it might flag cysts that were already deemed harmless in previous scans.

    In addition, Mia’s machine learning feature is disabled due to current health regulations that focus on the risks and biases of machine-learning algorithms at the level of input data, algorithm testing, and decision models. So, it can’t learn from its mistakes or improve over time. Each update requires a fresh review, adding to the workload.

    It’s also worth noting that the Mia trial is just an initial test at a single location. Although the University of Aberdeen validated the research independently, the results haven’t yet undergone peer review.

    Still, the Royal College of Radiologists acknowledges the potential of this technology. “These results are encouraging and help to highlight the exciting potential AI presents for diagnostics. There is no question that real-life clinical radiologists are essential and irreplaceable, but a clinical radiologist using insights from validated AI tools will increasingly be a formidable force in patient care,” said Dr Katharine Halliday, President of the Royal College of Radiologists.

    Dr. Julie Sharp, from Cancer Research UK, stresses the crucial role of technological innovation in healthcare, especially with the growing number of cancer cases.

    “More research will be needed to find the best ways to use this technology to improve outcomes for cancer patients,” she added.

    Furthermore, various other healthcare-related AI trials are underway across the UK. For example, Presymptom Health is developing an AI tool to analyze blood samples for early signs of sepsis before symptoms manifest.

    Mia has sparked hope among potential cancer patients; however, many such trials are still in their infancy, awaiting published results.

  • Utilizing quantum entanglement for instantaneous, secure communication across vast distances?

    Quantum mesh networking, an advanced frontier in communication technology, promises to revolutionize data transmission across vast distances.

    At its core is quantum entanglement, a phenomenon that defies conventional understanding and offers unprecedented opportunities for secure communication.

    Quantum entanglement, a fundamental aspect of quantum mechanics, describes the intrinsic connection between particles regardless of their separation in space. This concept challenges traditional notions of locality and opens the door to groundbreaking applications in networking.

    The appeal of quantum entanglement lies in the correlations it creates between entangled particles: a measurement on one particle is instantly reflected in the measurement statistics of its partner, regardless of the physical distance between them. Importantly, these correlations cannot by themselves carry a usable message faster than light, since a classical channel is still needed to extract information from them. Even so, this property forms the foundation of quantum mesh networking, enabling highly secure communication over long distances.

    Central to the realization of quantum mesh networking is the use of quantum bits, or qubits, as carriers of information. Unlike classical bits, which exist in either a 0 or 1 state, qubits can exist in a superposition of both states simultaneously. This vastly (or as I would say, unimaginably) increases the computational power and information storage capacity.
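
    As a toy illustration (a plain NumPy state-vector sketch, not real quantum hardware or any quantum networking stack), a qubit can be modeled as a two-component vector: the Hadamard gate puts |0⟩ into an equal superposition, and entangling two such qubits with a CNOT yields a Bell state.

```python
import numpy as np

# Computational basis states |0> and |1>
ket0 = np.array([1.0, 0.0])
ket1 = np.array([0.0, 1.0])

# Hadamard gate: sends |0> to an equal superposition of |0> and |1>
H = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2)

qubit = H @ ket0               # (|0> + |1>) / sqrt(2)
probs = np.abs(qubit) ** 2     # Born rule: measurement probabilities
print(probs)                   # [0.5 0.5] -- equal chance of 0 or 1

# Entangling two qubits: CNOT on (H|0>, |0>) gives (|00> + |11>) / sqrt(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=float)
bell = CNOT @ np.kron(H @ ket0, ket0)
print(np.round(bell, 3))       # amplitude only on |00> and |11>
```

    The Bell state at the end is the basic resource the mesh nodes share: its two qubits always agree when measured in the same basis, which is the correlation the networking schemes below build on.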

    In quantum mesh networks, nodes equipped with quantum processors serve as the basic units. These nodes are interconnected through quantum entanglement, forming a resilient network architecture capable of withstanding disruptions and ensuring:

    Instantaneous Communication

    One of the most discussed features of quantum mesh networking is the speed of its correlations. Measurements on entangled particles are correlated instantaneously, but relativity forbids exploiting this to send messages faster than light: protocols such as quantum teleportation still require a classical signal, which travels no faster than light. What entanglement does enable are coordination and communication primitives with no classical counterpart, changing the way we connect and collaborate across vast distances.

    Secure Communication

    In addition to speed, quantum mesh networking offers unparalleled security. Because measuring a quantum state disturbs it, any attempt to intercept or eavesdrop on the communication disrupts the entanglement, alerting the sender and receiver to a potential security breach. Protocols built on this effect, known as quantum key distribution (QKD), provide a level of security that is information-theoretically unbreakable, rather than resting on the computational hardness assumptions of classical cryptography.

    Overcoming Distance Limitations

    Traditional communication methods are often limited by distance, with signal degradation occurring over long transmission paths. Quantum mesh networking addresses these limitations by distributing entanglement, with the help of quantum repeaters, to maintain coherence over vast distances. This enables seamless communication between nodes regardless of their geographical separation, making it a candidate for applications such as satellite communication and interplanetary exploration.


    A key aspect of quantum mesh networking is entanglement swapping, a process through which distant qubits become entangled indirectly via intermediary entangled particles, extending the reach of quantum communication beyond physical limitations.
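
    The swapping step can be sketched numerically (an illustrative NumPy state-vector toy of my own, not a production quantum simulator): two Bell pairs A-B and C-D are prepared, a Bell-basis projection is applied to the middle qubits B and C, and the outer qubits A and D end up entangled despite never having interacted.

```python
import numpy as np

def bell_phi_plus():
    """The Bell state (|00> + |11>) / sqrt(2) as a 4-component vector."""
    v = np.zeros(4)
    v[0] = v[3] = 1 / np.sqrt(2)
    return v

# Qubit order A, B, C, D: A-B form one Bell pair, C-D another.
# A and D never interact directly.
state = np.kron(bell_phi_plus(), bell_phi_plus()).reshape(2, 2, 2, 2)

# Bell measurement on the middle qubits B and C:
# project onto the <Phi+| bra over axes 1 (B) and 2 (C).
phi_plus = bell_phi_plus().reshape(2, 2)
ad_state = np.einsum('abcd,bc->ad', state, phi_plus).reshape(4)

# Renormalize the post-measurement state of A and D.
ad_state /= np.linalg.norm(ad_state)
print(np.round(ad_state, 3))  # amplitudes concentrate on |00> and |11>
```

    The outer pair A-D comes out in the same Bell state the inner pairs started in, which is exactly the mechanism repeaters use to chain entanglement across long links.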

    The conventional approach to securing quantum mesh networking is quantum key distribution (QKD), which uses quantum randomness to generate cryptographic keys on which eavesdropping cannot go undetected, guaranteeing end-to-end encryption and protecting sensitive information.
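
    The idea can be illustrated with a simplified BB84-style simulation in plain Python (a toy model of basis choices only; real QKD encodes bits in photons, and this is a textbook sketch rather than any specific product): an intercept-and-resend eavesdropper unavoidably raises the error rate on the sifted key to roughly 25%, which is how the attack is detected.

```python
import random

def bb84(n_bits, eavesdrop, seed=0):
    """Simulate BB84 sifting; returns (sifted key length, error rate)."""
    rng = random.Random(seed)
    alice_bits  = [rng.randint(0, 1) for _ in range(n_bits)]
    alice_bases = [rng.randint(0, 1) for _ in range(n_bits)]  # 0 = Z, 1 = X
    bob_bases   = [rng.randint(0, 1) for _ in range(n_bits)]

    bob_bits = []
    for bit, a_basis, b_basis in zip(alice_bits, alice_bases, bob_bases):
        if eavesdrop:
            e_basis = rng.randint(0, 1)
            # Eve measures; a wrong basis randomizes the bit she resends.
            bit = bit if e_basis == a_basis else rng.randint(0, 1)
            a_basis = e_basis  # the photon is now prepared in Eve's basis
        # Bob's measurement: a matching basis reads the bit, else random.
        bob_bits.append(bit if b_basis == a_basis else rng.randint(0, 1))

    # Sifting: keep only positions where Alice's and Bob's bases matched.
    kept = [i for i in range(n_bits) if alice_bases[i] == bob_bases[i]]
    errors = sum(alice_bits[i] != bob_bits[i] for i in kept)
    return len(kept), errors / len(kept)

_, qber_clean = bb84(20000, eavesdrop=False)
_, qber_eve   = bb84(20000, eavesdrop=True)
print(qber_clean, qber_eve)  # ~0.0 without Eve, ~0.25 with Eve
```

    Comparing a random sample of sifted bits over a public channel reveals the elevated error rate, so the key is discarded before any secret is sent.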

    However, the implementation of quantum mesh networks does face challenges. Quantum systems are susceptible to environmental noise and decoherence, which can degrade entangled states and compromise data integrity. Addressing these challenges would necessitate the development of advanced error correction techniques and robust quantum error correction codes. And that’s not the only concern. The second one is scalability, as the complexity of entanglement distribution grows exponentially with the number of network nodes. Overcoming this challenge, once again, demands innovative approaches, particularly to qubit storage, manipulation, and entanglement generation.

    Despite these obstacles, recent advancements in quantum technology have brought quantum mesh networking closer to reality. Experimental demonstrations of quantum entanglement over large distances, along with the development of quantum repeaters and entanglement purification protocols, signify progress toward practical implementation.

    Looking forward, the widespread adoption of quantum mesh networking holds promise across various sectors. From secure data transmission to the realization of a quantum internet, the possibilities are wide-ranging.

  • Algorithm FeatUp can capture high and low-level resolutions simultaneously

    Can you imagine a world where computer vision algorithms not only grasp the big picture but also capture every intricate detail with pixel-perfect accuracy? Thanks to FeatUp, a new algorithm developed by MIT researchers and published on arXiv on March 15, this vision is now a reality. FeatUp can capture both high- and low-level resolutions at the same time; its ability to preserve even the tiniest details while also extracting important high-level features from visual data is unmatched.

    Traditional computer vision algorithms are good at understanding the big picture in images, but they struggle to keep all the small details, according to the researchers.

    “The essence of all computer vision lies in these deep, intelligent features that emerge from the depths of deep learning architectures. The big challenge of modern algorithms is that they reduce large images to very small grids of ‘smart’ features, gaining intelligent insights but losing the finer details,” says Mark Hamilton, an MIT PhD student in electrical engineering and computer science, MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) affiliate, and a co-lead author on a paper about the project.

    FeatUp has now changed this by helping algorithms to see both the big picture and the small details at the same time. It’s like upgrading a computer’s vision to have sharp eyesight, similar to how Lasik eye surgery improves human vision.

    “FeatUp helps enable the best of both worlds: highly intelligent representations with the original image’s resolution. These high-resolution features significantly boost performance across a spectrum of computer vision tasks, from enhancing object detection and improving depth prediction to providing a deeper understanding of your network’s decision-making process through high-resolution analysis,” Hamilton says.

    As AI models become more common, there’s a growing need to understand how they work and what they’re focusing on. According to Hamilton, FeatUp works by tweaking images slightly and observing how the algorithm’s features react.

    The FeatUp training architecture: FeatUp learns to upsample features through a consistency loss on low-resolution “views” of a model’s features that arise from slight transformations of the input image. (Description/image source: arXiv)

    “We imagine that some high-resolution features exist, and that when we wiggle them and blur them, they will match all of the original, lower-resolution features from the wiggled images.”

    This process creates many slightly different deep-feature maps, which are then combined into a single clear and high-resolution set of features. According to Hamilton, the idea is to refine low-resolution features into high-resolution ones by essentially playing a matching game.

    “Our goal is to learn how to refine the low-resolution features into high-resolution features using this ‘game’ that lets us know how well we are doing.”

    This approach is similar to how algorithms build 3D models from multiple 2D images, ensuring the predicted 3D object matches all the 2D photos used. With FeatUp, the goal is to predict a high-resolution feature map that matches all the low-resolution feature maps created from variations of the original image.
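
    The “matching game” can be sketched in plain NumPy (an illustrative toy of multi-view consistency, not the authors’ implementation; the shift set, pooling factor, and gradient-descent loop are all assumptions made for this demo): a high-resolution map is optimized so that its jittered, average-pooled views reproduce a set of observed low-resolution views.

```python
import numpy as np

def pool(x, k):
    """Average-pool an HxH map down by factor k."""
    h = x.shape[0] // k
    return x.reshape(h, k, h, k).mean(axis=(1, 3))

def unpool(x, k):
    """Adjoint of average pooling: spread each value over a kxk block."""
    return np.kron(x, np.ones((k, k))) / (k * k)

rng = np.random.default_rng(0)
k, H = 4, 16
true_hr = rng.random((H, H))                # hidden high-res "features"
shifts = [(0, 0), (1, 0), (0, 1), (2, 2)]   # small input jitters
views = [pool(np.roll(true_hr, s, axis=(0, 1)), k) for s in shifts]

hr = np.zeros((H, H))                       # our high-res estimate

def loss(m):
    """Consistency loss: how well jittered, pooled views match."""
    return sum(np.sum((pool(np.roll(m, s, axis=(0, 1)), k) - v) ** 2)
               for s, v in zip(shifts, views))

first = loss(hr)
for _ in range(300):                        # plain gradient descent
    grad = np.zeros_like(hr)
    for s, v in zip(shifts, views):
        resid = pool(np.roll(hr, s, axis=(0, 1)), k) - v
        # Backpropagate through pooling (unpool) and the shift (roll back).
        grad += np.roll(unpool(resid, k), (-s[0], -s[1]), axis=(0, 1))
    hr -= 2.0 * grad
print(first, loss(hr))                      # loss drops as views are matched
```

    FeatUp replaces this hand-rolled loop with a learned upsampler inside a deep network, but the objective is the same: one high-resolution map consistent with every low-resolution view.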

    The team also developed a special layer for deep neural networks to make this process more efficient. This improvement benefits many algorithms, like those used for identifying objects in images.

    “Another application is something called small object retrieval, where our algorithm allows for precise localization of objects. For example, even in cluttered road scenes algorithms enriched with FeatUp can see tiny objects like traffic cones, reflectors, lights, and potholes where their low-resolution cousins fail. This demonstrates its capability to enhance coarse features into finely detailed signals,” says Stephanie Fu ’22, MNG ’23, a PhD student at the University of California at Berkeley and another co-lead author on the new FeatUp paper.

    FeatUp isn’t just useful for understanding algorithms; it also helps with practical tasks like spotting small objects in cluttered scenes, such as on busy roads. This could be crucial for things like self-driving cars.

    “This is especially critical for time-sensitive tasks, like pinpointing a traffic sign on a cluttered expressway in a driverless car. This can not only improve the accuracy of such tasks by turning broad guesses into exact localizations, but might also make these systems more reliable, interpretable, and trustworthy,” Hamilton explains.

    Moreover, FeatUp’s flexibility is evident as it smoothly integrates with existing deep learning setups without requiring extensive retraining. This allows researchers and professionals to easily employ FeatUp to enhance the accuracy and effectiveness of various computer vision tasks, such as object detection and semantic segmentation.

    For instance, if we apply FeatUp before examining the predictions of a lung cancer detection algorithm using methods like class activation maps (CAM), we get a much clearer picture (16 to 32 times higher resolution) of where the model thinks the tumor is located.

    The team hopes FeatUp will become a standard tool in deep learning in the days to come, allowing models to see more details without slowing down. Experts also praise FeatUp for its simplicity and effectiveness, saying it could make a big difference in image analysis tasks.

    “FeatUp represents a wonderful advance towards making visual representations really useful, by producing them at full image resolutions,” says Cornell University computer science professor Noah Snavely, who was not involved in the research.

    The research team plans to present the work at a conference in May.