Year: 2024

  • Nvidia’s new text-to-3D model shows the pace of generative AI evolution

Artificial intelligence is advancing at a rapid pace, and Nvidia’s newest creation, the text-to-3D model LATTE3D, is a prime example. Following the debut of its powerful Blackwell superchip, designed for training advanced AI models, at the NVIDIA GTC event held from March 18 to 21, Nvidia has introduced LATTE3D, a groundbreaking text-to-3D generative AI model.

    LATTE3D acts like a virtual 3D printer, transforming text prompts into detailed 3D objects and animals within seconds. What sets LATTE3D apart is its remarkable speed and quality. Unlike previous models, LATTE3D can generate intricate 3D shapes almost instantly on a single GPU, such as the NVIDIA RTX A6000, which was used for the NVIDIA Research demo.

    Credit: Nvidia

    This advancement means creators can now achieve near-real-time text-to-3D generation, revolutionizing how ideas are brought to life, according to Sanja Fidler, Nvidia’s vice president of AI research.

“A year ago, it took an hour for AI models to generate 3D visuals of this quality – and the current state of the art is now around 10 to 12 seconds. We can now produce results an order of magnitude faster, putting near-real-time text-to-3D generation within reach for creators across industries,” Fidler said.

    The researchers said they trained LATTE3D on specific datasets like animals and everyday objects. But developers have the flexibility to use the same model architecture to train it on other types of data.

    For example, if LATTE3D is taught using a collection of 3D plant images, it could help a landscape designer by swiftly adding trees, bushes, and succulents to a digital garden design while working with a client. Likewise, if it’s trained on household items, the model could create objects to fill virtual home environments, assisting developers in preparing personal assistant robots for real-life tasks.

    To train LATTE3D, NVIDIA used its powerful A100 Tensor Core GPUs. Additionally, the model learned from a variety of text prompts generated by ChatGPT, enhancing its ability to understand different descriptions of 3D objects. For example, it can recognize that prompts about various dog breeds should all result in dog-like shapes.

    3D dogs generated by the Nvidia LATTE3D AI model. Image Credit: Nvidia

NVIDIA Research comprises hundreds of scientists and engineers worldwide, with teams dedicated to various fields including AI, computer graphics, computer vision, self-driving cars, and robotics.

    “This leap is huge. DreamFusion circa 2022 was slow and low quality, but kicked off this generative 3D revolution. Efforts like ATT3D (Amortized Text-to-3D Object Synthesis) chased speed at the cost of quality,” AI creator Bilawal Sidhu wrote on X (formerly Twitter).

  • S24 Ultra’s S-Pen Smell is Real (The Problem isn’t your nose)


If your Samsung Galaxy S24 Ultra’s S Pen smells like burnt plastic, you are not alone. The S Pen dilemma has sparked numerous discussions in online community forums, with users expressing various concerns. Some even feared their house was burning before learning the truth.

    Samsung EU’s moderator, AndrewL, who has been a member of the community since 2018, officially stated the following:

    This isn’t anything to be concerned about. While the S Pen is in its holster, it is close to the internal components of the phone, which will generate heat while in use, and cause the plastic to heat up. This can smell like burning, but it is similar to the smell you might experience after leaving your car in the sun for a few hours. The seats and plastic fittings in the vehicle might smell hot, but this will diminish after it cools. 

The S Pen was promoted as offering “the magic of touch-free control.” But who knew that alongside its 0.7mm pen tip and 4,096 pressure levels, it would also come with a bonus fragrance experience straight out of a sci-fi flick?

    Now that the discussion regarding S24 Ultra’s pen has come up, numerous users have also taken note of similar olfactory experiences with previous iterations of the S Pen.

Amid the laughter and jest, it’s essential to recognize the potentially serious implications of such incidents. A burning smell can often signal overheating or malfunctioning electronic components. Whether or not your phone has an S Pen, that’s a symptom worth watching for.

  • Why the First Commercial Flying Car Must be Self-Driving


In the 1980s, flying cars and self-driving vehicles were two of the most popular futuristic ambitions. These fantasies captured the collective imagination and fueled dreams of a world where technology integrated into everyday life, with the sky as the limit. Fast forward to the present day, and while we may not have flying cars buzzing overhead or fully autonomous vehicles dominating the roads just yet, significant strides have been made in both arenas.

    While flying cars have long been a reality (beginning in the 1950s), the prospect of a commercially available flying car has always seemed too challenging. However, as we inch closer to realizing this dream, it’s imperative to consider the implications of introducing such technology into our transportation ecosystem. In particular, the question arises: shouldn’t the first commercial flying car be self-driving?

At first glance, the idea of a self-driving flying car may seem like a natural progression in our pursuit of convenience and efficiency. After all, autonomous technology has already begun to revolutionize traditional ground transportation, promising increased safety and reduced congestion.

One of the main, and somewhat debated, arguments for self-driving flying cars is safety. Some argue that autonomous systems, unswayed by human error or bias, could significantly decrease the chance of accidents in the sky. We’ll dig deeper into this argument as the article progresses.

The integration of self-driving technology could also democratize access to flying cars, making them more accessible to a wider range of consumers. By eliminating the need for specialized piloting skills, autonomous flying cars could become as ubiquitous as their ground-bound counterparts.

Surprising as it may seem, though, recent studies indicate that up to 75% of people would rather drive their own vehicles than opt for an autonomous alternative.

    Now, unlike terrestrial vehicles, which operate within well-defined roadways and traffic patterns, flying cars would navigate a vastly more complex and unpredictable environment. Airspace is governed by a multitude of regulations, air traffic control protocols, and safety procedures, all of which would need to be integrated into autonomous systems.

    The consequences of failure in an airborne vehicle are inherently more severe than those on the ground. A malfunction or programming error could have fatal implications not only for the occupants of the flying car but also for those on the ground below. The stakes are undeniably higher when operating in three-dimensional space, requiring a level of reliability and redundancy that far exceeds current automotive standards.

But again, from another, arguably more sensible angle: flying cars operate in a complex and potentially hazardous environment, making precise navigation and rapid decision-making essential. Self-driving technology offers the promise of enhanced safety by mitigating human error and integrating readily with existing aviation infrastructure.

Now, the main reason we’re advocating for the first commercial flying car to be self-driving is the need for a smooth transition from road traffic to air traffic. The same logic applies to other transitions, such as the shift from human to robot workers. In the case of flying cars, autonomous technology ensures a smoother and more structured integration into the skies. With emotionless AI at the helm, the potential for human error all but vanishes (except for errors in programming), and air traffic can remain organized and orderly.

    Despite the inherent challenges, the case for self-driving flying cars remains compelling, albeit with some caveats. Of course, the technology is probably not ready for widespread deployment today. However, ongoing progress in artificial intelligence-driven sensor technology and aviation systems may soon bridge the gap towards the debut of the first commercially available autonomous flying vehicle.

  • Reality Editing is More than VR or AR, Especially with AI


Reality editing encompasses a spectrum of technologies and approaches aimed at altering or enhancing human perception of reality. It is more than VR, AR, or even mixed reality, the visual blend of virtual and real worlds. While VR and AR dominate discussions, these technologies represent only a fraction of what reality editing constitutes. In recent years, especially since the rise of generative AI ushered in the current AI era, research and development efforts have increasingly focused on synergizing AI with reality editing technologies.

    Conventional VR and AR

    VR typically involves wearing a head-mounted display to enter a fully immersive virtual environment, while AR overlays digital content onto the user’s view of the real world using devices like smartphones or smart glasses. These technologies have found real-world applications in education, architecture, and many more industries; even healthcare.

    Non-VR aspects and future additions to reality editing

    Here is a basic overview of non-VR facets of reality editing:

    Auditory Reality Editing: Sound plays a crucial role in shaping our perception of reality. Consider noise-canceling headphones—they edit out unwanted sounds, creating a personalized auditory environment. Future applications could involve enhancing natural sounds or even introducing entirely new auditory layers.

    Haptic Reality Editing: Haptic feedback is already a well-established VR integration. Our sense of touch profoundly influences how we perceive the world. Haptic feedback in VR controllers or wearables simulates physical sensations. You can feel the texture of a virtual sculpture or sense the warmth of a digital fireplace.

    Temporal Reality Editing: Time manipulation is a powerful tool. Think about rewinding a video or fast-forwarding through a lecture. In reality editing, we could alter the perception of time. You could relive cherished moments, and on the other hand, compress hours into minutes during a tedious task.

    Emotional Reality Editing: Emotions color our reality. Can we edit emotions? Perhaps. Future technologies might allow us to adjust emotional states. Imagine dialing down anxiety or enhancing feelings of joy.

    AI in Reality Editing

    The integration of AI into reality editing introduces capabilities beyond VR and AR experiences. AI algorithms can analyze user behavior, adapt content in real-time, generate dynamic narratives, and enhance sensory feedback, thereby creating more engaging and realistic virtual environments.

    Recent Advancements in AI-Enhanced Reality Editing


AI-Driven Content Generation: Recent research has focused on large language models, such as OpenAI’s GPT-4, to generate lifelike narratives and dialogue within virtual environments. Intelligent NPCs already exist: NVIDIA ACE’s characters, Jin and Nova, built on NVIDIA’s NeMo LLM, recently talked to each other about their digital reality possibly being an “elaborate cybernetic dream,” generating a new conversation each time. Such AI systems can understand and respond to user input, allowing for more interactive storytelling experiences in VR.

Enhanced Sensory Feedback: AI-powered haptic technology enables more realistic touch sensations in VR environments. A recent haptic breakthrough, published in Nature Electronics, is a skin-integrated multimodal haptic interface for immersive tactile feedback. By integrating AI algorithms with haptic devices, developers can simulate textures, forces, and vibrations, enhancing the sense of presence and immersion for users.

    Neuroadaptive Interfaces: Research into brain-computer interfaces (BCIs) aims to directly interpret neural signals and translate them into actions within virtual environments. BCIs offer the potential for lifelike interaction and control in VR and AR applications by bypassing traditional input devices.

Emotion Recognition: AI algorithms can analyze facial expressions, voice intonations, and physiological signals to infer users’ emotions in real time. With emotion recognition capabilities, developers can customize user experiences and even evoke specific emotional responses to enhance user engagement.

Real-time Adaptation: AI algorithms are being developed to analyze user interactions and adapt virtual scenarios in real time by tracking user behavior and preferences. This is already being employed in digital characters in “The Matrix Awakens”.

    Dynamic Object Interactions: Reinforcement learning algorithms allow virtual agents and objects to display behaviors that feel more natural and react intelligently to user input. This makes experiences that are not only more immersive but also more interactive achievable.

    Cross-reality Collaboration: AI allows for the collaboration between virtual and physical spaces, enabling applications such as mixed reality and remote assistance. Integrating AI-powered communication and interaction tools, users can interact with virtual objects and remote participants as if they were physically present, like in platforms such as Nvidia’s Omniverse.

    Future Directions

    The convergence of AI with reality editing is expected to drive further innovation and transformation across various industries. Future research directions may include:

    • Advancing AI algorithms for more sophisticated content generation and interaction in virtual environments.
    • Exploring new modalities for immersive sensory feedback, such as olfactory and gustatory stimuli.
    • Enhancing AI-powered virtual assistants and agents to provide personalized guidance and support within VR and AR applications.
    • Investigating the potential of AI-driven predictive analytics to anticipate user preferences and adapt virtual experiences proactively.
  • DNA Data Storage in the Era of Molecular Advancements


Information production has been growing exponentially in our data-driven era. However, the installed base of mainstream storage technologies—such as magnetic drives, optical disks, and solid-state storage—is struggling to keep up. Most generated data are discarded, but a significant portion still needs to be stored, and the proportion of data that can be retained is declining. In fact, roughly 90% of all the data ever created worldwide was generated in the last two years. This challenge motivates researchers to seek new storage media.

    The Need for Density, Durability, and Energy Efficiency

    When evaluating storage media, several critical factors come into play:

    • Density: How many bits can be stored per unit of physical volume?
    • Retention: How long can the data remain recoverable?
    • Access Speed: What’s the latency and bandwidth for accessing data?
    • Energy Cost: How much energy is required, both for data at rest and per access?

For archival storage—aimed at storing vast amounts of data for the long term—density, durability, and energy cost at rest are the overriding concerns. Traditional storage technologies alter material properties (electrical, optical, or magnetic) to encode data. However, these technologies are approaching their limits on all of these fronts.

And that’s where molecular data storage comes in handy.

    Molecular Data Storage

    Unlike SSDs, which rely on altering material properties such as electrical, optical, or magnetic characteristics to encode data, molecular storage operates at a much smaller scale. It actually uses individual molecules to store information. This molecular-scale approach allows for incredibly dense storage, as data can be stored in the precise arrangement of atoms within molecules.

    A team of researchers from Brown University, for instance, has made some great progress in molecular data storage. Their work involves custom libraries of small molecules designed explicitly for data storage. The team successfully stored and retrieved over 200 kilobytes of digital image files by encoding the data in mixtures of these custom-synthesized small molecules. This may, indeed, seem modest compared to traditional storage methods, but it represents significant progress in the field of molecular storage. The image files included a Picasso drawing, an image of the Egyptian god Anubis, and others.

They employed small metal plates dotted with tiny spots, each less than a millimeter in diameter. Each spot held a mixture of molecules, with the presence or absence of particular molecules in the mixture encoding the digital data. The number of bits per spot could scale with the number of distinct molecules available for mixing. The data could then be read back with a mass spectrometer, which identified the molecules present in each spot.
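The presence/absence scheme described above can be sketched in a few lines of Python. The eight-molecule library and its names are invented placeholders for illustration, not the actual Brown University library; real readout also has to cope with mass-spectrometry noise, which this toy version ignores.

```python
# Toy sketch: each distinct molecule in the library corresponds to one
# bit position, and a spot's mixture is the set of molecules present.

LIBRARY = ["mol_A", "mol_B", "mol_C", "mol_D",
           "mol_E", "mol_F", "mol_G", "mol_H"]  # 8 molecules -> 8 bits per spot

def write_spot(byte: int) -> set:
    """Encode one byte as the set of molecules deposited in a spot."""
    return {mol for i, mol in enumerate(LIBRARY) if (byte >> i) & 1}

def read_spot(mixture: set) -> int:
    """Recover the byte from the molecules detected (e.g. by mass spec)."""
    return sum(1 << i for i, mol in enumerate(LIBRARY) if mol in mixture)

spot = write_spot(0b10110010)
assert read_spot(spot) == 0b10110010
```

A larger molecule library directly increases the bits per spot, which is why the researchers built custom libraries rather than relying on a handful of off-the-shelf compounds.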

    DNA: A Remarkable Alternative

    Now, enter DNA—a molecule that stands out as an especially attractive alternative for data storage.

    DNA storage involves encoding digital information into DNA sequences. It encompasses the following steps:

    • Encoding: Converting 0–1 binary codes (representing digital data) to A-T-C-G quaternary codes (combinations of nucleotides).
    • Synthesis: Writing the DNA sequences into actual DNA molecules.
    • Storage: Physically conditioning and organizing the synthesized DNA into a library for long-term storage.
    • Random Access: Retrieving and selectively accessing specific DNA sequences.
    • Sequencing: Reading the molecules and converting them back to digital data.

    And here’s why DNA data storage is a highly attractive option:

    Density: Using DNA, we can achieve an astonishing density of up to 10^18 bytes per mm^3—approximately six orders of magnitude denser than any existing media. Imagine encoding vast libraries of information within minuscule volumes!
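As a quick back-of-the-envelope illustration of that density figure, consider storing all of humanity's data in DNA; the ~120-zettabyte estimate of global data volume is an assumed round number for illustration, not from the source.

```python
# Rough arithmetic on the cited DNA storage density.

dna_density = 1e18                # bytes per mm^3, as cited above
world_data = 120e21               # ~120 zettabytes, assumed round figure

volume_mm3 = world_data / dna_density
print(volume_mm3)                 # prints 120000.0  (mm^3)
print(volume_mm3 / 1000)          # prints 120.0     (cm^3, roughly a soda can)
```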

Durability: DNA molecules, when kept away from light and humidity and at reasonable temperatures, can last for centuries to millennia. Compare this to the typical lifetime of commercial tape or optical disks, which is merely decades.

    Ease of Replication: DNA replication, facilitated by techniques like PCR (polymerase chain reaction), allows us to copy large amounts of data at minimal time and resource cost. Imagine effortlessly creating backups of your entire digital archive!

    Operations Over Data: Once data are stored in DNA, we can leverage the DNA hybridization process to perform operations—such as image similarity searches—directly on the data.

    Eternal Relevance: DNA sequencers (readers) are eternally relevant due to their expanding use in life sciences and medicine. As long as there are sequencers, DNA data storage remains viable.

    Bottom Line

    For now, small molecules like porphyrins and fullerenes, rather than long-chain polymers like DNA, have become a focus of interest. Small molecules offer advantages such as ease of production and potentially higher storage capacity. However, the biotechnology industry’s rapid progress in DNA manipulation for life sciences purposes bodes well for data storage applications.


  • UN-adopted first AI resolution addresses major issues but falls short of being futuristic enough


The United Nations has unanimously adopted the first global resolution on artificial intelligence. The resolution marks a significant step in how AI is governed: it calls for using AI for the greater good, with a special group set up to advise on governing AI worldwide. While this first UN-adopted resolution seeks to address various critical aspects of AI development, it falls short of embodying a truly futuristic approach that adequately anticipates and navigates the complexities of AI’s impact on society.

    Reaffirming the UN’s commitment to international law, human rights, and sustainable development goals (SDGs), the preamble of the resolution reads: “Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development The General Assembly, Reaffirming international law, in particular the Charter of the United Nations, and recalling the Universal Declaration of Human Rights, . . .”

The historic international document also acknowledges previous resolutions and declarations concerning technology and human rights.

    Recognizing opportunities and risks

    The long-awaited resolution has pointed out that AI possesses both positive and negative aspects. On one hand, AI can accelerate progress towards Sustainable Development Goals (SDGs) by addressing global issues like poverty, health, food security, climate change, energy, and education.

However, the resolution also acknowledges the risks associated with AI if it is not designed and deployed properly. These risks include spreading misinformation, amplifying biases, violating privacy, and the potential for AI manipulation.

To address these concerns, the resolution emphasizes the importance of developing safe, secure, and trustworthy AI systems. It also presents the necessity for AI development to adhere to international laws and human rights principles.

    “Emphasizes that . . . Member States and, where applicable, other stakeholders to refrain from or cease the use of artificial intelligence systems that are impossible to operate in compliance with international human rights law or that pose undue risks to the enjoyment of human rights, especially of those who are in vulnerable situations, and reaffirms that the same rights that people have offline must also be protected online, including throughout the life cycle of artificial intelligence systems,” the resolution affirms.

    This ensures that AI is used responsibly, safeguarding individuals and society from potential harm.

    Bridging divides and promoting inclusivity

    The UN-adopted resolution’s focus on narrowing digital gaps deserves admiration. It acknowledges the differences in technological progress, where developed nations lead in AI advancements while developing countries often lag behind. This gap in digital access can worsen social and economic disparities.

    In response, the resolution highlights the importance of helping developing nations build their technological capabilities.

    “. . . and providing support for the mitigation of potential negative consequences for workforces, especially in developing countries, in particular the least developed countries, and fostering programmes aimed at digital training, capacity building, supporting innovation and enhancing access to benefits of artificial intelligence systems,” the resolution states.

    Enhancing engineering expertise in these countries, for instance, is crucial for sustainable development and better infrastructure. Collaborating with international organizations and NGOs can provide valuable support in terms of knowledge, funding, and technical assistance.

    Furthermore, the resolution emphasizes the need for inclusive governance in AI development. It stresses the importance of considering the needs and capacities of both developed and developing countries. While developed nations may have more resources and expertise in AI, developing countries may face unique challenges requiring tailored solutions.

    Promoting ethical AI practices

The UN resolution strongly underscores the importance of ethical considerations in the development of AI. Ensuring that AI systems are built and used responsibly is critical. It’s crucial, for example, to design AI systems in a way that promotes fairness, avoids discrimination, and enhances accessibility.

    Respecting human rights is a core part of these ethical considerations. Since AI has the potential to significantly impact human life, it’s vital to develop and employ AI systems while respecting and upholding human rights.

    Preserving privacy is another essential aspect of ethical AI development. AI often deals with sensitive data, so it’s important to handle this information responsibly to safeguard individuals’ privacy. Practices like maintaining good data practices and using representative data sets can help protect privacy.

    Addressing biases is also crucial in AI development. Biases in AI systems can result in unfair or discriminatory outcomes. Therefore, it’s important to identify and mitigate these biases during the AI development process.

    The resolution encourages the adoption of regulatory frameworks and governance approaches that support responsible AI innovation. “Encourages . . .academia and research institutions and technical communities, to provide and promote fair, open, inclusive and non-discriminatory business environment, . . . as well as encourages Member States to develop policies and regulations to promote competition in safe, secure and trustworthy artificial intelligence systems and related technologies . . . ,” the resolution explains.

    These frameworks and approaches can help ensure that AI systems are developed and used responsibly, while also minimizing potential risks.

    Transparency, accountability, and human oversight are emphasized throughout the AI life cycle. Transparency ensures that the workings of AI systems are clear and understandable. Accountability ensures that AI systems and their outcomes are fair and justifiable. Human oversight ensures that humans retain control over AI systems throughout their life cycle.

    The resolution has aimed to realistically address concerns related to algorithmic discrimination and privacy infringement. Algorithmic discrimination can occur when AI systems contribute to unjustified differential treatment based on certain characteristics. Privacy infringement can occur when AI systems misuse or mishandle sensitive data.

    Utilizing data for sustainable development

    The resolution recognizes the crucial role of data in AI systems. AI’s exceptional ability to utilize data makes it an invaluable asset for promoting sustainable development, as stated by the resolution.

    The UN-adopted resolution reads: “Resolves to promote safe, secure and trustworthy artificial intelligence systems to accelerate progress towards the full realization of the 2030 Agenda for Sustainable Development. . .”

    Take, for instance, its capacity to provide analytical insights for biodiversity projects such as those focused on coral reefs.

    Moreover, the resolution underscores the significance of fair, inclusive, and efficient data management practices. This involves establishing standardized procedures for collecting, storing, and utilizing data across the organization, defining protocols for data classification and security based on sensitivity levels, implementing processes to maintain data accuracy and consistency, and enacting policies to manage data throughout its lifecycle.

    In the resolution, there’s also a call for international collaboration and support to enhance data infrastructure and accessibility. For instance, organizations like the International Telecommunication Union (ITU) are fully dedicated to assisting member states in implementing ICT accessibility policies worldwide, ensuring equitable inclusion in digital societies, economies, and environments regardless of age, gender, ability, or location.

    Furthermore, the resolution advocates for trusted cross-border data flows. The challenge lies in creating a global digital framework that facilitates data movement across borders while ensuring appropriate oversight and protection, a principle termed ‘data free flow with trust’ (DFFT).

    By advocating for inclusive and consistent data governance practices, the resolution seeks to harness AI’s potential for sustainable development responsibly. This approach ensures that AI development and usage prioritize the well-being of individuals and society, guarding against potential harm.

    Looking towards the future

Overall, the resolution provides a thorough overview of the current challenges and opportunities presented by AI. It covers important areas like inclusivity, ethics, and data governance. However, it doesn’t fully embrace a futuristic approach to governing AI.

    One of its shortcomings is the absence of a clear roadmap for dealing with rapidly emerging AI technologies and their potential impacts on society. For instance, the resolution doesn’t adequately tackle the regulatory challenges posed by both general and specific AI tools, nor does it address issues such as misinformation, deepfakes, and surveillance in depth.

    Additionally, the resolution could benefit from stronger mechanisms for monitoring and adapting to the rapid pace of technological advancements. Effective AI governance should involve continuous monitoring from the inception of a technology to its implementation and beyond. This includes anticipating and addressing unintended consequences and existential risks promptly and effectively.

    Resolution receives positive reception

    The United States led the resolution, with support from over 120 other Member States. It passed unanimously, without any objections.

    Many in the AI industry welcomed the resolution. Brad Smith, Microsoft’s Vice Chair and President, expressed full support saying, “We fully support the @UN’s adoption of the comprehensive AI resolution. The consensus reached today marks a critical step towards establishing international guardrails for the ethical and sustainable development of AI, ensuring this technology serves the needs of everyone.”

    “The United States also welcomes the UN General Assembly’s adoption of a resolution setting out principles for the deployment and use of artificial intelligence (AI),” Vice President Harris said in a statement.

    China and Russia, along with over 120 member nations, co-sponsored the resolution. The UK, another co-sponsor, has already shown interest in AI regulation. Based on the National AI Strategy and the Science and Technology Framework, they have adopted a pro-innovation approach to AI regulation, aiming to create a proportionate, future-proof, and pro-innovation framework.

  • Cancer detection through NHS AI surpasses human capabilities, identifying tiny cancers missed by doctors


    Artificial intelligence has presented remarkable opportunities to reduce mistakes, aid medical staff and offer patient services around the clock. As AI tools improve, there’s increasing potential to use them extensively in interpreting medical images, X-rays, and scans, diagnosing medical issues, and planning treatments. A new development has emerged in cancer detection: using AI in the National Health Service has shown how technology can find very small signs of breast cancer that doctors might miss.

    Mia, an AI tool tested with NHS doctors, looked at mammograms from over 10,000 women and found 11 cases of breast cancer that doctors hadn’t spotted. These cancers were caught very early, when they were hard to see, showing how AI can help find cancer sooner.

    Barbara was one of the eleven patients who benefited from Mia’s advanced detection capabilities, and her case clearly demonstrates how AI can be instrumental in saving lives. Even though human radiologists didn’t catch it, Mia spotted Barbara’s 6mm tumor early on, meaning she could get surgery quickly and needed only five days of radiotherapy. According to radiologists, patients with breast tumors smaller than 15mm usually have a good chance of survival, with a 90% rate over the next five years.

    Photo Credit: BBC

    BBC reported Barbara as saying that she was pleased the treatment was much less invasive than that of her sister and mother, who had previously also battled the disease.

    As Barbara had not experienced any noticeable symptoms, her cancer may not have been detected until her next routine mammogram three years later without the assistance of the AI tool.

    Mia and similar tools are expected to speed up the process of getting test results, potentially reducing the wait from 14 days to just three, as claimed by Kheiron, the developer. In the trial, Mia’s findings were always reviewed by humans. While currently, two radiologists examine each scan, there’s hope that eventually, one of them could be replaced by the AI tool, lightening the workload for each pair.

    Of the 10,889 women in the trial, only 81 chose not to have their scans reviewed by the AI tool, according to Dr. Gerald Lip, the clinical director of breast screening in northwest Scotland who led the project.

    This shows that AI tools like Mia can become skilled at detecting the signs of specific diseases if they’re trained on enough diverse data. This involves giving the program many different anonymized images of these symptoms from a wide range of people.

    Mia took six years to develop and train, according to Sarah Kerruish, Chief Strategy Officer of Kheiron Medical. It operates using cloud computing power from Microsoft and was trained on “millions” of mammograms sourced from “women all over the world”.

    Kerruish emphasized the importance of inclusivity in developing AI for healthcare, reportedly saying, “I think the most important thing I’ve learned is that when you’re developing AI for a healthcare situation, you have to build in inclusivity from day one.”

    But let’s not overlook Mia’s imperfections. Remarkable as the tool is, it’s not without flaws. One major limitation is its lack of access to patients’ medical histories, which means it might flag cysts that previous scans had already deemed harmless.

    In addition, Mia’s machine learning feature is disabled due to current health regulations that focus on the risks and biases of machine-learning algorithms at the level of input data, algorithm testing, and decision models. So, it can’t learn from its mistakes or improve over time. Each update requires a fresh review, adding to the workload.

    It’s also worth noting that the Mia trial is just an initial test in one location. Although the University of Aberdeen validated the research independently, the results haven’t yet undergone peer review.

    Still, the Royal College of Radiologists acknowledges the potential of this technology. “These results are encouraging and help to highlight the exciting potential AI presents for diagnostics. There is no question that real-life clinical radiologists are essential and irreplaceable, but a clinical radiologist using insights from validated AI tools will increasingly be a formidable force in patient care,” said Dr Katharine Halliday, President of the Royal College of Radiologists.

    Dr. Julie Sharp, from Cancer Research UK, stresses the crucial role of technological innovation in healthcare, especially with the growing number of cancer cases.

    “More research will be needed to find the best ways to use this technology to improve outcomes for cancer patients,” she added.

    Furthermore, various other healthcare-related AI trials are underway across the UK. For example, Presymptom Health is developing an AI tool to analyze blood samples for early signs of sepsis before symptoms manifest.

    Mia has sparked hope among potential cancer patients; however, many such trials are still in their infancy, with results yet to be published.

  • Utilizing quantum entanglement for instantaneous, secure communication across vast distances?

    Quantum mesh networking, an advanced frontier in communication technology, promises to revolutionize data transmission across vast distances.

    At its core is quantum entanglement, a phenomenon that defies conventional understanding and offers unprecedented opportunities for secure and instantaneous communication.

    Quantum entanglement, a fundamental aspect of quantum mechanics, describes the intrinsic connection between particles regardless of their separation in space. This concept challenges traditional notions of locality and opens the door to groundbreaking applications in networking.

    The appeal of quantum entanglement lies in the instantaneous correlations it establishes between entangled particles, regardless of the physical distance between them. This property forms the foundation of quantum mesh networking, facilitating ultra-fast and secure communication over long distances.

    Central to the realization of quantum mesh networking is the use of quantum bits, or qubits, as carriers of information. Unlike classical bits, which exist in either a 0 or 1 state, qubits can exist in a superposition of both states simultaneously, vastly increasing computational power and information storage capacity.
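
    The superposition claim can be made concrete with a few lines of linear algebra. Below is a minimal NumPy sketch (purely illustrative, not tied to any particular quantum SDK) of a qubit placed in an equal superposition, the Born-rule measurement probabilities, and the exponential state-space growth that comes from combining qubits:

```python
import numpy as np

# A classical bit is 0 or 1; a qubit is a unit vector a|0> + b|1>.
ket0 = np.array([1.0, 0.0])

# A Hadamard gate puts |0> into an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
psi = H @ ket0

# Born rule: measurement probabilities are the squared amplitudes.
probs = np.abs(psi) ** 2
print(probs)  # [0.5 0.5] -- equal odds of reading 0 or 1

# n qubits live in a 2**n-dimensional space (tensor product), which is
# where the vast growth in state space comes from: 2 qubits need 4
# amplitudes, and each added qubit doubles the count.
two_qubits = np.kron(psi, psi)
print(two_qubits.size)  # 4
```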

    In quantum mesh networks, nodes equipped with quantum processors serve as the basic units. These nodes are interconnected through quantum entanglement, forming a robust network architecture capable of withstanding disruptions and ensuring:

    Instantaneous Communication

    One of the most remarkable features of quantum mesh networking is how close it comes to instantaneous communication. Measurements on entangled particles are correlated the instant they are made, no matter the distance. Strictly speaking, the no-communication theorem means these correlations cannot carry usable data faster than light on their own; a classical signal is still needed to complete protocols such as quantum teleportation. Even so, pairing entanglement with conventional channels promises real-time communication across vast distances, revolutionizing the way we connect and collaborate.

    Secure Communication

    In addition to speed, quantum mesh networking offers unparalleled security. Any attempt to intercept or eavesdrop on the communication disturbs the entanglement, alerting the sender and receiver to potential security breaches. This tamper-evidence underpins quantum key distribution, which provides a level of security that is theoretically unbreakable, even against adversaries with advanced computational resources.

    Overcoming Distance Limitations

    Traditional communication methods are often limited by distance, with signal degradation occurring over long transmission paths. Quantum mesh networking transcends these limitations by leveraging entanglement to maintain coherence over vast distances. This enables seamless communication between nodes regardless of their geographical separation, making it an ideal solution for applications such as satellite communication and interplanetary exploration.


    A key aspect of quantum mesh networking is entanglement swapping, a process through which distant qubits become entangled indirectly via intermediary entangled particles, extending the reach of quantum communication beyond physical limitations.
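
    As a rough illustration of entanglement swapping, the toy state-vector simulation below (plain NumPy; the qubit labels A, B, C, D are ours, not from any particular system) takes two independent Bell pairs, A–B and C–D, and projects the middle qubits B and C onto a Bell state. The outer qubits A and D end up maximally entangled even though they never interacted directly:

```python
import numpy as np

# |Phi+> = (|00> + |11>)/sqrt(2), a maximally entangled Bell pair.
bell = np.array([1, 0, 0, 1]) / np.sqrt(2)

# Four qubits A,B,C,D: A-B share one Bell pair, C-D share another.
# A and D start out completely uncorrelated.
psi = np.kron(bell, bell).reshape(2, 2, 2, 2)  # indices (A, B, C, D)

# A Bell-state measurement on the middle qubits B and C projects them
# onto |Phi+>; contracting that projector out leaves the A-D state.
proj = bell.reshape(2, 2)
ad = np.einsum('abcd,bc->ad', psi, proj.conj())
ad /= np.linalg.norm(ad)

# The surviving A-D state is itself the Bell pair (|00>+|11>)/sqrt(2).
print(np.round(ad.reshape(4), 3))  # ~[0.707, 0, 0, 0.707]

# Both Schmidt coefficients are 1/sqrt(2): A and D are now maximally
# entangled, extending the entanglement link beyond the direct hops.
print(np.round(np.linalg.svd(ad, compute_uv=False), 3))
```

The sketch assumes the ideal |Phi+> outcome of the Bell measurement; in a real repeater the other three outcomes also occur and are fixed up with local gates after a classical message.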

    The conventional method of quantum mesh networking utilizes quantum key distribution (QKD) to ensure data security. QKD makes use of quantum randomness to generate cryptographic keys whose interception is detectable, guaranteeing end-to-end encryption and protecting sensitive information.
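
    For intuition, here is a toy classical simulation of the sifting and eavesdropper-detection logic behind BB84-style QKD (illustrative only: real QKD manipulates actual photons, and the counts, seed, and variable names here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2000

# Alice picks random bits and random bases (0 = rectilinear, 1 = diagonal).
alice_bits = rng.integers(0, 2, n)
alice_bases = rng.integers(0, 2, n)

# Bob measures each photon in his own random basis.  When the bases
# match he reads Alice's bit exactly; otherwise the outcome is random.
bob_bases = rng.integers(0, 2, n)
match = alice_bases == bob_bases
bob_bits = np.where(match, alice_bits, rng.integers(0, 2, n))

# Sifting: over a public classical channel they keep only the ~50% of
# positions where the bases agreed -- that is the raw shared key.
key_alice = alice_bits[match]
key_bob = bob_bits[match]

# An intercept-and-resend eavesdropper measures in her own random basis
# and retransmits, disturbing the states.  On the sifted positions this
# shows up as a ~25% error rate when Alice and Bob compare a sample.
eve_bases = rng.integers(0, 2, n)
eve_bits = np.where(eve_bases == alice_bases, alice_bits, rng.integers(0, 2, n))
bob_bits_e = np.where(bob_bases == eve_bases, eve_bits, rng.integers(0, 2, n))
qber = np.mean(alice_bits[match] != bob_bits_e[match])
print(f"error rate with eavesdropper: {qber:.2f}")  # typically ~0.25
```

The security comes from physics rather than this classical bookkeeping, but the roughly 25% error signature is exactly what the sifting and error-estimation steps of a QKD protocol look for.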

    However, the implementation of quantum mesh networks does face challenges. Quantum systems are susceptible to environmental noise and decoherence, which can degrade entangled states and compromise data integrity. Addressing these challenges would necessitate the development of advanced error correction techniques and robust quantum error correction codes. And that’s not the only concern. The second one is scalability, as the complexity of entanglement distribution grows exponentially with the number of network nodes. Overcoming this challenge, once again, demands innovative approaches, particularly to qubit storage, manipulation, and entanglement generation.

    Despite these obstacles, recent advancements in quantum technology have brought quantum mesh networking closer to reality. Experimental demonstrations of quantum entanglement over large distances, along with the development of quantum repeaters and entanglement purification protocols, signify progress toward practical implementation.

    Looking forward, the widespread adoption of quantum mesh networking holds promise across various sectors. From secure data transmission to the realization of a quantum internet, the possibilities are wide-ranging.

  • Algorithm FeatUp can capture high and low-level resolutions simultaneously

    Can you imagine a world where computer vision algorithms not only grasp the big picture but also capture every intricate detail with pixel-perfect accuracy? Thanks to a new algorithm called FeatUp, developed by MIT researchers and published on arXiv on March 15, this vision is now a reality. FeatUp can capture both high- and low-level resolutions at the same time, and its ability to preserve even the tiniest details while also extracting important high-level features from visual data is unmatched.

    Traditional computer vision algorithms are good at understanding the big picture in images, but they struggle to keep all the small details, according to the researchers.

    “The essence of all computer vision lies in these deep, intelligent features that emerge from the depths of deep learning architectures. The big challenge of modern algorithms is that they reduce large images to very small grids of ‘smart’ features, gaining intelligent insights but losing the finer details,” says Mark Hamilton, an MIT PhD student in electrical engineering and computer science, MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) affiliate, and a co-lead author on a paper about the project.

    FeatUp has now changed this by helping algorithms to see both the big picture and the small details at the same time. It’s like upgrading a computer’s vision to have sharp eyesight, similar to how Lasik eye surgery improves human vision.

    “FeatUp helps enable the best of both worlds: highly intelligent representations with the original image’s resolution. These high-resolution features significantly boost performance across a spectrum of computer vision tasks, from enhancing object detection and improving depth prediction to providing a deeper understanding of your network’s decision-making process through high-resolution analysis,” Hamilton says.

    As AI models become more common, there’s a growing need to understand how they work and what they’re focusing on. Hamilton says FeatUp works by tweaking images slightly and observing how algorithms react.

    The FeatUp training architecture. FeatUp learns to upsample features through a consistency loss on low resolution “views” of a model’s features that arise from slight transformations of the input image. Description/Image Source: arXiv

    “We imagine that some high-resolution features exist, and that when we wiggle them and blur them, they will match all of the original, lower-resolution features from the wiggled images.”

    This process creates many slightly different deep-feature maps, which are then combined into a single clear and high-resolution set of features. According to Hamilton, the idea is to refine low-resolution features into high-resolution ones by essentially playing a matching game.

    “Our goal is to learn how to refine the low-resolution features into high-resolution features using this ‘game’ that lets us know how well we are doing.”

    This approach is similar to how algorithms build 3D models from multiple 2D images, ensuring the predicted 3D object matches all the 2D photos used. With FeatUp, the goal is to predict a high-resolution feature map that matches all the low-resolution feature maps created from variations of the original image.

    The team also developed a special layer for deep neural networks to make this process more efficient. This improvement benefits many algorithms, like those used for identifying objects in images.

    “Another application is something called small object retrieval, where our algorithm allows for precise localization of objects. For example, even in cluttered road scenes algorithms enriched with FeatUp can see tiny objects like traffic cones, reflectors, lights, and potholes where their low-resolution cousins fail. This demonstrates its capability to enhance coarse features into finely detailed signals,” says Stephanie Fu ’22, MNG ’23, a PhD student at the University of California at Berkeley and another co-lead author on the new FeatUp paper.

    FeatUp isn’t just useful for understanding algorithms; it also helps with practical tasks like spotting small objects in cluttered scenes, such as on busy roads. This could be crucial for things like self-driving cars.

    “This is especially critical for time-sensitive tasks, like pinpointing a traffic sign on a cluttered expressway in a driverless car. This can not only improve the accuracy of such tasks by turning broad guesses into exact localizations, but might also make these systems more reliable, interpretable, and trustworthy,” Hamilton explains.

    Moreover, FeatUp’s flexibility is evident as it smoothly integrates with existing deep learning setups without requiring extensive retraining. This allows researchers and professionals to easily employ FeatUp to enhance the accuracy and effectiveness of various computer vision tasks, such as object detection and semantic segmentation.

    For instance, if we use FeatUp before examining the predictions of a lung cancer detection algorithm using methods like class activation maps (CAM), we can get a much clearer (16-32 times) picture of where the tumor might be located according to the model.

    The team hopes FeatUp will become a standard tool in deep learning in the days to come, allowing models to see more details without slowing down. Experts also praise FeatUp for its simplicity and effectiveness, saying it could make a big difference in image analysis tasks.

    “FeatUp represents a wonderful advance towards making visual representations really useful, by producing them at full image resolutions,” says Cornell University computer science professor Noah Snavely, who was not involved in the research.

    The research team plans to present the work at a conference in May.

  • A common backyard insect inspires innovative device design

    Invention seldom takes place as planned. In 1827, British pharmacist John Walker accidentally ignited a coated stick while experimenting with chemicals, and his chance discovery prompted advancements in matchstick technology. In the same way, new research led by Penn State engineers has uncovered remarkable properties of brochosomes, tiny particles that leafhoppers secrete and coat themselves with, inspiring innovation in next-generation technology devices.

    Leafhoppers have long puzzled scientists with the way they use their brochosomes. These particles, resembling miniature soccer balls with hollow interiors, were first observed in the 1950s. By replicating the complex geometry of brochosomes, the researchers have now revealed their ability to absorb both visible and ultraviolet (UV) light.

    This is the first time “we are able to make the exact geometry of the natural brochosome,” Wong said, explaining that the researchers were able to create scaled synthetic replicas of the brochosome structures by using advanced 3D-printing technology.

    How did they figure this out?

    The team made a larger version of brochosomes, about 20,000 nanometers in size, using advanced 3D printing. They carefully copied the shape, structure, and pore arrangement of these particles to study them closely.

    Using a Micro-Fourier transform infrared (FTIR) spectrometer, they examined how brochosomes interact with different types of infrared light. This helped them understand how these particles manipulate light.

    In the future, the researchers plan to improve the production process for synthetic brochosomes to match the size of natural ones more closely. They also aim to explore other uses for synthetic brochosomes, such as encryption systems where data can only be seen under specific light conditions.

    Replicating the intricate brochosome geometry

    The key to unlocking the potential of brochosomes lies in their precise geometry. Despite being known for decades, replicating brochosomes in the lab has been a tough challenge due to their intricate structure.

    Wang’s team overcame this hurdle using a two-photon polymerization 3D-printing method, producing synthetic brochosomes with remarkable optical properties. These faux brochosomes closely mimic the morphology of their natural counterparts, though at a larger scale.

    Leafhopper and its brochosomes. (A) An optical image of a leafhopper Gyponana serpenta. (B) A scanning electron microscopy (SEM) image of the leafhopper wing (highlighted area in panel A). (C and D) SEM images of brochosomes on the leafhopper wing, revealing their hollow buckyball-like geometry. (E) An SEM image showing the cross-section of a natural brochosome cleaved by the focused ion beam (FIB) technique. (F) The relationship between the diameter of brochosome through-holes and the diameter of brochosomes across different leafhopper species. Brochosome diameter and hole diameter were determined from our experimental measurements and a literature source (18). The fitted dashed line indicates that the through-hole diameters are approximately 28% of the corresponding brochosome diameters. Description/Image Credit: pnas.org

    The consistency in brochosome geometry across leafhopper species is particularly intriguing. Regardless of the insect’s body size, brochosomes maintain a uniform diameter and pore size. This uniformity suggests an evolutionary advantage, enabling leafhoppers to effectively manipulate light to evade predators. By absorbing UV light and scattering visible light, brochosomes create an anti-reflective shield, reducing the insect’s visibility to UV-sensitive predators like birds and reptiles.

    Moreover, the densely packed arrangement of brochosomes on leafhopper wings further enhances their anti-reflective properties. Through careful experimentation and analysis, the researchers demonstrated how brochosomes minimize light reflection through both Mie scattering and through-hole absorption effects. These findings provide a physical basis for understanding leafhopper behavior and evolution.

    Importance of this approach

    The implications of this discovery are far-reaching, according to the researchers. Mimicking nature’s design, bioinspired optical materials could revolutionize various fields, from invisible cloaking devices to more efficient solar energy harvesting.

    Lin Wang, the lead author of the study, highlights the potential for thermal invisibility cloaks based on leafhopper-inspired technology. By regulating light reflection, these devices could obscure thermal signatures, offering applications in military stealth or even consumer products.

    “Nature has been a good teacher for scientists to develop novel advanced materials,” Wang said. “In this study, we have just focused on one insect species, but there are many more amazing insects out there that are waiting for material scientists to study, and they may be able to help us solve various engineering problems. They are not just bugs; they are inspirations.”

    Stealth tech takes inspiration from backyard insect for invisibility innovation

    Inspired by leafhoppers, common insects found in backyards, researchers have started to develop a new generation of invisibility devices. Early this year, Chinese scientists from Zhejiang University introduced a game-changing technology called the ‘Guardian of Drone’: an intelligent aero amphibious invisibility cloak.

    Credit: Zhejiang University

    As reported in Advanced Photonics on January 12, this drone smoothly integrates perception, decision-making, and execution functionalities. The key breakthrough lies in the manipulation of tunable metasurfaces, enabling precise control over scattering patterns across various spatial and frequency domains through spatiotemporal modulation.


    Still, there are challenges to overcome in scaling up the production of synthetic brochosomes and exploring their further applications. Future research will focus on improving how they are made and finding new ways to use them, Wang said.