Author: Britney Foster

  • DNA Data Storage in the Era of Molecular Advancements


    Information production has been growing exponentially in our data-driven era, but the installed base of mainstream storage technologies (magnetic drives, optical disks, and solid-state storage) is struggling to keep up. Most generated data are discarded, yet a significant portion still needs to be stored; unfortunately, the fraction of data that can be retained is declining. By some estimates, about 90% of all the data in the world was generated in the last two years alone. This challenge motivates researchers to seek new storage media.

    The Need for Density, Durability, and Energy Efficiency

    When evaluating storage media, several critical factors come into play:

    • Density: How many bits can be stored per unit of physical volume?
    • Retention: How long can the data remain recoverable?
    • Access Speed: What’s the latency and bandwidth for accessing data?
    • Energy Cost: How much energy is required, both to keep data at rest and per access?

    For archival storage—aimed at storing vast amounts of data for the long term—density, durability, and energy cost at rest are the overriding concerns. Traditional storage technologies encode data by altering material properties (electrical, optical, or magnetic). However, these technologies are approaching their limits on all of these fronts.

    And that’s where molecular data storage comes in handy.

    Molecular Data Storage

    Unlike conventional storage devices, which rely on altering material properties such as electrical, optical, or magnetic characteristics to encode data, molecular storage operates at a much smaller scale: it uses individual molecules to store information. This molecular-scale approach allows for incredibly dense storage, as data can be encoded in the precise arrangement of atoms within molecules.

    A team of researchers from Brown University, for instance, has made some great progress in molecular data storage. Their work involves custom libraries of small molecules designed explicitly for data storage. The team successfully stored and retrieved over 200 kilobytes of digital image files by encoding the data in mixtures of these custom-synthesized small molecules. This may, indeed, seem modest compared to traditional storage methods, but it represents significant progress in the field of molecular storage. The image files included a Picasso drawing, an image of the Egyptian god Anubis, and others.

    They used small metal plates patterned with minuscule spots, each less than a millimeter in diameter. Each spot held a mixture of molecules, and the presence or absence of particular molecules in a mixture encoded the digital data. The number of bits per spot scales with the number of distinct molecules available for mixing. The data could then be read back with a mass spectrometer, which identified the molecules present in each spot.
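
    To make the presence-or-absence scheme concrete, here is a minimal Python sketch. It assumes a hypothetical library of eight distinguishable molecules, so each spot encodes one byte; the molecule names and function names are illustrative only, not the Brown team’s actual chemistry.

    ```python
    # Toy model of presence/absence molecular encoding.
    # Bit i of a byte is 1 if molecule i is present in the spot's mixture.

    LIBRARY = [f"molecule_{i}" for i in range(8)]  # 8 molecules -> 1 byte per spot

    def encode_byte(value: int) -> set[str]:
        """Return the set of molecules to deposit in one spot."""
        return {LIBRARY[i] for i in range(8) if value >> i & 1}

    def decode_spot(detected: set[str]) -> int:
        """Recover the byte from the molecules a mass spectrometer detects."""
        return sum(1 << i for i, m in enumerate(LIBRARY) if m in detected)

    spot = encode_byte(0b10110010)
    assert decode_spot(spot) == 0b10110010
    ```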

    DNA: A Remarkable Alternative

    Now, enter DNA—a molecule that stands out as an especially attractive alternative for data storage.

    DNA storage involves encoding digital information into DNA sequences. It encompasses the following steps:

    • Encoding: Converting binary data (strings of 0s and 1s) into quaternary A-T-C-G codes (sequences of nucleotides); a minimal codec sketch follows this list.
    • Synthesis: Writing the DNA sequences into actual DNA molecules.
    • Storage: Physically conditioning and organizing the synthesized DNA into a library for long-term storage.
    • Random Access: Retrieving and selectively accessing specific DNA sequences.
    • Sequencing: Reading the molecules and converting them back to digital data.
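
    As a rough illustration of the encoding and sequencing steps, here is a minimal Python sketch that maps every two bits to one nucleotide and back. Treat it purely as a toy: real DNA storage codecs also enforce chemistry constraints (such as avoiding long runs of the same base) and add error correction.

    ```python
    # Toy binary <-> DNA codec: 2 bits per nucleotide.
    BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
    BASE_TO_BITS = {b: k for k, b in BITS_TO_BASE.items()}

    def encode(data: bytes) -> str:
        bits = "".join(f"{byte:08b}" for byte in data)
        return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

    def decode(strand: str) -> bytes:
        bits = "".join(BASE_TO_BITS[base] for base in strand)
        return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

    strand = encode(b"hi")          # 16 bits -> 8 nucleotides
    assert strand == "CGGACGGC"
    assert decode(strand) == b"hi"  # sequencing reverses the mapping
    ```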

    And here’s why DNA data storage is a highly attractive option:

    Density: Using DNA, we can achieve an astonishing density of up to 10^18 bytes per mm^3—approximately six orders of magnitude denser than any existing media. Imagine encoding vast libraries of information within minuscule volumes!

    Durability: DNA molecules can last for centuries to millennia when kept away from light and humidity and stored at reasonable temperatures. Compare this to the typical lifetime of commercial tape or optical disks, which is merely decades.

    Ease of Replication: DNA replication, facilitated by techniques like PCR (polymerase chain reaction), allows us to copy large amounts of data at minimal time and resource cost. Imagine effortlessly creating backups of your entire digital archive!

    Operations Over Data: Once data are stored in DNA, we can leverage the DNA hybridization process to perform operations—such as image similarity searches—directly on the data.

    Eternal Relevance: DNA sequencers (readers) are eternally relevant due to their expanding use in life sciences and medicine. As long as there are sequencers, DNA data storage remains viable.

    Bottom Line

    For now, small molecules like porphyrins and fullerenes, rather than long-chain polymers like DNA, have become a focus of interest. Small molecules offer advantages such as ease of production and potentially higher storage capacity. However, the biotechnology industry’s rapid progress in DNA manipulation for life sciences purposes bodes well for data storage applications.


  • UN-adopted first AI resolution addresses major issues but falls short of being futuristic enough


    The United Nations has unanimously adopted the first global resolution on artificial intelligence. The resolution is a big deal for how AI will be managed: it is all about using AI for the greater good, with a special group set up to advise on how to govern AI worldwide. But while this first UN-adopted resolution seeks to address various critical aspects of AI development, it falls short of embodying a truly futuristic approach that adequately anticipates and navigates the complexities of AI’s impact on society.

    Reaffirming the UN’s commitment to international law, human rights, and sustainable development goals (SDGs), the preamble of the resolution reads: “Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development The General Assembly, Reaffirming international law, in particular the Charter of the United Nations, and recalling the Universal Declaration of Human Rights, . . .”

    The historic international document also acknowledges previous resolutions and declarations concerning technology and human rights.

    Recognizing opportunities and risks

    The long-awaited resolution points out that AI has both positive and negative aspects. On one hand, AI can accelerate progress towards Sustainable Development Goals (SDGs) by addressing global issues like poverty, health, food security, climate change, energy, and education.

    However, the resolution also acknowledges the risks associated with AI systems that are not designed and deployed properly. These risks include spreading misinformation, amplifying biases, violating privacy, and the potential for AI manipulation.

    To address these concerns, the resolution emphasizes the importance of developing safe, secure, and trustworthy AI systems, and it asserts that AI development must adhere to international law and human rights principles.

    “Emphasizes that . . . Member States and, where applicable, other stakeholders to refrain from or cease the use of artificial intelligence systems that are impossible to operate in compliance with international human rights law or that pose undue risks to the enjoyment of human rights, especially of those who are in vulnerable situations, and reaffirms that the same rights that people have offline must also be protected online, including throughout the life cycle of artificial intelligence systems,” the resolution affirms.

    This ensures that AI is used responsibly, safeguarding individuals and society from potential harm.

    Bridging divides and promoting inclusivity

    The UN-adopted resolution’s focus on narrowing digital gaps deserves admiration. It acknowledges the differences in technological progress, where developed nations lead in AI advancements while developing countries often lag behind. This gap in digital access can worsen social and economic disparities.

    In response, the resolution highlights the importance of helping developing nations build their technological capabilities.

    “. . . and providing support for the mitigation of potential negative consequences for workforces, especially in developing countries, in particular the least developed countries, and fostering programmes aimed at digital training, capacity building, supporting innovation and enhancing access to benefits of artificial intelligence systems,” the resolution states.

    Enhancing engineering expertise in these countries, for instance, is crucial for sustainable development and better infrastructure. Collaborating with international organizations and NGOs can provide valuable support in terms of knowledge, funding, and technical assistance.

    Furthermore, the resolution emphasizes the need for inclusive governance in AI development. It stresses the importance of considering the needs and capacities of both developed and developing countries. While developed nations may have more resources and expertise in AI, developing countries may face unique challenges requiring tailored solutions.

    Promoting ethical AI practices

    The UN resolution strongly underscores the importance of ethical considerations in the development of AI. Ensuring that AI systems are built and used responsibly is critical: it’s crucial, for example, to design AI systems in ways that promote fairness, avoid discrimination, and enhance accessibility.

    Respecting human rights is a core part of these ethical considerations. Since AI has the potential to significantly impact human life, it’s vital to develop and employ AI systems while respecting and upholding human rights.

    Preserving privacy is another essential aspect of ethical AI development. AI often deals with sensitive data, so it’s important to handle this information responsibly to safeguard individuals’ privacy. Good data practices, such as using representative data sets, can help protect it.

    Addressing biases is also crucial in AI development. Biases in AI systems can result in unfair or discriminatory outcomes. Therefore, it’s important to identify and mitigate these biases during the AI development process.

    The resolution encourages the adoption of regulatory frameworks and governance approaches that support responsible AI innovation. “Encourages . . . academia and research institutions and technical communities, to provide and promote fair, open, inclusive and non-discriminatory business environment, . . . as well as encourages Member States to develop policies and regulations to promote competition in safe, secure and trustworthy artificial intelligence systems and related technologies . . . ,” the resolution explains.

    These frameworks and approaches can help ensure that AI systems are developed and used responsibly, while also minimizing potential risks.

    Transparency, accountability, and human oversight are emphasized throughout the AI life cycle. Transparency ensures that the workings of AI systems are clear and understandable. Accountability ensures that AI systems and their outcomes are fair and justifiable. Human oversight ensures that humans retain control over AI systems throughout their life cycle.

    The resolution has aimed to realistically address concerns related to algorithmic discrimination and privacy infringement. Algorithmic discrimination can occur when AI systems contribute to unjustified differential treatment based on certain characteristics. Privacy infringement can occur when AI systems misuse or mishandle sensitive data.

    Utilizing data for sustainable development

    The resolution recognizes the crucial role of data in AI systems. AI’s exceptional ability to utilize data makes it an invaluable asset for promoting sustainable development, as stated by the resolution.

    The UN-adopted resolution reads: “Resolves to promote safe, secure and trustworthy artificial intelligence systems to accelerate progress towards the full realization of the 2030 Agenda for Sustainable Development. . .”

    Take, for instance, its capacity to provide analytical insights for biodiversity projects such as those focused on coral reefs.

    Moreover, the resolution underscores the significance of fair, inclusive, and efficient data management practices. This involves establishing standardized procedures for collecting, storing, and utilizing data across the organization, defining protocols for data classification and security based on sensitivity levels, implementing processes to maintain data accuracy and consistency, and enacting policies to manage data throughout its lifecycle.

    In the resolution, there’s also a call for international collaboration and support to enhance data infrastructure and accessibility. For instance, organizations like the International Telecommunication Union (ITU) are fully dedicated to assisting member states in implementing ICT accessibility policies worldwide, ensuring equitable inclusion in digital societies, economies, and environments regardless of age, gender, ability, or location.

    Furthermore, the resolution advocates for trusted cross-border data flows. The challenge lies in creating a global digital framework that facilitates data movement across borders while ensuring appropriate oversight and protection, a principle termed ‘data free flow with trust’ (DFFT).

    By advocating for inclusive and consistent data governance practices, the resolution seeks to harness AI’s potential for sustainable development responsibly. This approach ensures that AI development and usage prioritize the well-being of individuals and society, guarding against potential harm.

    Looking towards the future

    Overall, the resolution provides a thorough overview of the current challenges and opportunities presented by AI. It covers important areas like inclusivity, ethics, and data governance. However, it doesn’t fully embrace a futuristic approach to governing AI.

    One of its shortcomings is the absence of a clear roadmap for dealing with rapidly emerging AI technologies and their potential impacts on society. For instance, the resolution doesn’t adequately tackle the regulatory challenges posed by both general and specific AI tools, nor does it address issues such as misinformation, deepfakes, and surveillance in depth.

    Additionally, the resolution could benefit from stronger mechanisms for monitoring and adapting to the rapid pace of technological advancements. Effective AI governance should involve continuous monitoring from the inception of a technology to its implementation and beyond. This includes anticipating and addressing unintended consequences and existential risks promptly and effectively.

    Resolution receives positive reception

    The United States led the resolution, with support from over 120 other Member States. It was adopted unanimously, without objection.

    Many in the AI industry welcomed the resolution. Brad Smith, Microsoft’s Vice Chair and President, expressed full support saying, “We fully support the @UN’s adoption of the comprehensive AI resolution. The consensus reached today marks a critical step towards establishing international guardrails for the ethical and sustainable development of AI, ensuring this technology serves the needs of everyone.”

    “The United States also welcomes the UN General Assembly’s adoption of a resolution setting out principles for the deployment and use of artificial intelligence (AI),” Vice President Harris said in a statement.

    China and Russia, along with over 120 member nations, co-sponsored the resolution. The UK, another co-sponsor, has already shown interest in AI regulation: building on its National AI Strategy and the Science and Technology Framework, it has adopted a proportionate, future-proof, and pro-innovation approach to regulating AI.

  • Cancer detection through NHS AI surpasses human capabilities, identifying tiny cancers missed by doctors


    Artificial intelligence has presented remarkable opportunities to reduce mistakes, aid medical staff and offer patient services around the clock. As AI tools improve, there’s increasing potential to use them extensively in interpreting medical images, X-rays, and scans, diagnosing medical issues, and planning treatments. A new development has emerged in cancer detection: using AI in the National Health Service has shown how technology can find very small signs of breast cancer that doctors might miss.

    Mia, an AI tool tested with NHS doctors, looked at mammograms from over 10,000 women and found 11 cases of breast cancer that doctors hadn’t spotted. These cancers were caught very early, when they were hard to see, showing how AI can help find cancer sooner.

    Barbara was one of eleven patients who benefited from Mia’s advanced detection capabilities. Her case clearly demonstrates how AI can be instrumental in saving lives. Even though human radiologists didn’t catch it, Mia spotted Barbara’s 6mm tumor early on. This meant Barbara could get surgery quickly and only needed five days of radiotherapy. And, according to radiologists, patients with breast tumors smaller than 15mm usually have a good chance of survival, with a 90% rate over the next five years.

    Photo Credit: BBC

    BBC reported Barbara as saying that she was pleased the treatment was much less invasive than that of her sister and mother, who had previously also battled the disease.

    As Barbara had not experienced any noticeable symptoms, her cancer may not have been detected until her next routine mammogram three years later without the assistance of the AI tool.

    Mia and similar tools are expected to speed up the process of getting test results, potentially reducing the wait from 14 days to just three, as claimed by Kheiron, the developer. In the trial, Mia’s findings were always reviewed by humans. While currently, two radiologists examine each scan, there’s hope that eventually, one of them could be replaced by the AI tool, lightening the workload for each pair.

    Of the 10,889 women in the trial, only 81 chose not to have their scans reviewed by the AI tool, according to Dr. Gerald Lip, the clinical director of breast screening in northeast Scotland who led the project.

    This shows that AI tools like Mia can become skilled at detecting the signs of specific diseases if they’re trained on enough diverse data. That training involves feeding the program many anonymized images of those signs, drawn from a wide range of people.

    Mia took six years to develop and train, according to Sarah Kerruish, Chief Strategy Officer of Kheiron Medical. It operates using cloud computing power from Microsoft and was trained on “millions” of mammograms sourced from “women all over the world”.

    Kerruish emphasized the importance of inclusivity in developing AI for healthcare, reportedly saying, “I think the most important thing I’ve learned is that when you’re developing AI for a healthcare situation, you have to build in inclusivity from day one.”

    But wait a moment! Let’s not overlook Mia’s imperfections. Sure, it’s a remarkable tool, but it’s not without its flaws. One major limitation is its lack of access to patients’ medical histories. This means it might flag cysts that were already deemed harmless in previous scans.

    In addition, Mia’s machine learning feature is disabled due to current health regulations that focus on the risks and biases of machine-learning algorithms at the level of input data, algorithm testing, and decision models. So, it can’t learn from its mistakes or improve over time. Each update requires a fresh review, adding to the workload.

    It’s also worth noting that the Mia trial is just an initial test in one location. Although the University of Aberdeen validated the research independently, the results haven’t yet undergone peer review.

    Still, the Royal College of Radiologists acknowledges the potential of this technology. “These results are encouraging and help to highlight the exciting potential AI presents for diagnostics. There is no question that real-life clinical radiologists are essential and irreplaceable, but a clinical radiologist using insights from validated AI tools will increasingly be a formidable force in patient care,” said Dr Katharine Halliday, President of the Royal College of Radiologists.

    Dr. Julie Sharp, from Cancer Research UK, stresses the crucial role of technological innovation in healthcare, especially with the growing number of cancer cases.

    “More research will be needed to find the best ways to use this technology to improve outcomes for cancer patients,” she added.

    Furthermore, various other healthcare-related AI trials are underway across the UK. For example, Presymptom Health is developing an AI tool to analyze blood samples for early signs of sepsis before symptoms manifest.

    Mia has sparked hope among potential cancer patients; however, many such trials are still in their infancy, awaiting published results.

  • Utilizing quantum entanglement for instantaneous, secure communication across vast distances?


    Quantum mesh networking, an advanced frontier in communication technology, promises to revolutionize data transmission across vast distances.

    At its core is quantum entanglement, a phenomenon that defies conventional understanding and offers unprecedented opportunities for secure and instantaneous communication.

    Quantum entanglement, a fundamental aspect of quantum mechanics, describes the intrinsic connection between particles regardless of their separation in space. This concept challenges traditional notions of locality and opens the door to groundbreaking applications in networking.

    The appeal of quantum entanglement lies in the correlations it establishes between entangled particles instantaneously, regardless of the physical distance between them. (To be precise, these correlations carry no usable information on their own; the no-communication theorem means a classical channel is still needed to exploit them.) This property forms the foundation of quantum mesh networking, facilitating fast and secure communication over long distances.

    Central to the realization of quantum mesh networking is the use of quantum bits, or qubits, as carriers of information. Unlike classical bits, which exist in either a 0 or 1 state, qubits can exist in a superposition of both states simultaneously. This vastly (or as I would say, unimaginably) increases the computational power and information storage capacity.

    In quantum mesh networks, nodes equipped with quantum processors serve as the basic units. These nodes are interconnected through quantum entanglement, forming a robust network architecture capable of withstanding disruptions and ensuring:

    Instantaneous Communication

    One of the most striking features claimed for quantum mesh networking is near-instantaneous coordination. Through entanglement, correlations between distant particles appear instantly, no matter the separation. It is worth stressing, however, that this does not allow sending usable information faster than light: reading out the correlations always requires an accompanying classical signal. Even so, entanglement-based protocols promise real-time, tamper-evident communication across vast distances, revolutionizing the way we connect and collaborate.

    Secure Communication

    In addition to speed, quantum mesh networking offers unparalleled security. The entanglement of particles ensures that any attempt to intercept or eavesdrop on the communication disturbs the entangled states, alerting the sender and receiver to potential security breaches. This tamper-evidence underpins quantum key distribution (QKD), which provides a level of security that is, in theory, unbreakable by any amount of computing power.
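
    As a rough illustration of the eavesdropping-detection idea, here is a minimal Python simulation of BB84-style key sifting. It is a classical toy model (random bits and bases standing in for photon polarizations), not a real quantum implementation, and the function names are my own.

    ```python
    import random

    def measure(bit, basis, measure_basis):
        """Measuring in the preparation basis returns the encoded bit;
        measuring in the other basis yields a random outcome."""
        return bit if basis == measure_basis else random.randint(0, 1)

    def bb84_error_rate(n, eavesdrop):
        alice_bits  = [random.randint(0, 1) for _ in range(n)]
        alice_bases = [random.choice("+x") for _ in range(n)]
        bob_bases   = [random.choice("+x") for _ in range(n)]

        photons = list(zip(alice_bits, alice_bases))
        if eavesdrop:  # intercept-resend attack disturbs mismatched bases
            photons = [(measure(b, ba, e), e) for (b, ba), e in
                       zip(photons, (random.choice("+x") for _ in range(n)))]

        errors = kept = 0
        for i, (bit, basis) in enumerate(photons):
            if bob_bases[i] == alice_bases[i]:   # sifting: keep matching bases
                kept += 1
                if measure(bit, basis, bob_bases[i]) != alice_bits[i]:
                    errors += 1
        return errors / kept

    print(bb84_error_rate(100_000, eavesdrop=False))  # ~0.00: clean channel
    print(bb84_error_rate(100_000, eavesdrop=True))   # ~0.25: Eve reveals herself
    ```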

    Overcoming Distance Limitations

    Traditional communication methods are often limited by distance, with signal degradation occurring over long transmission paths. Quantum mesh networking transcends these limitations by leveraging entanglement to maintain coherence over vast distances. This enables seamless communication between nodes regardless of their geographical separation, making it an ideal solution for applications such as satellite communication and interplanetary exploration.


    A key aspect of quantum mesh networking is entanglement swapping, a process through which distant qubits become entangled indirectly via intermediary entangled particles, extending the reach of quantum communication beyond physical limitations.
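
    To make entanglement swapping concrete, here is a small NumPy sketch: an idealized statevector calculation for one Bell-measurement outcome, ignoring noise and the classical correction step a real protocol requires.

    ```python
    import numpy as np

    # Four qubits A, B, C, D. A-B share a Bell pair and C-D share a Bell pair;
    # a Bell-state measurement on B and C leaves A and D entangled, even
    # though A and D never interacted directly.

    bell = np.array([1, 0, 0, 1]) / np.sqrt(2)       # |Phi+> = (|00>+|11>)/sqrt(2)
    state = np.kron(bell, bell).reshape(2, 2, 2, 2)  # axes: A, B, C, D

    # Project qubits B and C onto the |Phi+> Bell state
    phi_plus = bell.reshape(2, 2)                    # indices: B, C
    amp_AD = np.einsum('abcd,bc->ad', state, phi_plus.conj())

    prob = np.sum(np.abs(amp_AD) ** 2)               # probability of this outcome
    post = amp_AD / np.sqrt(prob)                    # post-measurement A,D state

    print(prob)  # 0.25: one of four equally likely Bell outcomes
    print(post)  # diag(0.707, 0.707): A and D now share |Phi+>
    ```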

    The conventional approach to securing quantum mesh networks is QKD, which exploits quantum randomness to generate cryptographic keys immune to undetected eavesdropping, guaranteeing end-to-end encryption and protecting sensitive information.

    However, the implementation of quantum mesh networks does face challenges. Quantum systems are susceptible to environmental noise and decoherence, which can degrade entangled states and compromise data integrity. Addressing these challenges would necessitate the development of advanced error correction techniques and robust quantum error correction codes. And that’s not the only concern. The second one is scalability, as the complexity of entanglement distribution grows exponentially with the number of network nodes. Overcoming this challenge, once again, demands innovative approaches, particularly to qubit storage, manipulation, and entanglement generation.

    Despite these obstacles, recent advancements in quantum technology have brought quantum mesh networking closer to reality. Experimental demonstrations of quantum entanglement over large distances, along with the development of quantum repeaters and entanglement purification protocols, signify progress toward practical implementation.

    Looking forward, the widespread adoption of quantum mesh networking holds promise across various sectors. From secure data transmission to the realization of a quantum internet, the possibilities are wide-ranging.


  • Algorithm FeatUp can capture high and low-level resolutions simultaneously


    Can you imagine a world where computer vision algorithms not only grasp the big picture but also capture every intricate detail with pixel-perfect accuracy? Thanks to FeatUp, a new algorithm developed by MIT researchers and published on arXiv on March 15, this vision is now a reality. FeatUp can capture both high- and low-level resolutions at the same time, preserving even the tiniest details while extracting important high-level features from visual data.

    Traditional computer vision algorithms are good at understanding the big picture in images, but they struggle to keep all the small details, according to the researchers.

    “The essence of all computer vision lies in these deep, intelligent features that emerge from the depths of deep learning architectures. The big challenge of modern algorithms is that they reduce large images to very small grids of ‘smart’ features, gaining intelligent insights but losing the finer details,” says Mark Hamilton, an MIT PhD student in electrical engineering and computer science, MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) affiliate, and a co-lead author on a paper about the project.

    FeatUp has now changed this by helping algorithms to see both the big picture and the small details at the same time. It’s like upgrading a computer’s vision to have sharp eyesight, similar to how Lasik eye surgery improves human vision.

    “FeatUp helps enable the best of both worlds: highly intelligent representations with the original image’s resolution. These high-resolution features significantly boost performance across a spectrum of computer vision tasks, from enhancing object detection and improving depth prediction to providing a deeper understanding of your network’s decision-making process through high-resolution analysis,” Hamilton says.

    As AI models become more common, there’s a growing need to understand how they work and what they’re focusing on. According to Hamilton, FeatUp works by tweaking images slightly and observing how algorithms react.

    The FeatUp training architecture. FeatUp learns to upsample features through a consistency loss on low resolution “views” of a model’s features that arise from slight transformations of the input image. Description/Image Source: arXiv

    “We imagine that some high-resolution features exist, and that when we wiggle them and blur them, they will match all of the original, lower-resolution features from the wiggled images.”

    This process creates many slightly different deep-feature maps, which are then combined into a single clear and high-resolution set of features. According to Hamilton, the idea is to refine low-resolution features into high-resolution ones by essentially playing a matching game.

    “Our goal is to learn how to refine the low-resolution features into high-resolution features using this ‘game’ that lets us know how well we are doing.”

    This approach is similar to how algorithms build 3D models from multiple 2D images, ensuring the predicted 3D object matches all the 2D photos used. With FeatUp, the goal is to predict a high-resolution feature map that matches all the low-resolution feature maps created from variations of the original image.
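
    A heavily simplified PyTorch-style sketch of that matching game might look like the following. The names, the use of pixel shifts as “wiggles,” and average pooling as the downsampling step are my own placeholders, not the authors’ actual code; the real method uses a richer family of transformations and losses.

    ```python
    import torch
    import torch.nn.functional as F

    def featup_style_loss(backbone, upsampler, image, n_views: int = 8):
        """Multi-view consistency: the predicted high-res feature map, when
        shifted and downsampled, should match the backbone's low-res features
        of the identically shifted image."""
        hires = upsampler(backbone(image), image)   # image-resolution features
        loss = 0.0
        for _ in range(n_views):
            dy, dx = (int(torch.randint(-4, 5, (1,))) for _ in range(2))
            view = torch.roll(image, shifts=(dy, dx), dims=(-2, -1))  # "wiggle"
            target = backbone(view)                                   # low-res truth
            pred = torch.roll(hires, shifts=(dy, dx), dims=(-2, -1))
            pred = F.adaptive_avg_pool2d(pred, target.shape[-2:])     # downsample
            loss = loss + F.mse_loss(pred, target)
        return loss / n_views
    ```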

    The team also developed a special layer for deep neural networks to make this process more efficient. This improvement benefits many algorithms, like those used for identifying objects in images.

    “Another application is something called small object retrieval, where our algorithm allows for precise localization of objects. For example, even in cluttered road scenes algorithms enriched with FeatUp can see tiny objects like traffic cones, reflectors, lights, and potholes where their low-resolution cousins fail. This demonstrates its capability to enhance coarse features into finely detailed signals,” says Stephanie Fu ’22, MNG ’23, a PhD student at the University of California at Berkeley and another co-lead author on the new FeatUp paper.

    FeatUp isn’t just useful for understanding algorithms; it also helps with practical tasks like spotting small objects in cluttered scenes, such as on busy roads. This could be crucial for things like self-driving cars.

    “This is especially critical for time-sensitive tasks, like pinpointing a traffic sign on a cluttered expressway in a driverless car. This can not only improve the accuracy of such tasks by turning broad guesses into exact localizations, but might also make these systems more reliable, interpretable, and trustworthy,” Hamilton explains.

    Moreover, FeatUp’s flexibility is evident as it smoothly integrates with existing deep learning setups without requiring extensive retraining. This allows researchers and professionals to easily employ FeatUp to enhance the accuracy and effectiveness of various computer vision tasks, such as object detection and semantic segmentation.

    For instance, if we use FeatUp before examining the predictions of a lung cancer detection algorithm using methods like class activation maps (CAM), we can get a much clearer (16-32 times) picture of where the tumor might be located according to the model.

    The team hopes FeatUp will become a standard tool in deep learning in the days to come, allowing models to see more details without slowing down. Experts also praise FeatUp for its simplicity and effectiveness, saying it could make a big difference in image analysis tasks.

    “FeatUp represents a wonderful advance towards making visual representations really useful, by producing them at full image resolutions,” says Cornell University computer science professor Noah Snavely, who was not involved in the research.

    The research team plans to present their work at a conference in May.

  • A common backyard insect inspires innovative device design


    Invention seldom takes place as planned. British pharmacist John Walker accidentally ignited a coated stick while experimenting with chemicals in 1827, and his chance discovery prompted advancements in matchstick technology. In the same way, new research led by Penn State engineers has uncovered remarkable properties of brochosomes, tiny particles that leafhoppers secrete and coat themselves with, inspiring innovation in next-generation devices.

    Leafhoppers have long puzzled scientists with the way they use their brochosomes. These particles, resembling miniature soccer balls with hollow interiors, were first observed in the 1950s. By replicating the complex geometry of brochosomes, the researchers have now revealed their ability to absorb both visible and ultraviolet (UV) light.

    This is the first time “we are able to make the exact geometry of the natural brochosome,” Wong said, explaining that the researchers were able to create scaled synthetic replicas of the brochosome structures by using advanced 3D-printing technology.

    How did they figure this out?

    The team made a larger version of brochosomes, about 20,000 nanometers in size, using advanced 3D printing. They carefully copied the shape, structure, and pore arrangement of these particles to study them closely.

    Using a Micro-Fourier transform infrared (FTIR) spectrometer, they examined how brochosomes interact with different types of infrared light. This helped them understand how these particles manipulate light.

    In the future, the researchers say, they plan to refine the production process so synthetic brochosomes match the size of natural ones more closely. They also aim to explore other uses, such as encryption systems in which data can only be read under specific light conditions.

    Replicating intricate brochosome geometry

    The key to unlocking the potential of brochosomes lies in their precise geometry. Despite being known for decades, replicating brochosomes in the lab has been a tough challenge due to their intricate structure.

    Wang’s team overcame this hurdle using a two-photon polymerization 3D-printing method, producing synthetic brochosomes with remarkable optical properties. These faux brochosomes closely mimic the morphology of their natural counterparts, though at a larger scale.


    Leafhopper and its brochosomes. (A) An optical image of a leafhopper Gyponana serpenta. (B) A scanning electron microscopy (SEM) image of the leafhopper wing (highlighted area in panel A). (C and D) SEM images of brochosomes on the leafhopper wing, revealing their hollow buckyball-like geometry. (E) An SEM image showing the cross-section of a natural brochosome cleaved by the focused ion beam (FIB) technique. (F) The relationship between the diameter of brochosome through-holes and the diameter of brochosomes across different leafhopper species. Brochosome diameter and hole diameter were determined from our experimental measurements and a literature source (18). The fitted dashed line indicates that the through-hole diameters are approximately 28% of the corresponding brochosome diameters. Description/Image Credit: pnas.org

    The consistency in brochosome geometry across leafhopper species is particularly intriguing. Regardless of the insect’s body size, brochosomes maintain a uniform diameter and pore size. This uniformity suggests an evolutionary advantage, enabling leafhoppers to effectively manipulate light to evade predators. By absorbing UV light and scattering visible light, brochosomes create an anti-reflective shield, reducing the insect’s visibility to UV-sensitive predators like birds and reptiles.

    Moreover, the densely packed arrangement of brochosomes on leafhopper wings further enhances their anti-reflective properties. Through careful experimentation and analysis, the researchers demonstrated how brochosomes minimize light reflection through both Mie scattering and through-hole absorption effects. These findings provide a physical basis for understanding leafhopper behavior and evolution.

    Importance of this approach

    The implications of this discovery are far-reaching, according to the researchers. Mimicking nature’s design, bioinspired optical materials could revolutionize various fields, from invisible cloaking devices to more efficient solar energy harvesting.

    Lin Wang, the lead author of the study, highlights the potential for thermal invisibility cloaks based on leafhopper-inspired technology. By regulating light reflection, these devices could obscure thermal signatures, offering applications in military stealth or even consumer products.

    “Nature has been a good teacher for scientists to develop novel advanced materials,” Wang said. “In this study, we have just focused on one insect species, but there are many more amazing insects out there that are waiting for material scientists to study, and they may be able to help us solve various engineering problems. They are not just bugs; they are inspirations.”

    Stealth tech takes inspiration from backyard insect for invisibility innovation

    Inspired by leafhoppers, common insects found in backyards, researchers have started to develop a new generation of invisibility devices. Early this year, Chinese scientists from Zhejiang University introduced a game-changing technology called the ‘Guardian of Drone’: an intelligent aero amphibious invisibility cloak.

    Credit: Zhejiang University

    As reported in Advanced Photonics on January 12, this drone smoothly integrates perception, decision-making, and execution functionalities. The key breakthrough lies in the manipulation of tunable metasurfaces, enabling precise control over scattering patterns across various spatial and frequency domains through spatiotemporal modulation.


    Still, there are challenges to overcome in scaling up production of synthetic brochosomes and broadening their applications. Future research will focus on improving how they are made and finding new ways to use them, Wang said.

  • Two AIs talk to each other first time in a purely linguistic way


    Teaching artificial intelligence to comprehend and execute tasks solely through verbal or written instructions has been a long-standing challenge. A groundbreaking result has now emerged from researchers at the University of Geneva (UNIGE), who published their findings in Nature Neuroscience on Monday. The paper details an unprecedented AI model that not only excels at tasks but can also communicate them to another AI in a purely linguistic manner, enabling the latter to replicate them.

    Humans have a special talent for learning new things just by hearing or reading about them, and then explaining them to others. This ability sets us apart from animals, which usually need lots of practice and can’t pass on what they’ve learned.

    In the world of computers, there’s a field called Natural Language Processing that tries to copy this human skill. It aims to make machines understand and respond to spoken or written words. This technology uses artificial neural networks, which are like simplified versions of the connections between neurons in our brains.

    But, even though we’ve made progress, we still don’t fully understand all the complicated brain processes involved. So while computers can understand language to some extent, they’re not quite as good as humans at grasping all the intricacies.

    Until now, teaching AI to understand human language and act on it has been really tough. The UNIGE team has created an AI model, built from artificial neural networks, that learned to do simple jobs like finding things or reacting to what it sees, and then explained those tasks in words to another AI.


    a,b, Illustrations of example trials as they might appear in a laboratory setting. The trial is instructed, then stimuli are presented with different angles and strengths of contrast. The agent must then respond with the proper angle during the response period. a, An example AntiDM trial where the agent must respond to the angle presented with the least intensity. b, An example COMP1 trial where the agent must respond to the first angle if it is presented with higher intensity than the second angle otherwise repress response. c, Diagram of model inputs and outputs. Sensory inputs (fixation unit, modality 1, modality 2) are shown in red and model outputs (fixation output, motor output) are shown in green. Models also receive a rule vector (blue) or the embedding that results from passing task instructions through a pretrained language model (gray). A list of models tested is provided in the inset. Description/Image Credit: nature.com

    Dr. Alexandre Pouget, a professor at UNIGE’s Faculty of Medicine, said this was a big deal because while AI can understand and make text or images, it’s not good at turning words into actions.

    “Currently, conversational agents using AI are capable of integrating linguistic information to produce text or an image. But, as far as we know, they are not yet capable of translating a verbal or written instruction into a sensorimotor action, and even less explaining it to another artificial intelligence so that it can reproduce it,” Pouget said.

    The AI model they built has a complex network of artificial neurons that mimic parts of the brain responsible for language.

    “We started with an existing model of artificial neurons, S-Bert, which has 300 million neurons and is pre-trained to understand language. We ‘connected’ it to another, simpler network of a few thousand neurons,” explains Reidar Riveland, a PhD student in the Department of Basic Neurosciences at the UNIGE Faculty of Medicine, and first author of the study.
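
    The architecture described, a pretrained sentence encoder feeding a small task network, might look roughly like this PyTorch sketch. It is a loose illustration of the idea only: the layer sizes, input/output dimensions, and the specific sentence-transformer model are invented placeholders, not the UNIGE model’s details.

    ```python
    import torch
    import torch.nn as nn
    from sentence_transformers import SentenceTransformer

    class InstructedAgent(nn.Module):
        """Toy instruction-following agent: a frozen sentence encoder
        (standing in for S-Bert) conditions a small sensorimotor network."""
        def __init__(self, n_sensory: int = 65, n_motor: int = 33):
            super().__init__()
            self.encoder = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder
            embed_dim = self.encoder.get_sentence_embedding_dimension()
            self.policy = nn.Sequential(
                nn.Linear(embed_dim + n_sensory, 256), nn.ReLU(),
                nn.Linear(256, n_motor),
            )

        def forward(self, instruction: str, sensory: torch.Tensor) -> torch.Tensor:
            with torch.no_grad():  # the language model stays frozen
                lang = self.encoder.encode(instruction, convert_to_tensor=True)
            return self.policy(torch.cat([lang, sensory], dim=-1))

    agent = InstructedAgent()
    action = agent("respond to the dimmer of the two stimuli", torch.randn(65))
    ```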

    In the experiment’s initial phase, the researchers trained the AI to mimic Wernicke’s area, responsible for language comprehension. Then, they moved to the next stage, where the AI was taught to replicate Broca’s area, aiding in speech production.

    Remarkably, all of this was done using standard laptop computers. The AI received written instructions in English, such as indicating directions or identifying brighter objects.

    Once the AI mastered these tasks, it could explain them to another AI, effectively teaching it the tasks. This was the first instance of two AIs conversing solely through language, according to the researchers.

    “Once these tasks had been learned, the network was able to describe them to a second network — a copy of the first — so that it could reproduce them. To our knowledge, this is the first time that two AIs have been able to talk to each other in a purely linguistic way,” says Alexandre Pouget, who led the research.

    This breakthrough could have a huge impact on robotics, according to Dr. Pouget. Imagine robots that can understand and talk to each other, making them incredibly useful in factories or hospitals.

    Dr. Pouget believes this could lead to a future where machines work together with humans in ways we’ve never seen before, making things faster and more efficient.

  • UAE has adopted cloud seeding technology as part of its efforts to augment rainfall


    The United Arab Emirates (UAE) is planning to expand its use of cloud seeding technology to tackle water scarcity, a pressing concern given the region’s dry climate.

    Back in the 1990s, the UAE started using cloud seeding, a method of coaxing clouds to produce more rain. Led by Sheikh Mansour bin Zayed Al Nahyan, the UAE’s vice president, the country invested up to $20 million in cloud seeding research by the early 2000s. Collaborating with renowned institutions like the National Center for Atmospheric Research in Colorado and NASA, the UAE set the stage for its cloud seeding program.

    The Need for Cloud Seeding in the UAE

    This initiative is urgent because the region is vulnerable to climate change impacts, worsened by rising global temperatures. With an average rainfall of less than 200 millimeters annually, the UAE faces a sharp contrast to places like London and Singapore, where rain is much more plentiful. Additionally, scorching summer temperatures reaching up to 50 degrees Celsius make water scarcity even worse, especially for agriculture.

    According to the United Nations, by 2025, around 1.8 billion people worldwide will have serious water scarcity issues, with the Middle East standing out as one of the hardest-hit areas. So, the UAE’s decision to use cloud seeding technology is a proactive step to tackle water scarcity challenges head-on.

    At the core of this effort lies the National Center of Meteorology (NCM) in Abu Dhabi, which serves as the primary coordinator of cloud seeding activities. With a dedication of over 1,000 hours annually to cloud seeding, the NCM operates with a well-equipped infrastructure, featuring a network of advanced weather radars and more than 60 weather stations. This setup enables meteorologists to identify suitable clouds for seeding, ultimately amplifying rainfall.

    The process of cloud seeding involves specialized aircraft carrying hygroscopic flares loaded with salt components. Once the right clouds are identified, pilots release these flares, which prompt the formation of ice crystals or raindrops within the clouds. This leads the clouds to release more raindrops than they would naturally.

    Process of Cloud Seeding

    Cloud seeding is a process used to boost rainfall by encouraging clouds to produce more raindrops. At the National Center of Meteorology (NCM) in Abu Dhabi, experts keep a close eye on the weather to find clouds suitable for seeding. When the right clouds are identified, special airplanes equipped with flares loaded with salt are sent up.

    These flares, weighing about 1 kilogram each, are designed to burn slowly and release salt particles into the clouds. Once in the clouds, these particles help to create more raindrops. Unlike some older methods that might use potentially harmful substances, the UAE’s program uses natural salts, which are safer for the environment.

    The NCM is also working on new materials, like nanomaterials coated with titanium oxide, which could be even more effective. These materials are being tested to ensure they work well and are environmentally friendly, underscoring the UAE’s commitment to innovative and sustainable solutions to water scarcity.

    Criticism, Cost and Practice

    While some critics have raised concerns about the ethics and environmental impact of cloud seeding, supporters emphasize its scientific basis. Notably, the UAE’s program avoids using silver iodide, a common seeding agent, due to environmental worries. Instead, they use natural salts, ensuring safety and environmental responsibility.

    “Our specialized aircraft only use natural salts, and no harmful chemicals,” news agencies reported the NCM as saying.

    Last year, the NCM revealed plans to enhance and modernize the program by incorporating additional advanced cloud seeding aircraft into its fleet. The Wx-80 turboprop aircraft can hold more cloud-seeding materials and comes with advanced safety features and other systems, as mentioned by the organization last year.

    Prior to this change, the NCM primarily depended on Beechcraft KingAir C90 planes for their cloud seeding missions.

    Cloud seeding usually costs between $0.50 and $1.00 per acre, but it can vary based on factors like area size and seeding method.

    As of this time last year, cloud seeding operations cost roughly Dh29,000 (US$8,000) per flight hour, and on average more than 900 hours of such operations were flown every year.
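
    Putting those two figures together gives a rough annual spend. This is back-of-the-envelope arithmetic based on the numbers above, not an official NCM figure:

    ```python
    cost_per_hour_usd = 8_000   # ~Dh29,000 per flight hour
    hours_per_year = 900        # average annual seeding hours
    print(f"${cost_per_hour_usd * hours_per_year:,} per year")  # $7,200,000
    ```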

    China operates the world’s biggest cloud seeding system, firing silver iodide rockets into the sky to increase rainfall over dry regions, including Beijing.

  • UK government enhances fraud detection with advanced AI technology

    The UK government is stepping up its efforts to fight fraud using advanced artificial intelligence (AI) technology. They’ve fortified their main fraud-spotting tool, the Single Network Analytics Platform (SNAP), with some major upgrades.

    Now, SNAP can sniff out shady networks, activities, and users linked to organized crime or dodging sanctions.

    “Criminals should be aware that we’re putting technology on the front line to detect fraud and protect taxpayers’ money,” said Baroness Neville-Rolfe DBE CMG, Minister of State at the Cabinet Office.

    “Adding sanctions and debarment data to our AI fraud detection tool will help us identify organized networks stealing from the public purse.”

    What’s new in SNAP?

    Well, they’ve added these three fresh sets of data:

    • Info on 18,000 UK and US entities facing sanctions, including those slapped with penalties due to Russia’s invasion of Ukraine.
    • Details on 1,000 entities barred by the World Bank from bagging its contracts.
    • Records of 647,000 dormant UK companies, those not currently doing business.

    What prompted this attention?

    It’s about assisting public entities such as government departments in detecting more individuals attempting to fraudulently obtain public funds through questionable contracts, grants, or loans.

    SNAP, launched in 2023 through a £4 million collaboration with tech experts at Quantexa, is revolutionizing the fight against public sector fraud. And Minister Neville-Rolfe isn’t stopping there. She’s actively planning more initiatives to root out cunning fraudsters using AI, particularly those involved in practices like ‘phoenixing’ – where they repeatedly establish new companies to evade debts.
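
    To illustrate the kind of cross-referencing a network-analytics platform can do, here is a hedged Python sketch using networkx. It is purely illustrative of the concept of flagging entities connected to sanctioned or dormant parties; the data are made up, and it does not reflect SNAP’s or Quantexa’s actual implementation.

    ```python
    import networkx as nx

    # Toy entity graph: nodes are companies/people, edges are known links
    # (shared directors, addresses, payments). Entirely invented data.
    G = nx.Graph()
    G.add_edges_from([
        ("AcmeLtd", "J.Smith"), ("J.Smith", "ShellCo1"),
        ("ShellCo1", "SanctionedCorp"), ("OtherLtd", "K.Jones"),
    ])
    sanctioned = {"SanctionedCorp"}   # e.g., from a sanctions dataset
    dormant = {"ShellCo1"}            # e.g., from dormant-company records

    def risk_flags(graph, applicant, max_hops=2):
        """Flag an applicant whose network neighborhood touches
        sanctioned or dormant entities within max_hops links."""
        nearby = nx.single_source_shortest_path_length(graph, applicant,
                                                       cutoff=max_hops)
        return {
            "near_sanctioned": bool(sanctioned & nearby.keys()),
            "near_dormant": bool(dormant & nearby.keys()),
        }

    print(risk_flags(G, "AcmeLtd"))
    # {'near_sanctioned': False, 'near_dormant': True}
    ```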

    Fraud in the UK

    Fraud is widespread in the UK, constituting over 40% of England and Wales’ crime, with 3.5 million incidents reported from April 2022 to March 2023. UK Finance reported £1.2 billion stolen through fraud in 2022, mostly starting online (78%).

    In the past year, while unauthorized fraud saw a decrease of less than one percent, authorized push payment (APP) fraud losses amounted to £485.2 million, notably in investment and purchase fraud cases.

    The government will keep upgrading the AI fraud detection tool with new data regularly, according to minister Neville-Rolfe.

    In addition, she announced that, in the next year, the government will start some projects using AI to find new ways to spot fraud.

  • GPT-powered humanoid figure 01 masters speaking and reasoning on the job


    A new breakthrough in artificial intelligence has been achieved through the collaboration of Figure and OpenAI. They’ve demonstrated the impressive abilities of their humanoid robot, Figure 01, in a groundbreaking video released on March 13.

    The progress made by Figure in building humanoid robots is truly impressive. Led by entrepreneur Brett Adcock, the company quickly gathered experts from top companies like Boston Dynamics, Tesla, Google DeepMind, and Archer Aviation. Their goal? To create the first general-purpose humanoid robot that’s commercially viable.

    The journey from idea to reality has been fast. By October, Figure 01 was already up and running, doing basic tasks on its own. By the end of the year, it could learn from watching and was ready to start working at BMW by mid-January.

    During a recent warehouse demonstration, we got a peek into what the future of robotics might look like. This demonstration happened at the same time as Figure announced some big news: they’ve successfully secured Series B funding and teamed up with OpenAI.

    Together, they’re working on creating advanced AI models designed specifically for humanoid robots.

    Adcock, an American technology entrepreneur and the founder and CEO of Figure AI, wrote on a social media platform that the collaboration aims to accelerate Figure’s commercial timeline by enhancing humanoid robots’ capability to process and reason from language.

    Adcock shared important details in the post, explaining that Figure 01’s cameras send data to a smart system trained by OpenAI.

    At the same time, Figure’s own networks process images quickly. OpenAI’s work contributes to the robot’s ability to understand spoken commands. This capability ensures that the robot can act precisely in response to verbal instructions.
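
    Based on that description, the overall loop might be organized something like the sketch below. This is speculative pseudocode of a generic vision-language-action pipeline; every name here is invented for illustration, and Figure and OpenAI have not published their actual architecture.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class Behavior:
        skill: str                     # e.g., "pick", "place", "handover"
        params: dict = field(default_factory=dict)

    def robot_loop(camera, microphone, vlm, policies, actuators):
        """Hypothetical perception-reasoning-action loop: camera images and
        speech feed a large vision-language model, whose high-level choices
        are executed by fast onboard visuomotor policies."""
        history: list[Behavior] = []
        while True:
            image = camera.read()                # onboard camera frames
            command = microphone.transcribe()    # spoken instruction, if any
            # The vision-language model picks the next high-level behavior
            behavior = vlm.plan(image=image, command=command, history=history)
            # A fast local network turns that behavior into joint trajectories
            actions = policies[behavior.skill](image, behavior.params)
            actuators.execute(actions)
            history.append(behavior)
    ```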

    Adcock also made it clear that the demo wasn’t controlled remotely; it showed the robot working entirely on its own.

    The progress is really impressive, with Adcock aiming for global-scale operations where humanoid robots play a big role.

    They’ll be utilizing this investment to fast-track Figure’s plans for deploying humanoid robots commercially, Adcock writes in the post. These funds will be directed towards various aspects, including AI training, manufacturing, deploying more robots, expanding the engineering team, and pushing forward with commercial deployment efforts.

    Correction: An earlier version of this article misspelled its title.