Author: Britney Foster

  • AI-Powered PCs: Overhyped Trend or Emerging Reality?

    Extensive discussions surround how artificial intelligence (AI) and machine learning (ML) are revolutionizing various sectors, including personal computing. The emergence of AI-powered PCs has generated considerable excitement. These systems offer improved performance and innovative capabilities.

    However, is this emerging trend genuinely innovative, or merely an exaggerated marketing tactic? This article will examine the current state of AI PCs, their real-world effects, and future outlook.

    AI-Powered PCs

    AI-powered PCs are designed to incorporate advanced neural processing units (NPUs) alongside traditional processors and graphics cards. The aim is to enable these machines to perform AI and machine learning tasks more efficiently, directly on the device rather than relying on cloud computing. This design holds the potential for improved performance, faster processing, and better energy efficiency for AI-related applications.

    NPUs are specialized processors intended to handle AI-specific tasks. While GPUs (graphics processing units) can also perform these tasks, NPUs are optimized for efficiency, making them ideal for laptops where power consumption and battery life are critical. However, as of now, NPUs are still in the developmental phase and cannot fully replace GPUs in all tasks.
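
    To make the hardware split concrete, here is a minimal sketch of how an application might prefer an NPU-backed execution provider and fall back to the CPU, using ONNX Runtime; the provider names depend on the installed build, and "model.onnx" is a hypothetical model file.

    ```python
    import onnxruntime as ort

    # Ask the runtime which execution providers this build/machine supports.
    available = ort.get_available_providers()

    # Prefer an NPU-backed provider when present, otherwise fall back to CPU.
    # "QNNExecutionProvider" targets Qualcomm NPUs; names vary by vendor/build.
    providers = [p for p in ("QNNExecutionProvider", "CPUExecutionProvider")
                 if p in available]

    session = ort.InferenceSession("model.onnx", providers=providers)
    print("Running on:", session.get_providers()[0])
    ```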

    Intel and AMD have introduced NPUs in their latest processor lines, such as Intel’s Core Ultra and AMD’s Ryzen 8000G. These processors aim to enhance AI performance for applications like video calling effects, AI-driven document processing, and more. Nevertheless, these NPUs still lag behind the performance expectations set by industry standards. For instance, Intel’s current NPUs reportedly fall short of the 40 trillion operations per second (TOPS) required for optimal performance in certain AI tasks.

    Current Market Adoption and Performance

    The AI PC market is expanding, with significant shipments reported. According to Canalys, AI PCs accounted for 14% of all personal computer shipments in the second quarter of 2024.

    Apple leads this market with its M-series chips, which feature neural engines capable of performing AI tasks. Microsoft has also made strides with its Copilot+ AI PCs, integrating Qualcomm’s Snapdragon PC chips with NPUs.

    Despite these advances, the overall performance and utility of AI PCs remain mixed. Intel’s push into the AI PC market has seen some success, with the Core Ultra processors offering over 100 AI experiences. However, real-world applications and user benefits are still limited. The AI features in these PCs, such as improved multitasking and enhanced security, are in their early stages. For instance, the integration of AI into everyday tasks like email management and data analysis is promising but has yet to reach its full potential.

    Hurdles in AI PC Expansion

    Despite the hype, several significant challenges hinder the widespread adoption of AI PCs. One major issue is the lack of substantial integration between AI hardware and software.

    For instance, Microsoft’s Copilot, a key feature advertised with AI PCs, currently operates in the cloud rather than utilizing the local NPU hardware. This results in slower performance and less efficient task handling, undermining the benefits of having an NPU in the device.

    Moreover, the current software ecosystem does not fully utilize the capabilities of NPUs. Most AI applications are still designed to run on cloud servers, making the specialized hardware in AI PCs less impactful. This situation is further exacerbated by the slow pace at which developers are adopting and optimizing their applications for NPU technology.

    Another challenge is the high cost of AI PCs. As they incorporate cutting-edge technology, these machines are often priced higher than traditional PCs. This elevated pricing can be a barrier for many consumers, limiting the market for AI-powered devices.

    Future Prospects and Progress

    The future of AI PCs holds immense potential, but realizing it will require significant advances.

    Intel CEO Pat Gelsinger announced at Computex that 80% of PCs are projected to be AI-driven by 2028, with Intel positioning itself at the forefront: the company has already shipped over 8 million PCs featuring its Core Ultra chips since December.

    According to Gartner’s late 2023 research report, more than 80% of enterprises are expected to adopt some form of generative AI by 2026.

    Many anticipate that the forthcoming release of Windows 12, expected in 2025, will play a crucial role in shaping the future of AI PCs. The release is expected to integrate AI capabilities more deeply, potentially unlocking new features and functionalities for AI PCs.

    This upgrade could address some of the current limitations and provide a clearer picture of the true potential of AI-powered devices.

    Intel and other chipmakers are also working on next-generation NPUs with higher performance metrics. They share the common objectives of overcoming current limitations and offering more valuable benefits for AI applications.

    As AI technology evolves and NPUs become more powerful and more deeply integrated into everyday computing tasks, their role in enhancing productivity and efficiency will become increasingly evident.

  • Princeton’s AI revolutionizes fusion reactor performance

    Fusion energy holds immense promise. The goal is to harness the power of the stars to generate clean, limitless energy on Earth. However, achieving this requires overcoming significant challenges. Researchers at Princeton and the Princeton Plasma Physics Laboratory (PPPL) have achieved a breakthrough by using machine learning (ML) to control plasma edge bursts in fusion reactors; this advancement enhances reactor performance while preventing damage.

    Achieving a sustained fusion reaction is complex. It requires maintaining a plasma that is dense, hot, and confined long enough for fusion to occur. Yet, as researchers push plasma performance limits, new challenges arise. One major issue is energy bursts escaping from the edge of the plasma. These edge bursts impact performance and damage reactor components over time.

    The team has developed a machine learning method to suppress these harmful edge instabilities. They achieved this without sacrificing plasma performance. Their approach optimizes the system’s suppression response in real time, maintaining high performance without edge bursts at different fusion facilities.

    The researchers published their findings on May 11 in Nature Communications. They demonstrated their method’s success at the KSTAR tokamak in South Korea and the DIII-D tokamak in San Diego. Each facility has unique operating parameters, yet the machine learning approach achieved strong confinement and high fusion performance without harmful plasma edge bursts.

    According to research leader Egemen Kolemen, associate professor of mechanical and aerospace engineering at the Andlinger Center for Energy and the Environment, the team not only demonstrated that their approach could maintain a high-performing plasma without instabilities but also proved its effectiveness at two different facilities.

    “We demonstrated that our approach is not just effective – it’s versatile as well,” said Kolemen.

    High-confinement mode in fusion reactors is a promising approach. It involves a steep pressure gradient at the plasma’s edge, offering enhanced plasma confinement. However, this mode historically comes with instabilities at the plasma’s edge. Traditional methods to control these instabilities, like applying magnetic fields, often lead to lower performance.

    “We have a way to control these instabilities, but in turn we’ve had to sacrifice performance, which is one of the main motivations for operating in high-confinement mode,” said Kolemen, who is also a staff research physicist at PPPL.

    The machine learning model developed by the Princeton-led team reduces computation time from tens of seconds to milliseconds. “With our machine learning surrogate model, we reduced the calculation time of a code that we wanted to use by orders of magnitude,” said co-first author Ricardo Shousha, a postdoctoral researcher at PPPL and a former graduate student in Kolemen’s group.

    This enables real-time optimization. The model monitors the plasma’s status and adjusts magnetic perturbations as needed. This balance between edge burst suppression and high fusion performance is achieved without sacrificing one for the other.

    Fusion reactors like KSTAR and DIII-D have shown that this machine learning approach is robust and versatile. The team is now refining their model for future reactors like ITER (Latin for “the way” and originally an acronym for “International Thermonuclear Experimental Reactor”), currently under construction in southern France. One area of focus is enhancing the model’s predictive capabilities to recognize precursors to harmful instabilities, avoiding edge bursts entirely.

    This ML approach represents a significant breakthrough in fusion energy research. It addresses one of the main challenges in developing fusion power as a clean energy resource. The ability to control plasma edge bursts in real time without compromising performance is a game-changer.

    “These machine learning approaches have unlocked new ways of approaching these well-known fusion challenges,” said Kolemen.

    Fusion reactors rely on maintaining a high-performing plasma. Traditional physics-based optimization methods are computationally intense and time-consuming. They can’t keep up with the millisecond changes in plasma behavior. This machine learning method overcomes that limitation.

    The Princeton team’s model uses a fully connected multi-layer perceptron (MLP) driven by nine inputs. These include the total plasma current, edge safety factor, and plasma elongation, among others. The outputs determine the coil current distribution across the top, middle, and bottom 3D coils.
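
    The paper’s exact layer sizes aren’t given here, but the shape described above, nine plasma parameters in and three coil-row currents out, can be sketched in a few lines of PyTorch; the hidden-layer widths below are illustrative assumptions, not the team’s published architecture.

    ```python
    import torch
    import torch.nn as nn

    # Surrogate MLP: nine plasma inputs (total plasma current, edge safety
    # factor, elongation, ...) -> currents for the top, middle, and bottom
    # 3D coil rows. Hidden widths are illustrative assumptions.
    class RMPSurrogate(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(9, 128), nn.ReLU(),
                nn.Linear(128, 128), nn.ReLU(),
                nn.Linear(128, 3),
            )

        def forward(self, x):
            return self.net(x)

    model = RMPSurrogate()
    plasma_state = torch.randn(1, 9)      # placeholder for measured parameters
    coil_currents = model(plasma_state)   # millisecond-scale inference
    ```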

    The researchers used simulations from 8490 KSTAR equilibria to train the model. This approach predicts the optimal 3D coil setup that minimizes core perturbations and ensures safe edge burst suppression. Real-time adaptability is crucial for achieving reliable edge burst suppression in reactors.

    Maintaining thermal and energetic particle confinement is essential for high fusion performance. However, undesired perturbed fields in the core region, caused by the resonant magnetic perturbations (RMPs) used for suppression, degrade fast-ion confinement. The machine learning approach minimizes these negative impacts by enabling edge burst suppression with very low RMP amplitudes.

    The enhanced RMP hysteresis and increased plasma rotation observed in experiments are promising for future fusion devices. These effects enable suppression of edge-localized modes (ELMs) with minimal RMP amplitudes, reducing the negative impact on core confinement. This adaptive scheme makes achieving high fusion products in future devices more attainable.

    The team’s method has shown remarkable performance boosts. In the DIII-D tokamak, for instance, the method achieved over a 90% increase in performance from the initial standard ELM-suppressed state. This enhancement isn’t solely due to adaptive RMP control but also to self-consistent plasma rotation evolution.

    The team’s innovative integration of the ML algorithm with RMP control enables fully automated 3D-field optimization and ELM-free operation. This approach is compatible with plasma operation that satisfies ELM suppression conditions. It’s a robust strategy for achieving stable edge burst suppression in long-pulse scenarios.

    Future fusion reactors like ITER face challenges due to metallic walls, which can introduce core instabilities from impurity accumulation. Adaptive control can mitigate these issues by optimizing RMPs to reduce impurity accumulation while preserving high plasma confinement.

    Some features still need enhancement to achieve fully adaptive RMP optimization over entire discharges in future devices. Current strategies rely on detecting ELMs during optimization, which isn’t ideal for fusion reactors. Identifying and responding to ELM precursors in real time is crucial for completely ELM-free optimization.

    Importantly, the breakthrough from Princeton’s machine learning approach lies in its ability to significantly improve fusion reactor performance while controlling edge bursts. This progress will help us move toward practical and economically sustainable fusion energy.

  • Large language models could revolutionize finance sector within two years

    Artificial intelligence and machine learning are set to transform how industries like finance have operated for ages. For hundreds of years, finance has relied on manual data analysis and personal judgment for investment decisions and risk evaluations. Traditional banking, likewise, has always involved human-led customer service and paper-based transaction processing for financial operations like savings accounts, checking accounts, and loans. In this context, new research suggests that Large Language Models (LLMs) could revolutionize finance within the next two years by enhancing efficiency, detecting fraud, offering financial insights, and automating customer service.

    How do LLMs work in finance?

    Large Language Models, like OpenAI’s GPT-4 and IBM’s Granite series, aren’t new. They’re trained on vast datasets to understand and generate natural language, which makes them genuinely useful in finance: they can quickly analyze large volumes of financial data, produce clear text, and help with tasks like fraud detection and customer service.

    LLMs use deep learning, especially the transformer architecture, which is great for handling text. They have layers of neural networks that learn from lots of text data during training. This helps them predict the next word in a sentence based on the context. They’re very useful for tasks like risk assessment and investment research.

    To ensure accuracy and prevent errors, both of which are crucial for regulatory compliance and maintaining a positive reputation, LLMs can be fine-tuned. They’re not just good at understanding language; they can also help with things like code generation and sentiment analysis.
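
    As a concrete taste of the sentiment-analysis use case, the sketch below scores a financial headline with the Hugging Face transformers library. The FinBERT checkpoint (ProsusAI/finbert) is a publicly available model chosen for illustration, not one named in the research.

    ```python
    from transformers import pipeline

    # FinBERT: a BERT model fine-tuned on financial text, labeling sentences
    # as positive, negative, or neutral.
    classifier = pipeline("text-classification", model="ProsusAI/finbert")

    print(classifier("The company beat earnings expectations and raised guidance."))
    # e.g. [{'label': 'positive', 'score': 0.95}]  (exact scores will vary)
    ```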

    In fact, the use of LLMs in finance has the power to change how financial services are carried out on a large scale. They automate tasks like creating financial reports, predicting market trends, and understanding investor sentiment. Professionals in finance already rely on LLMs for jobs such as organizing notes, managing cybersecurity, and ensuring that rules are followed.

    What’s more, LLMs can also take on tasks usually done by people, like investment banking and developing strategies. This not only speeds up work but also encourages innovation in the field.

    Revolutionizing the finance sector in two years?

    Research from the Alan Turing Institute suggests so, predicting that Large Language Models will revolutionize the finance industry within the next couple of years.

    Over half of the workshop participants (52%) are using LLMs to enhance their work in different areas, according to the research. From organizing meeting notes to bolstering cybersecurity and ensuring compliance, these models are proving beneficial. Almost a third of participants (29%) reported using LLMs to sharpen their critical thinking skills, while 16% said they were using them to solve difficult tasks more effectively.

    However, the research has also identified several challenges, particularly concerning compliance with regulations and ensuring the comprehensibility of AI systems. Financial institutions have strict regulations to adhere to, which can be difficult when dealing with complex AI systems. That’s why it’s important for finance professionals, regulators, and policymakers to work together and address these challenges directly, as highlighted by the researchers.

    The Alan Turing Institute’s findings recommend collaboration across the finance sector to develop and share knowledge about implementing and using Large Language Models (LLMs), particularly in relation to safety concerns. The researchers also stress the importance of addressing security and privacy concerns around open-source models, while ensuring adherence to regulatory standards and privacy requirements.


  • Engineering household robots practically incorporating a little common sense

    Household robots are getting better at doing lots of different jobs around the house, like cleaning up messes and serving food. They learn by copying what people do, but sometimes they struggle when things don’t go exactly as planned.

    At MIT, scientists are working on an innovative idea to help robots deal with these unexpected situations better. They’ve come up with a way to mix the robot’s movements with what a smart computer program knows, called a large language model. This helps robots understand how to break tasks down into smaller steps and handle problems without needing a human to fix everything.

    In this collaged image, a robotic hand tries to scoop up red marbles and put them into another bowl while a researcher’s hand frequently disrupts it. The robot eventually succeeds. Image Credit: Jose-Luis Olivares, MIT. Stills courtesy of the researchers

    Yanwei Wang, a student at MIT, says, “We’re teaching robots to fix mistakes on their own, which is a big deal for making them better at their jobs.”

    In a study to be presented at the International Conference on Learning Representations (ICLR), the MIT team shows how this idea works using a simple task: scooping marbles from one bowl to another. Using the language model, they figure out the best steps for the robot to take. Then they teach the robot to recognize these steps and adjust when things go wrong.

    Instead of stopping or starting over, the robot can now keep going even if something doesn’t go as planned. This means robots could become really good at doing tough jobs around the house, even when things get messy.
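
    In spirit, the recovery logic can be sketched like this: a language model supplies an ordered list of subgoals, and a learned classifier grounds the robot’s current observation to whichever subgoal it is actually in, so execution resumes there instead of restarting. The subgoal names and the classifier stub are illustrative assumptions, not the MIT team’s code.

    ```python
    # Subgoals as an LLM might decompose "scoop marbles into the other bowl".
    SUBGOALS = ["reach_bowl", "scoop_marbles", "transport_marbles", "pour_marbles"]

    def classify_stage(observation: dict) -> int:
        """Stub for a learned model mapping an observation to a subgoal index."""
        return observation["stage_estimate"]  # hypothetical perception output

    def remaining_plan(observation: dict) -> list:
        """Resume from the detected subgoal instead of restarting the task."""
        return SUBGOALS[classify_stage(observation):]

    # A human knocks marbles off the spoon mid-transport: the robot re-grounds
    # itself to "scoop_marbles" and replays only the remaining steps.
    print(remaining_plan({"stage_estimate": 1}))
    # -> ['scoop_marbles', 'transport_marbles', 'pour_marbles']
    ```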

    Wang adds, “Our idea can make robots learn to do tricky tasks without needing humans to step in every time something goes wrong. It’s a big step forward for household robots.”

    What if they get as smart as us?

    The idea of household robots getting as smart as humans could be a game-changer. Common sense, the ability to handle everyday situations wisely, is a big part of how humans think. If robots could do this too, it would certainly change a lot of things.

    It’s enough just to think of robots fitting right into our daily lives, understanding what’s going on, and making decisions like we do. They’d know what we need, adjust to new situations, and do things in ways that keep us safe and happy. This wouldn’t just make things run smoother, but it would also make us trust robots more.

    On the other hand, some big questions remain. As robots become more like us, they start to feel less like tools and more like companions, don’t they? That means we’ll expect them to act ethically, just like we do. So we’ll need to make sure they’re programmed to always put people first.

    Any threat from common-sense capable robots?

    Not in the near future, but the risk factor remains high. In fact, emotionally aware robots are just around the corner, according to Wang. He’s confident that these robots will soon be able to fix their mistakes on their own, without humans needing to step in. Wang is excited about the progress being made in training robots using teleoperation data. This data is crucial for their special algorithm, which turns it into advanced behaviors. This means robots can handle tough jobs with ease, even when things get tricky.

    And there’s this factor to consider: jobs. If robots take over a lot of tasks, some people might lose their jobs. That means we’ll need to rethink how we work and acquire new skills to adapt to a reality where robots are integrated into our daily lives.

    In addition, a deadly incident at a vegetable packaging plant in Goseong, South Korea, on November 8, 2023, demonstrated how risky it can be to use industrial robots at work.

    This photo shows the interior of a vegetable packaging plant after a deadly incident involving a worker was reported in Goseong on Nov. 8, 2023. Description/Photo Credit: SOUTH KOREA GYEONGSANGNAM-DO FIRE DEPARTMENT VIA AP

    Even though the robot wasn’t reported to be especially smart or equipped with any kind of common sense, it fatally injured a worker who was checking on it. The event reminds us of the potential threats posed by such innovations and underscores the importance of enforcing strict safety rules when using AI or robots in the workplace.

    References:

    https://news.mit.edu/topic/machine-learning
    https://techxplore.com/news/2024-03-household-robots-common.html
    https://www.sciencedaily.com/releases/2024/03/240325172439.htm
    https://openreview.net/forum?id=qoHeuRAcSl
    https://iclr.cc/
    https://www.messecongress.at/lage/?lang=en

  • Machine ‘unlearning’ helps filter out copyrighted, violent content

    Even in the world of machine learning, forgetting things is just as challenging as it is for us humans. Artificial intelligence programs trying to act like humans have an especially tough time dealing with things like copyrighted material and sensitive content.

    Researchers at The University of Texas at Austin have developed a solution called “machine unlearning” to address the burning issue related to the rapidly growing use (and misuse) of AI. This method is designed specifically for image-based generative AI systems. It helps these systems get rid of copyrighted or violent images without forgetting everything else they’ve learned. They’ve explained their findings in a paper on the arXiv preprint server.

    Machine unlearning in practice

    Machine unlearning is a new way to make a model deliberately forget certain data so it can meet strict rules. So far, though, it has mostly been applied to certain types of models, leaving out others, like generative ones. Companies working on self-driving cars use machine unlearning to get rid of outdated or irrelevant data, which helps them keep improving their driving algorithms and adjust to new situations.

    Hospitals and health tech companies use the system to keep patient information safe. For example, if a patient wants to take their data out of studies or databases, machine unlearning makes sure it’s completely removed, following privacy rules.

    Likewise, streaming services and online platforms use machine unlearning to update user profiles. This means they can change recommendations based on what users like right now, instead of using old information.

    And in setups where machine learning happens across different devices, like the ones used by mobile device makers and app developers, machine unlearning is used to delete data from devices. This way, when a user removes data from their device, it’s also taken out of the overall learning model without having to start over from scratch.

    The researchers’ experiments show that their new method works well even without access to the data the model is supposed to remember, which fits with rules about data retention. According to the researchers, this is the first time anyone has looked deeply into how to unlearn things from generative models that work with images, covering both theory and real-world tests.

    How machine unlearning works

    AI models are trained on large sets of data, and some unwanted material inevitably gets in there too. In the past, the only choice was to start over and carefully remove the problematic content. But the researchers claim that their new method offers a better solution.

    “When these models are trained on vast datasets, it’s inevitable that some undesirable data creeps in. Previously, the only recourse was to start over, painstakingly removing problematic content. Our approach offers a more nuanced solution,” Professor Radu Marculescu from the Cockrell School of Engineering’s Chandra Family Department of Electrical and Computer Engineering, a key figure in this endeavor, said.

    Generative AI relies a lot on internet data, which is huge but also full of copyrighted stuff, private info, and inappropriate content. This was evident in a recent legal fight between The New York Times and OpenAI over using articles without permission to train AI chatbots.

    According to Guihong Li, a graduate research assistant on the project, adding protections against copyright issues and misuse of content in generative AI models is crucial for their commercial success.

    The research has mainly looked at image-to-image models, which change input images based on context. The new machine unlearning algorithm allows these models to get rid of flagged content without needing to start all over again, with human oversight providing an extra layer of supervision.
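
    For intuition only, here is a toy sketch of one common unlearning recipe: raise the loss on a batch of flagged “forget” data while anchoring the model to a “retain” batch, instead of retraining from scratch. It is written for a generic classifier for brevity and is not the UT Austin team’s algorithm, which targets image-to-image generative models.

    ```python
    import torch.nn.functional as F

    def unlearning_step(model, optimizer, forget_batch, retain_batch, alpha=1.0):
        """One toy update: stay accurate on retained data, unlearn flagged data."""
        rx, ry = retain_batch
        fx, fy = forget_batch
        retain_loss = F.cross_entropy(model(rx), ry)   # preserve prior knowledge
        forget_loss = F.cross_entropy(model(fx), fy)   # push this loss UP
        loss = retain_loss - alpha * forget_loss       # gradient ascent on forget set
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()
    ```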

    While machine unlearning has mostly been used in classification models, using it in generative models, especially for image processing, is a new area, as pointed out by the researchers in their paper.

  • Anti-aging pills based on mitochondrial rejuvenation?

    It’s natural to worry about the effects of aging on our bodies and minds. But there’s hope in recent scientific progress, especially in the area of anti-aging. Scientists have been working on pills that target mitochondria, the tiny powerhouses within our cells, to potentially slow down the aging process.

    Understanding aging and mitochondria

    Aging involves a gradual decline in both our physical and mental abilities. One important part of this process is the mitochondria, often called the “powerhouse of the cell.” These tiny structures are in charge of making energy in the form of ATP through a process called oxidative phosphorylation. But as we get older, this process becomes less efficient. That means we make less energy and produce more harmful substances called reactive oxygen species (ROS), which can harm cells. Scientists call this the mitochondrial theory of aging.

    For instance, studies have found that in diseases like Alzheimer’s and Parkinson’s, which are linked to getting older, there’s a big drop in how well mitochondria work. So, figuring out how mitochondria are involved in aging is really important for finding ways to treat age-related diseases.

    The role of mitochondria in aging

    As we age, our mitochondria can start to malfunction. This happens because of things like DNA mutations and increased levels of harmful substances called reactive oxygen species. This damage to mitochondria can speed up the aging process. And our body’s defense system, called the immune system, gradually gets weaker. In scientific terms, this natural aging process is referred to as immunosenescence. One reason for this decline is that mitochondria don’t work well.

    Research has shown that as people age, their immune cells have fewer mitochondria. This means our immune system then might not work as effectively, which can make us more likely to get sick.

    Mitochondrial damage-associated molecular patterns (DAMPs). DAMPs derived from mitochondrial components may be released during cellular injury, apoptosis, or necrosis; once released into the extracellular space, they can activate innate and adaptive immune cells through receptors such as toll-like receptors (TLRs), formyl peptide receptors (FPRs), and purinergic receptors (P2RX7), triggering proinflammatory signaling via NFkB and the NLRP3 inflammasome. Description/Image Credit: National Library of Medicine

    So, taking care of our mitochondria could be really important for keeping our immune system strong as we age. That’s where mitochondrial rejuvenation comes in. By boosting the function of our mitochondria, we might be able to improve our immune response, even as we get older.

    Mitochondrial rejuvenation as an anti-aging strategy

    Mitochondrial rejuvenation is an exciting area of research in anti-aging treatments. The idea behind this is to fix and improve how mitochondria work to fight the effects of getting older.

    Scientists, for example, are looking into certain substances that might boost how mitochondria function. One of these is Nicotinamide Adenine Dinucleotide (NAD+), which is important for making energy in mitochondria. As we get older, our bodies have less NAD+, which means our mitochondria work less effectively. Giving NAD+, or precursors that help the body make it, has been shown to improve mitochondrial function and overall health in older mice.

    Successful NAD+ restoration requires a multitargeted strategy that simultaneously addresses the root causes of NAD+ decline: reducing excessive NAD+ consumption (e.g., through CD38 inhibition and reduction of DNA damage) while improving the efficiency of NAD+ recycling (by upregulating the salvage-pathway enzyme NAMPT and inhibiting NNMT). Description/Image Credit: NLM

    Another approach is to use antioxidants that target mitochondria. These substances can neutralize the harmful chemicals mitochondria produce, which damage cells, and may thereby help slow aging.

    Also, scientists are studying stem cells for mitochondrial rejuvenation.

    Different ways and protective mechanisms of mesenchymal stem cell mitochondrial transfer to damaged cells. Pathways include tunneling nanotube (TNT) formation, release of extracellular vesicles, and mitochondrial extrusion; exosomes may also transfer organelle fragments, mtDNA, and ribosomes. Description Credit: NLM; Image Credit: BioRender.com

    Stem cells can make more of themselves and turn into different types of cells, and their mitochondria seem to work better than those in other cells. So, treatments that involve putting stem cells or their mitochondria into older tissues and organs might make them younger again.

    These are just a few examples of what researchers are looking into to rejuvenate mitochondria. Even though we’re still early in this research, the future of anti-aging treatments looks bright with these ideas.

    The future of anti-aging pills

    Indeed, the outlook for anti-aging treatments focusing on mitochondria appears promising. For example, research indicates that using substances like Coenzyme Q10, an antioxidant supporting mitochondrial function, can enhance the health and lifespan of mice.

    Short-term plasticity in the motor cortex was not affected by age or CoQ10 supplementation: paired pulse ratios in the M1 and M2 regions showed no significant differences by age or with supplementation. Description/Image Credit: nature.com

    Similarly, trials involving humans and supplements like Nicotinamide Riboside, a precursor of NAD+ crucial for mitochondrial energy production, have displayed potential advantages in bolstering mitochondrial health and decelerating aging.

    NR promotes muscle satellite cell differentiation in the twins from the BMI-discordant pairs, as shown by muscle PAX7 expression, PAX7+ satellite cell immunostaining, and myogenic marker ratios before versus after NR. Description/Image Credit: science.org

    In addition, technological advancements, such as the creation of nanocarriers for targeted delivery of drugs to mitochondria, are opening doors to more efficient therapies. These innovations have the potential to refine the precision and effectiveness of anti-aging treatments, offering hope not only for slowing down aging but even potentially reversing it.

    References:

    https://www.nature.com/articles/s41392-023-01343-5
    https://www.nature.com/articles/s43587-022-00340-7
    https://www.nature.com/articles/s41598-023-31510-1
    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6627503/
    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8773271/
    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9512238/
    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9519729/
    https://www.science.org/doi/10.1126/sciadv.add5163
    https://pubs.rsc.org/en/content/articlehtml/2024/ma/d3ma00629h

  • Thought-controlled smart homes for next-gen automation

    The human brain is amazing, with its complex network of nerves sending electrical signals that control everything we do and think. Lately, scientists and tech experts have come up with a really futuristic idea: smart homes that you can control just by using your thoughts. Imagine being able to turn on lights or change the thermostat without lifting a finger – just by thinking about it. This idea used to be something out of a sci-fi movie, but now it’s becoming real, all thanks to some really impressive advances in Brain-Computer Interface (BCI) technology.

    A new study, published by the Multidisciplinary Digital Publishing Institute on July 18, 2023, has introduced a novel method for automating smart homes using signals from the brain. Instead of using fancy gadgets, this method taps into how your brain works when you think about moving your hands. By picking up on these brain signals through electroencephalogram (EEG) readings, researchers have come up with a way to turn them into commands for controlling things in your home.

    How the BCI technology works

    Experimental setup: a personal computer connected to two devices through KNX, i.e., two light bulbs. Description/Image Credit: mdpi.com

    The BCI system uses EEG signals to automate smart homes. It records EEG signals through harmless scalp electrodes; the signals are then amplified, cleaned up, digitized, and preprocessed, including the extraction of important features using wavelet classifiers or Morphological Component Analysis (MCA) to detect blinking. Machine learning programs then analyze these features to figure out what the user wants to do, turning that intent into commands that control household devices through a local server linked to the Internet of Things (IoT). This means even people with limited mobility can interact with their surroundings just by using their brain waves.

    This system mostly depends on motor imagery (MI) signals, where people imagine moving without actually moving their bodies. By using techniques called Regularized Common Spatial Pattern (RCSP) and Linear Discriminant Analysis (LDA), the EEG data linked to these imagined movements are studied and sorted. This allows users to control things just by thinking about them. Essentially, it creates a direct link between the brain and devices, bypassing the need for things like switches or remote controls.
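
    A minimal sketch of that CSP-plus-LDA pipeline is shown below, using the open-source MNE and scikit-learn libraries with random arrays standing in for real EEG epochs; the 14-channel shape matches the Emotiv EPOC X headset used in the study, but everything else is illustrative.

    ```python
    import numpy as np
    from mne.decoding import CSP
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
    from sklearn.pipeline import Pipeline

    # Toy stand-in for preprocessed epochs: 100 trials x 14 channels x 256 samples.
    rng = np.random.default_rng(0)
    X = rng.standard_normal((100, 14, 256))
    y = rng.integers(0, 2, size=100)   # 0 = imagined left hand, 1 = imagined right

    # Regularized CSP spatial filters feed an LDA classifier, mirroring the
    # RCSP + LDA scheme described above.
    clf = Pipeline([
        ("csp", CSP(n_components=4, reg="ledoit_wolf", log=True)),
        ("lda", LinearDiscriminantAnalysis()),
    ])
    clf.fit(X[:80], y[:80])
    print("held-out accuracy:", clf.score(X[80:], y[80:]))  # ~chance on noise
    ```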

    In the study, participants took part in motor imagery (MI) training sessions while their brain activity was recorded using the Emotiv EPOC X headset. They were strictly instructed to stay concentrated and attentive during the MI training session to ensure precise and trustworthy outcomes. They were then asked to imagine moving their left or right hand when shown visual cues on a screen. The brain signals recorded through EEG were then analyzed using various filtering and data processing techniques, including RCSP and LDA, to extract useful information.

    Devices talk to each other in smart homes

    EEG topographical distribution of subject A during the training phase. (a) Fixation cross at 0 s; (b) arrow cue at 2.75 s; (c) MI task starting at 4.25 s; (d) MI task at 5.25 s. Description/Image Credit: mdpi.com

    One interesting aspect of this research is how it combines the Motor Imagery-based Brain-Computer Interface (MI-BCI) system with the KNX protocol, a standard way for devices to talk to each other in smart homes. This combination makes it easy to control things like lights using just your thoughts. The study also shows that it’s possible to control two devices at once and MI-BCI systems could make smart homes more accessible and efficient.

    MI-BCI systems pick up brain signals linked to imagined movements, which are then analyzed to understand what the user wants to do. The technical process involves capturing EEG signals, improving them, and extracting key features using algorithms like RCSP and LDA. These algorithms identify the imagined actions, which are then translated into commands to control devices.

    The KNX protocol, a widely used standard for both commercial and residential building automation, ensures smooth communication between different gadgets. With this setup, users can control multiple devices simultaneously, such as adjusting lights and temperature, just by thinking about it. In the study, the EMOTIV helmet was used to record EEG data, and OpenVibe was employed for signal processing. The results showed that the system effectively controlled two light bulbs, indicating its potential for handling more complex tasks in smart homes.
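
    On the actuation side, a decoded “light on” intention ultimately becomes a KNX group-address write. Here is a minimal sketch using the open-source xknx Python library; the auto-discovered gateway and the group address 1/0/1 are illustrative assumptions, not details from the study.

    ```python
    import asyncio

    from xknx import XKNX
    from xknx.devices import Light

    async def main():
        # Connect to a KNX/IP gateway discovered on the local network.
        async with XKNX() as xknx:
            # Group address "1/0/1" is an illustrative assumption.
            bulb = Light(xknx, name="desk lamp", group_address_switch="1/0/1")
            await bulb.set_on()    # where a decoded MI command would land
            await asyncio.sleep(1)
            await bulb.set_off()

    asyncio.run(main())
    ```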

    Are thought-controlled smart homes a reality?

    The neuroheadset and its spatial configuration. (a) Emotiv EPOC X; (b) electrode configuration. Description/Image Credit: mdpi

    The results from the study show that the new method works well, with participants doing a good job during both practice and testing. The system works reliably in understanding what users want and carrying out their commands.

    While this research is a big step forward for thought-controlled smart home systems, challenges remain, starting with day-to-day variation in brain signals. BCI systems need to get better at adapting to differences in EEG signals from person to person; these signals can vary because of stress, tiredness, or even our natural body rhythms. To tackle these challenges, researchers are looking into adaptive algorithms that adjust to such changes, so the system works well all the time.

    Besides, making the training process easier is important for more people to use these systems. Right now, learning to use BCI systems takes a lot of time, which isn’t practical for many users. So, coming up with simpler ways to train people, maybe using machine learning to speed things up, is a big focus. This might mean making the setup process simpler or making training more fun with games.

    Controlling multiple devices at the same time is also a big goal. Although some progress has been made, it’s still tough for users to manage many appliances smoothly and accurately. Researchers say they are working on smarter BCI setups that can understand more complex commands, letting people control different devices in their smart homes all at once. Being able to control multiple devices together is key to making smart homes easy and efficient to use.

    Final words

    Indeed, there have already been several attempts at developing thought-controlled smart homes. Take, for example, a project led by Eda Akman Aydin at Gazi University in Turkey. Her team built a system in 2015 enabling people to use their thoughts to control various home devices like the TV, lights, and phone. That system relied on an EEG cap to pick up brain signals called P300, which arise when someone intends to take an action. These signals were then translated into commands the smart home gadgets could carry out.

    We are now a decade ahead of Akman Aydin’s project. When thought-controlled smart home systems finally come out of the lab and into real life, they could make life dramatically easier for everyone, from people with disabilities to regular users looking for more convenience. The potential uses are endless and limited only by our imagination.

  • Nvidia’s new text-to-3D model shows the pace of generative AI evolution

    Artificial intelligence is advancing very quickly, and Nvidia’s newest creation, the text-to-3D model LATTE3D, is a perfect example of this rapid progress in AI technology. Following the debut of its powerful Blackwell superchip, designed for training advanced AI models, at the NVIDIA GTC event held from March 18 to 21, Nvidia has introduced LATTE3D, a groundbreaking text-to-3D generative AI model.

    LATTE3D acts like a virtual 3D printer, transforming text prompts into detailed 3D objects and animals within seconds. What sets LATTE3D apart is its remarkable speed and quality. Unlike previous models, LATTE3D can generate intricate 3D shapes almost instantly on a single GPU, such as the NVIDIA RTX A6000, which was used for the NVIDIA Research demo.

    This advancement means creators can now achieve near-real-time text-to-3D generation, revolutionizing how ideas are brought to life, according to Sanja Fidler, Nvidia’s vice president of AI research.

    “A year ago, it took an hour for AI models to generate 3D visuals of this quality – and the current state of the art is now around 10 to 12 seconds. We can now produce results an order of magnitude faster, putting near-real-time text-to-3D generation within reach for creators across industries,” Fidler said.

    The researchers said they trained LATTE3D on specific datasets like animals and everyday objects. But developers have the flexibility to use the same model architecture to train it on other types of data.

    For example, if LATTE3D is taught using a collection of 3D plant images, it could help a landscape designer by swiftly adding trees, bushes, and succulents to a digital garden design while working with a client. Likewise, if it’s trained on household items, the model could create objects to fill virtual home environments, assisting developers in preparing personal assistant robots for real-life tasks.

    To train LATTE3D, NVIDIA used its powerful A100 Tensor Core GPUs. Additionally, the model learned from a variety of text prompts generated by ChatGPT, enhancing its ability to understand different descriptions of 3D objects. For example, it can recognize that prompts about various dog breeds should all result in dog-like shapes.

    3D dogs generated by the Nvidia LATTE3D AI model. Image Credit: Nvidia

    NVIDIA Research comprises hundreds of scientists and engineers worldwide, with teams dedicated to fields including AI, computer graphics, computer vision, self-driving cars, and robotics.

    “This leap is huge. DreamFusion circa 2022 was slow and low quality, but kicked off this generative 3D revolution. Efforts like ATT3D (Amortized Text-to-3D Object Synthesis) chased speed at the cost of quality,” AI creator Bilawal Sidhu wrote on X (formerly Twitter).

  • Why the First Commercial Flying Car Must be Self-Driving

    In the 1980s, flying cars and robot-driven vehicles were two of the most popular futuristic ambitions. These fantasies captured the collective imagination and fueled dreams of a world where technology integrated seamlessly into everyday life, with the sky being the limit. Fast forward to the present day, and while we may not have flying cars buzzing overhead or fully autonomous vehicles dominating the roads just yet, significant strides have been made in both arenas.

    While flying cars have long been a reality (beginning in the 1950s), the prospect of a commercially available flying car has always seemed too challenging. However, as we inch closer to realizing this dream, it’s imperative to consider the implications of introducing such technology into our transportation ecosystem. In particular, the question arises: shouldn’t the first commercial flying car be self-driving?

    At first glance, the idea of a self-driving flying car may seem like a natural progression in our quest for convenience and efficiency. After all, autonomous technology has already begun to revolutionize traditional ground transportation, promising increased safety and reduced congestion.

    One of the main and somewhat debated reasons supporting self-driving flying cars is safety, with opinions differing among people. Some argue that autonomous systems, which are not influenced by human error or bias, could significantly decrease the chances of accidents in the sky. We’ll get deeper into this argument as the article progresses.

    The integration of self-driving technology could also democratize access to flying cars, making them accessible to a wider range of consumers. By eliminating the need for specialized piloting skills, autonomous flying cars could become as ubiquitous as their ground-bound counterparts.

    Surprising as it may seem, though, recent studies indicate that up to 75% of people prefer driving their own vehicles over opting for an autonomous alternative.

    Now, unlike terrestrial vehicles, which operate within well-defined roadways and traffic patterns, flying cars would navigate a vastly more complex and unpredictable environment. Airspace is governed by a multitude of regulations, air traffic control protocols, and safety procedures, all of which would need to be integrated into autonomous systems.

    The consequences of failure in an airborne vehicle are inherently more severe than those on the ground. A malfunction or programming error could have fatal implications not only for the occupants of the flying car but also for those on the ground below. The stakes are undeniably higher when operating in three-dimensional space, requiring a level of reliability and redundancy that far exceeds current automotive standards.

    But again, looking from another, arguably more sensible angle: flying cars operate in a complex and potentially hazardous environment, where precise navigation and rapid decision-making are key. Self-driving technology offers the promise of enhanced safety by mitigating human error and integrating smoothly with existing aviation infrastructure.

    Now, the reason we’re advocating for the first commercial flying car to be self-driving is the necessity of a smooth transition from road traffic to air traffic. The same logic applies to other transitions, like the move from human to robot workers in the workforce. In the case of flying cars, autonomous technology ensures a smoother and more structured integration into the skies. With emotionless AI at the helm, the potential for human error all but vanishes (except for errors in programming), and air traffic becomes more organized, balanced, and, you could say, civilized.

    Despite the inherent challenges, the case for self-driving flying cars remains compelling, albeit with some caveats. Of course, the technology is probably not ready for widespread deployment today. However, ongoing progress in artificial intelligence-driven sensor technology and aviation systems may soon bridge the gap towards the debut of the first commercially available autonomous flying vehicle.

  • Reality Editing is More than VR or AR, Especially with AI

    Reality editing is essentially a spectrum of technologies and approaches aimed at altering or enhancing human perception of reality. It is more than VR (a headset), AR (a headset), and even mixed reality, the visual blend of virtual and real worlds. While VR and AR dominate discussions, these technologies represent only a fraction of what reality editing constitutes. In recent years, especially with the onset of the AI era and the rise of generative AI, research and development efforts have increasingly focused on synergizing AI with reality editing technologies.

    Conventional VR and AR

    VR typically involves wearing a head-mounted display to enter a fully immersive virtual environment, while AR overlays digital content onto the user’s view of the real world using devices like smartphones or smart glasses. These technologies have found real-world applications in education, architecture, healthcare, and many other industries.

    Non-VR aspects and future additions to reality editing

    Here is a basic overview of non-VR facets of reality editing:

    Auditory Reality Editing: Sound plays a crucial role in shaping our perception of reality. Consider noise-canceling headphones—they edit out unwanted sounds, creating a personalized auditory environment. Future applications could involve enhancing natural sounds or even introducing entirely new auditory layers.

    Haptic Reality Editing: Haptic feedback is already a well-established VR integration. Our sense of touch profoundly influences how we perceive the world. Haptic feedback in VR controllers or wearables simulates physical sensations. You can feel the texture of a virtual sculpture or sense the warmth of a digital fireplace.

    Temporal Reality Editing: Time manipulation is a powerful tool. Think about rewinding a video or fast-forwarding through a lecture. In reality editing, we could alter the perception of time. You could relive cherished moments or, conversely, compress hours into minutes during a tedious task.

    Emotional Reality Editing: Emotions color our reality. Can we edit emotions? Perhaps. Future technologies might allow us to adjust emotional states. Imagine dialing down anxiety or enhancing feelings of joy.

    AI in Reality Editing

    The integration of AI into reality editing introduces capabilities beyond VR and AR experiences. AI algorithms can analyze user behavior, adapt content in real-time, generate dynamic narratives, and enhance sensory feedback, thereby creating more engaging and realistic virtual environments.

    Recent Advancements in AI-Enhanced Reality Editing

    AI-Driven Content Generation: Recent research has focused on large language models, such as OpenAI’s GPT-4, to generate lifelike narratives and dialogue within virtual environments. In fact, intelligent NPCs already exist, like NVIDIA ACE’s characters Jin and Nova, who recently talked to each other about their digital reality possibly being an “elaborate cybernetic dream”, all based on NVIDIA’s NeMo LLM, which generates a new conversation each time. Such AI systems can understand and respond to user input, allowing for more interactive storytelling experiences in VR.

    Enhanced Sensory Feedback: AI-powered haptic technology enables more realistic touch sensations in VR environments. The most recent haptic breakthrough, published in Nature Electronics, is a skin-integrated multimodal haptic interface for immersive tactile feedback. By integrating AI algorithms with haptic devices, developers can simulate textures, forces, and vibrations, enhancing the sense of presence and immersion for users.

    Neuroadaptive Interfaces: Research into brain-computer interfaces (BCIs) aims to directly interpret neural signals and translate them into actions within virtual environments. BCIs offer the potential for lifelike interaction and control in VR and AR applications by bypassing traditional input devices.

    Emotion Recognition: AI algorithms can analyze facial expressions, voice intonations, and physiological signals to infer users’ emotions in real time. With emotion recognition capabilities, developers can customize user experiences and even evoke specific emotional responses to enhance user engagement.
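
    As a small illustration of the text side of emotion recognition, the sketch below uses a publicly available emotion classifier from the Hugging Face hub; the specific checkpoint is an assumption for demonstration, not one named in the article.

    ```python
    from transformers import pipeline

    # A DistilRoBERTa checkpoint fine-tuned for emotion classification
    # (anger, disgust, fear, joy, neutral, sadness, surprise).
    detector = pipeline("text-classification",
                        model="j-hartmann/emotion-english-distilroberta-base")

    print(detector("I can't believe how beautiful this virtual world is!"))
    # e.g. [{'label': 'joy', 'score': 0.97}]  (exact scores will vary)
    ```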

    Real-time Adaptation: AI algorithms are being developed to analyze user interactions and adapt virtual scenarios in real time by tracking user behavior and preferences. This is already being employed in the digital characters of “The Matrix Awakens”.

    Dynamic Object Interactions: Reinforcement learning algorithms allow virtual agents and objects to display behaviors that feel more natural and react intelligently to user input. This makes experiences that are not only more immersive but also more interactive achievable.

    Cross-reality Collaboration: AI enables collaboration between virtual and physical spaces, supporting applications such as mixed reality and remote assistance. By integrating AI-powered communication and interaction tools, users can interact with virtual objects and remote participants as if they were physically present, as in platforms like Nvidia’s Omniverse.

    Future Directions

    The convergence of AI with reality editing is expected to drive further innovation and transformation across various industries. Future research directions may include:

    • Advancing AI algorithms for more sophisticated content generation and interaction in virtual environments.
    • Exploring new modalities for immersive sensory feedback, such as olfactory and gustatory stimuli.
    • Enhancing AI-powered virtual assistants and agents to provide personalized guidance and support within VR and AR applications.
    • Investigating the potential of AI-driven predictive analytics to anticipate user preferences and adapt virtual experiences proactively.