Month: March 2024

  • Large language models could revolutionize finance sector within two years


Artificial intelligence and machine learning are poised to transform how long-established industries such as finance operate. For centuries, finance has relied on manual data analysis and personal judgment for investment decisions and risk evaluations. Traditional banking, likewise, has long depended on human-led customer service and paper-based transaction processing for operations such as savings accounts, checking accounts, and loans. Against this backdrop, new research suggests that Large Language Models (LLMs) could revolutionize finance within the next two years by enhancing efficiency, detecting fraud, offering financial insights, and automating customer service.

How do LLMs work in finance?

Large Language Models, like OpenAI’s GPT-4 and IBM’s Granite series, aren’t new. They’re trained on massive datasets to understand and generate natural language. In finance, this makes them useful: they can quickly analyze large volumes of financial data, produce clear text, and assist with tasks like fraud detection and customer service.


    LLMs use deep learning, especially the transformer architecture, which is great for handling text. They have layers of neural networks that learn from lots of text data during training. This helps them predict the next word in a sentence based on the context. They’re very useful for tasks like risk assessment and investment research.
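At its core, next-word prediction is just scoring candidate words and normalizing those scores into probabilities. A toy Python sketch of that final step (the vocabulary and scores here are made up for illustration, not taken from any real model):

```python
import math

def softmax(logits):
    """Convert raw model scores into a probability distribution."""
    m = max(logits.values())  # subtract max for numerical stability
    exps = {w: math.exp(s - m) for w, s in logits.items()}
    total = sum(exps.values())
    return {w: e / total for w, e in exps.items()}

# Hypothetical scores a trained model might assign after the context
# "quarterly revenue is expected to ..."
logits = {"rise": 2.1, "fall": 1.3, "banana": -3.0}
probs = softmax(logits)
next_word = max(probs, key=probs.get)  # the model's top prediction
```

A real transformer produces such scores for tens of thousands of tokens at every position, conditioned on the whole preceding context.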

    To ensure accuracy and prevent errors, both of which are crucial for regulatory compliance and maintaining a positive reputation, LLMs can be fine-tuned. They’re not just good at understanding language; they can also help with things like code generation and sentiment analysis.
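To get a feel for the sentiment-analysis task, here is a deliberately naive Python sketch using hand-picked keyword lists. A fine-tuned LLM learns far richer signals from data, but the input-to-label shape of the task is the same; the word lists and headlines below are invented for illustration:

```python
# Hypothetical word lists; a fine-tuned LLM would learn these signals from data.
POSITIVE = {"beat", "growth", "upgrade", "record", "profit"}
NEGATIVE = {"miss", "downgrade", "loss", "fraud", "default"}

def headline_sentiment(headline: str) -> str:
    """Label a financial headline by counting positive vs. negative cue words."""
    words = {w.strip(".,!?").lower() for w in headline.split()}
    score = len(words & POSITIVE) - len(words & NEGATIVE)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

In practice such labels feed downstream systems, e.g. aggregating sentiment across thousands of headlines to gauge market mood.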

    In fact, the use of LLMs in finance has the power to change how financial services are carried out on a large scale. They automate tasks like creating financial reports, predicting market trends, and understanding investor sentiment. Professionals in finance have already depended on LLMs for jobs such as organizing notes, managing cybersecurity, and ensuring that rules are followed.

    What’s more, LLMs can also take on tasks usually done by people, like investment banking and developing strategies. This not only speeds up work but also encourages innovation in the field.

    Revolutionizing finance sector in two years?

Quite possibly. Research from the Alan Turing Institute predicts that Large Language Models will revolutionize the finance industry within the next couple of years.

    Over half of the workshop participants (52%) are using LLMs to enhance their work in different areas, according to the research. From organizing meeting notes to bolstering cybersecurity and ensuring compliance, these models are proving beneficial. Almost a third of participants (29%) reported using LLMs to sharpen their critical thinking skills, while 16% said they were using them to solve difficult tasks more effectively.

    However, the research has also identified several challenges, particularly concerning compliance with regulations and ensuring the comprehensibility of AI systems. Financial institutions have strict regulations to adhere to, which can be difficult when dealing with complex AI systems. That’s why it’s important for finance professionals, regulators, and policymakers to work together and address these challenges directly, as highlighted by the researchers.

The Alan Turing Institute’s findings recommend collaboration across the finance sector to develop and share knowledge about implementing and using Large Language Models (LLMs), particularly in relation to safety concerns. The researchers also stress the importance of addressing security and privacy concerns with open-source models, while ensuring adherence to regulatory standards and privacy requirements.



  • Researchers use AI to make Belgian beer taste better


Researchers at KU Leuven University have started a project to make Belgian beer even better using artificial intelligence. Led by Prof. Kevin Verstrepen, they are studying how we perceive flavor to understand the different aroma compounds in beer.

    The researchers experimented with 250 Belgian beers of different kinds to see what makes them taste the way they do. They checked things like how much alcohol they have, how acidic they are, and what flavors are in them.

“Tiny changes in the concentrations of chemicals can have a big impact, especially when multiple components start changing,” said Prof. Kevin Verstrepen of KU Leuven University, who led the research.

    Then they asked a group of 16 people to taste the beers and describe how they tasted. This took three years! At the same time, they collected 180,000 reviews of different beers from the online consumer review platform RateBeer and looked at what people said about these beers online.

    The researchers used all this information to make computer programs that can predict how a beer will taste based on what’s in it. They then used these predictions to make existing beers even better by adding certain flavors.
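The core of such a prediction model can be illustrated with a toy example: fitting a line from a single chemical concentration to a taste rating. The numbers below are invented for illustration only; the actual study measured hundreds of chemical properties and used far more sophisticated machine-learning models:

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b (one chemical feature)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Made-up data: ester concentration (mg/L) vs. average taster rating.
esters  = [1.0, 2.0, 3.0, 4.0]
ratings = [5.0, 6.1, 6.9, 8.0]
a, b = fit_line(esters, ratings)

# Rating predicted for a tweaked recipe with 2.5 mg/L of esters.
predicted = a * 2.5 + b
```

The real models invert this logic too: given a target flavor profile, they suggest which chemical concentrations to adjust, and brewers then work out how to achieve that in the recipe.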

    “The AI models predict the chemical changes that could optimise a beer, but it is still up to brewers to make that happen starting from the recipe and brewing methods,” Prof. Verstrepen said.

    The results were amazing. People liked the beers even more, saying they tasted sweeter and fuller. But even with this new technology, Verstrepen reminds us that the skill of the brewers is still very important.

    AI’s changing a lot of things. But in the context of beer, it’s combining old traditions with new ideas to ensure Belgian beer stays as great as ever.

  • Engineering household robots practically incorporating a little common sense


    Household robots are getting better at doing lots of different jobs around the house, like cleaning up messes and serving food. They learn by copying what people do, but sometimes they struggle when things don’t go exactly as planned.

    At MIT, scientists are working on an innovative idea to help robots deal with these unexpected situations better. They’ve come up with a way to mix the robot’s movements with what a smart computer program knows, called a large language model. This helps robots understand how to break tasks down into smaller steps and handle problems without needing a human to fix everything.

    In this collaged image, a robotic hand tries to scoop up red marbles and put them into another bowl while a researcher’s hand frequently disrupts it. The robot eventually succeeds. Image Credit: Jose-Luis Olivares, MIT. Stills courtesy of the researchers

    Yanwei Wang, a student at MIT, says, “We’re teaching robots to fix mistakes on their own, which is a big deal for making them better at their jobs.”

    In a study they’re going to talk about at a conference, the MIT team shows how this idea works using a simple task: scooping marbles from one bowl to another. By using the smart computer program, they can figure out the best steps for the robot to take. Then, they teach the robot how to recognize these steps and adjust if things go wrong.
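The recover-and-resume idea can be sketched in a few lines of Python. The step classifier here is a hand-written stand-in for the learned one in the MIT work, and the plan steps and state fields are invented for illustration:

```python
# Toy sketch: the task is a step sequence, a (hypothetical) classifier maps
# the observed world state back to a plan step, and execution resumes there
# instead of restarting from the beginning.
PLAN = ["reach_bowl", "scoop_marbles", "move_to_target", "pour"]

def classify_step(state: dict) -> int:
    """Stand-in for the learned step classifier: which step are we in?"""
    if not state["holding_scoop"]:
        return 0
    if state["marbles_in_scoop"] == 0:
        return 1  # scoop was knocked empty: go back and re-scoop
    if not state["over_target"]:
        return 2
    return 3

def run(state: dict) -> list:
    """Resume the plan from whatever step the current state indicates."""
    step = classify_step(state)
    return list(PLAN[step:])

# A disturbance emptied the scoop mid-task: the robot re-scoops and continues.
state = {"holding_scoop": True, "marbles_in_scoop": 0, "over_target": False}
remaining = run(state)
```

In the actual system the "classifier" is trained from demonstrations and the steps are segmented automatically, but the control flow, detect where you are, then continue from there, is the same.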

    Instead of stopping or starting over, the robot can now keep going even if something doesn’t go as planned. This means robots could become really good at doing tough jobs around the house, even when things get messy.

    Wang adds, “Our idea can make robots learn to do tricky tasks without needing humans to step in every time something goes wrong. It’s a big step forward for household robots.”

What if they get as smart as us?


    The idea of household robots getting as smart as humans could be a game-changer. Common sense, the ability to handle everyday situations wisely, is a big part of how humans think. If robots could do this too, it would certainly change a lot of things.

    It’s enough just to think of robots fitting right into our daily lives, understanding what’s going on, and making decisions like we do. They’d know what we need, adjust to new situations, and do things in ways that keep us safe and happy. This wouldn’t just make things run smoother, but it would also make us trust robots more.

On the other hand, some big questions remain. As robots become more like us, they start to feel less like tools and more like companions, don’t they? That means we’ll expect them to act ethically, just like we do. So, we’ll need to make sure they’re programmed to always put people first.

    Any threat from common-sense capable robots?

Not in the near future, though the risks shouldn’t be dismissed. In fact, emotionally aware robots are just around the corner, according to Wang. He’s confident that these robots will soon be able to fix their mistakes on their own, without humans needing to step in. Wang is also excited about the progress being made in training robots using teleoperation data, which their algorithm turns into advanced behaviors, letting robots handle tough jobs with ease even when things get tricky.

    And there’s this factor to consider: jobs. If robots take over a lot of tasks, some people might lose their jobs. That means we’ll need to rethink how we work and acquire new skills to adapt to a reality where robots are integrated into our daily lives.

    In addition, the sad incident at a vegetable packaging plant in Goseong, South Korea on November 8, 2023, has demonstrated how risky it can be to use industrial robots at work.

    This photo shows the interior of a vegetable packaging plant after a deadly incident involving a worker was reported in Goseong on Nov. 8, 2023. Description/Photo Credit: SOUTH KOREA GYEONGSANGNAM-DO FIRE DEPARTMENT VIA AP

Even though the robot wasn’t reported to have any advanced intelligence or common-sense capability, it fatally injured a worker who was checking on it. The event is a reminder of the potential threats posed by such innovations and underscores the importance of enforcing strict safety rules when using AI or robots in the workplace.

References:

    https://news.mit.edu/topic/machine-learning
    https://techxplore.com/news/2024-03-household-robots-common.html
    https://www.sciencedaily.com/releases/2024/03/240325172439.htm
    https://openreview.net/forum?id=qoHeuRAcSl
    https://iclr.cc/
    https://www.messecongress.at/lage/?lang=en

  • Machine ‘unlearning’ helps filter out copyrighted, violent content


Even in the world of machine learning, forgetting is just as challenging as it is for us humans. Artificial intelligence programs built to act like humans have an especially hard time shedding things like copyrighted material and sensitive content.

    Researchers at The University of Texas at Austin have developed a solution called “machine unlearning” to address the burning issue related to the rapidly growing use (and misuse) of AI. This method is designed specifically for image-based generative AI systems. It helps these systems get rid of copyrighted or violent images without forgetting everything else they’ve learned. They’ve explained their findings in a paper on the arXiv preprint server.

    Machine unlearning in practice

Machine unlearning is a new way to make a model deliberately forget certain data, often to meet strict rules. So far, though, it has mostly been applied to certain types of models, leaving out others like generative ones. Companies working on self-driving cars use machine unlearning to discard outdated or irrelevant data, helping them keep improving their driving algorithms and adapt to new situations.

    Hospitals and health tech companies use the system to keep patient information safe. For example, if a patient wants to take their data out of studies or databases, machine unlearning makes sure it’s completely removed, following privacy rules.

    Likewise, streaming services and online platforms use machine unlearning to update user profiles. This means they can change recommendations based on what users like right now, instead of using old information.

    And in setups where machine learning happens across different devices, like the ones used by mobile device makers and app developers, machine unlearning is used to delete data from devices. This way, when a user removes data from their device, it’s also taken out of the overall learning model without having to start over from scratch.
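The principle behind all of these uses, removing one data point’s contribution without retraining from scratch, can be shown with a deliberately tiny “model” whose training is just a running mean. Real unlearning for generative models is far more involved; this only illustrates the idea:

```python
class RunningMean:
    """Minimal 'model' whose training is a mean. Deleting a point is exact
    unlearning: subtract its contribution instead of retraining on the rest."""
    def __init__(self):
        self.total = 0.0
        self.n = 0

    def learn(self, x: float) -> None:
        self.total += x
        self.n += 1

    def unlearn(self, x: float) -> None:
        """Remove one previously learned point's contribution."""
        self.total -= x
        self.n -= 1

    @property
    def value(self) -> float:
        return self.total / self.n

m = RunningMean()
for x in [2.0, 4.0, 9.0]:
    m.learn(x)

m.unlearn(9.0)  # a user deleted their data point; no retraining needed
```

For deep generative models there is no such exact shortcut, which is why approximate unlearning algorithms like the UT Austin method are an active research area.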

The researchers’ experiments show that their new method works well even without access to the data the model is supposed to retain, which fits with rules about keeping data. According to them, this is the first deep investigation of how to unlearn things from generative models that work with images, covering both theory and real-world tests.

    How machine unlearning works

AI models are trained on large datasets, and some unwanted content inevitably gets in too. In the past, the only choice was to start over and painstakingly remove the problematic material. But the researchers say their new method offers a better solution.

    “When these models are trained on vast datasets, it’s inevitable that some undesirable data creeps in. Previously, the only recourse was to start over, painstakingly removing problematic content. Our approach offers a more nuanced solution,” Professor Radu Marculescu from the Cockrell School of Engineering’s Chandra Family Department of Electrical and Computer Engineering, a key figure in this endeavor, said.

    Generative AI relies a lot on internet data, which is huge but also full of copyrighted stuff, private info, and inappropriate content. This was evident in a recent legal fight between The New York Times and OpenAI over using articles without permission to train AI chatbots.

    According to Guihong Li, a graduate research assistant on the project, adding protections against copyright issues and misuse of content in generative AI models is crucial for their commercial success.

    The research has mainly looked at image-to-image models, which change input images based on context. The new machine unlearning algorithm allows these models to get rid of flagged content without needing to start all over again, with human oversight providing an extra layer of supervision.

    While machine unlearning has mostly been used in classification models, using it in generative models, especially for image processing, is a new area, as pointed out by the researchers in their paper.

  • Researchers introduce RAmBLA as a holistic approach to evaluating biomedical language models


    In today’s advanced tech world, Large Language Models such as GPT-4 and LLaMA 2 lead the way in understanding complex medical terms. They give clear insights and provide accurate info based on evidence. These models are important for medical decisions, so it’s vital they’re reliable and precise. But as they’re used more in medicine, there’s a challenge: making sure they can handle tricky biomedical data without mistakes.

    To tackle this, we need a new way to evaluate them. Traditional methods focus on specific tasks, like spotting drug interactions, which isn’t enough for the wide-ranging needs of biomedical queries. Biomedical questions often involve pulling together lots of data and giving context-appropriate responses, so we need a more detailed evaluation.

That’s where RAmBLA (Reliability AssessMent for Biomedical LLM Assistants) comes in. Developed by researchers from Imperial College London and GSK.ai, RAmBLA aims to thoroughly check how dependable BLMs are in the biomedical field. It looks at important factors for real-world use, like handling different types of input, recalling info accurately, and giving responses that are correct and relevant. This all-around evaluation is a big step forward in making sure BLMs can be trusted helpers in biomedical research and healthcare, according to the researchers.

    “. . . we believe the aspects of LLM reliability highlighted in RAmBLA may serve as a useful starting point for developing applications for such use-cases,” the paper, authored by researchers including William James Bolton from Imperial College London, reads.

    Screenshot Credit: arXiv.org

    What makes RAmBLA special is how it simulates real-life biomedical research situations to test BLMs. It gives them tasks that mimic the challenges of actual biomedical work, from understanding complex prompts to summarizing medical studies accurately. One key focus of RAmBLA’s testing is to reduce “hallucinations,” where models give believable but wrong info – a crucial thing to get right in medical settings.

    The study showed that bigger BLMs generally perform better across different tasks, especially in understanding similar meanings in biomedical questions. For example, GPT-4 scored an impressive 0.952 accuracy in answering open-ended biomedical questions. However, the study also found areas needing improvement, like reducing hallucinations and improving recall accuracy.

    “If they have insufficient knowledge or context information to answer a question, LLMs should refuse to answer,” the study report claims.

    Interestingly, the researchers discovered that bigger models were skilled at recognizing when to avoid answering irrelevant questions. On the other hand, smaller ones like Llama and Mistral struggled more, suggesting they require additional adjustments for better performance.
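A minimal evaluation harness in this spirit might score both answer accuracy and the refusal rate on unanswerable questions. Everything below (the toy model, the questions, the refusal string) is invented for illustration and is not part of RAmBLA itself:

```python
REFUSAL = "I don't know"

def evaluate(model, cases):
    """Score accuracy on answerable questions and refusal on unanswerable ones.

    `cases` is a list of (question, expected) pairs; expected=None marks a
    question the model *should* refuse to answer.
    """
    correct = refused = should_refuse = 0
    for question, expected in cases:
        answer = model(question)
        if expected is None:
            should_refuse += 1
            refused += (answer == REFUSAL)
        else:
            correct += (answer == expected)
    answerable = len(cases) - should_refuse
    return correct / answerable, refused / should_refuse

def toy_model(q):
    """Stand-in 'model': a lookup table that refuses everything else."""
    kb = {"What does ACE inhibit?": "angiotensin-converting enzyme"}
    return kb.get(q, REFUSAL)

cases = [
    ("What does ACE inhibit?", "angiotensin-converting enzyme"),
    ("What is the capital of Mars?", None),  # should trigger a refusal
]
acc, refusal_rate = evaluate(toy_model, cases)
```

A real harness would use fuzzy answer matching and hundreds of curated biomedical questions, but the two metrics, accuracy and appropriate refusal, mirror the behaviors the study measured.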

  • Anti-aging pills based on mitochondrial rejuvenation?


    It’s natural to worry about the effects of aging on our bodies and minds. But there’s hope in recent scientific progress, especially in the area of anti-aging. Scientists have been working on pills that target mitochondria, the tiny powerhouses within our cells, to potentially slow down the aging process.

    Understanding aging and mitochondria


    Aging involves a gradual decline in both our physical and mental abilities. One important part of this process is the mitochondria, often called the “powerhouse of the cell.” These tiny structures are in charge of making energy in the form of ATP through a process called oxidative phosphorylation. But as we get older, this process becomes less efficient. That means we make less energy and produce more harmful substances called reactive oxygen species (ROS), which can harm cells. Scientists call this the mitochondrial theory of aging.

    For instance, studies have found that in diseases like Alzheimer’s and Parkinson’s, which are linked to getting older, there’s a big drop in how well mitochondria work. So, figuring out how mitochondria are involved in aging is really important for finding ways to treat age-related diseases.

    The role of mitochondria in aging

As we age, our mitochondria can start to malfunction because of things like DNA mutations and rising levels of harmful reactive oxygen species. This damage can speed up the aging process. Meanwhile, our body’s defense system, the immune system, gradually weakens, a natural decline scientists call immunosenescence, and poorly functioning mitochondria are one reason for it.

    Research has shown that as people age, their immune cells have fewer mitochondria. This means our immune system then might not work as effectively, which can make us more likely to get sick.

    Mitochondrial damage associated molecular patterns (DAMPs). DAMPs derived from mitochondrial components may be released during cellular injury, apoptosis or necrosis. Once these mitochondrial components are released into the extracellular space, they can lead to the activation of innate and adaptive immune cells. The recognition of mitochondrial DAMPs involves toll-like receptors (TLR), formyl peptide receptors (FPR) and purigenic receptors (P2RX7). By binding their cognate ligands or by direct interaction (i.e., reactive oxygen species, ROS), intracellular signaling pathways such as NFkB and the NLRP3 inflammasome become activated resulting in a proinflammatory response. TLR4 = toll-like receptor 4, TLR9 = toll-like receptor 9, P2RX7 = purigenic receptor, FPR1 = formyl peptide receptor 1, NLRP3 = NLR Family Pyrin Domain Containing 3, fMet = N-formylmethionine, mtROS = mitochondrial reactive oxygen species, mtDNA = mitochondrial DNA, Tfam = transcription factor A, mitochondrial, RAGE = receptors for advanced glycation end-products, NFkB = nuclear factor kappa-light-chain-enhancer of activated B cells. Description/Image Credit: National Library of Medicine

    So, taking care of our mitochondria could be really important for keeping our immune system strong as we age. That’s where mitochondrial rejuvenation comes in. By boosting the function of our mitochondria, we might be able to improve our immune response, even as we get older.

    Mitochondrial rejuvenation as an anti-aging strategy

    Mitochondrial rejuvenation is an exciting area of research in anti-aging treatments. The idea behind this is to fix and improve how mitochondria work to fight the effects of getting older.

Scientists, for example, are looking into substances that might boost mitochondrial function. One of these is Nicotinamide Adenine Dinucleotide (NAD+), which is important for energy production in mitochondria. As we get older, our bodies have less NAD+, so our mitochondria work less effectively. Supplying NAD+ or its precursors has been shown to improve mitochondrial function and overall health in older mice.

    Successful NAD+ restoration requires a multitargeted strategy that simultaneously addresses the root causes of NAD+ decline. Therapies must reduce the excessive consumption of NAD+ with approaches such as CD38 inhibition and reduction of DNA damage, while improving the efficiency of NAD+ recycling by promoting upregulation of the rate-limiting salvage pathway enzyme NAMPT and inhibition of NNMT, an enzyme that promotes the removal of NAD+ breakdown products from the cell rather than recycling. Description/Image Credit: NLM

Another approach is antioxidants that target mitochondria. These substances neutralize the harmful reactive species mitochondria produce, protecting cells and potentially slowing aging.

    Also, scientists are studying stem cells for mitochondrial rejuvenation.

    Different ways and protective mechanisms of mesenchymal stem cell mitochondrial transfer to damaged cells. Pathways by which healthy mitochondria are transferred from stem cells to mitochondrial dysfunction receptor cells include TNT formation, release of extracellular vesicles, and mitochondrial extrusion. Exosomes may transfer organelle fragments (such as protein complexes of mitochondrial electron transfer chains), mtDNA and ribosomes. Miro, mitochondrial Rho-GTPase1; TNT, tunneling nanotube; Drp 1, dynamin-related protein 1. Description Credit: NLM; Image Credit: BioRender.com

    Stem cells can make more of themselves and turn into different types of cells, and their mitochondria seem to work better than those in other cells. So, treatments that involve putting stem cells or their mitochondria into older tissues and organs might make them younger again.

    These are just a few examples of what researchers are looking into to rejuvenate mitochondria. Even though we’re still early in this research, the future of anti-aging treatments looks bright with these ideas.

    The future of anti-aging pills

    Indeed, the outlook for anti-aging treatments focusing on mitochondria appears promising. For example, research indicates that using substances like Coenzyme Q10, an antioxidant supporting mitochondrial function, can enhance the health and lifespan of mice.

    Short-term plasticity in the motor cortex was not affected by age or CoQ10 supplementation. (a) Paired pulse ratios (PPRs) at various stimulus intervals (25, 50, 100, 200, and 500 ms) were comparable in the M1 region (young adult, n = 14 slices from 5 mice; middle-aged, n = 27 slices from 9 mice) and the M2 region (young adult, n = 9 slices from 4 mice; middle-aged, n = 24 slices from 10 mice; no significant difference by age). (b) CoQ10 supplementation did not alter the PPRs in the M1 or M2 regions of middle-aged mice compared to those of the age-matched controls (M1: middle-aged + CoQ10, n = 8 slices from 4 mice; middle-aged, n = 27 slices from 9 mice; M2: middle-aged + CoQ10, n = 12 slices from 5 mice; middle-aged, n = 24 slices from 10 mice; no significant differences with supplementation). The middle-aged control data in (b) are identical to those in (a). Values are expressed as the mean ± SEM of independent experimental groups. Statistical analyses were performed using two-way repeated-measures ANOVA. Description/Image Credit: nature.com

    Similarly, trials involving humans and supplements like Nicotinamide Riboside, a precursor of NAD+ crucial for mitochondrial energy production, have displayed potential advantages in bolstering mitochondrial health and decelerating aging.

    NR promotes muscle satellite cell differentiation in the twins from the BMI-discordant pairs. (A) Muscle gene expression level of satellite cell marker PAX7 before versus after NR (n = 9 twin pairs/18 individuals). (B) Immunostaining of PAX7+ satellite cells in muscle cryosections before versus after NR in one representative study participant. PAX7 (red, satellite cells); Hoechst (blue, nuclei). Scale bars, 10 μm. (C) Muscle PAX7+ satellite cell quantification before versus after NR (n = 10 twin pairs/20 individuals). (D to G) Ratios of PAX7/MYOG (D), PAX3/MYOG (E), MYF5/MYOD (F), and MYMK/MYOD (G) mRNA expression in myoblasts before versus after NR (n = 3 twin pairs/6 individuals). Y axis is on a logarithmic scale. PAX3, paired box 3; MYF5, myogenic factor 5. Lines connect the pre- and post-values of each individual, with black denoting the leaner and red denoting the heavier cotwins. Fold change indicates the mean of the post-NR value divided by the pre-NR value. P values were calculated using paired Wilcoxon signed-rank test. Description/Image Credit: science.org

    In addition, technological advancements, such as the creation of nanocarriers for targeted delivery of drugs to mitochondria, are opening doors to more efficient therapies. These innovations have the potential to refine the precision and effectiveness of anti-aging treatments, offering hope not only for slowing down aging but even potentially reversing it.

    References:

    https://www.nature.com/articles/s41392-023-01343-5
    https://www.nature.com/articles/s43587-022-00340-7
    https://www.nature.com/articles/s41598-023-31510-1
    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6627503/
    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC8773271/
    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9512238/
    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC9519729/
    https://www.science.org/doi/10.1126/sciadv.add5163
    https://pubs.rsc.org/en/content/articlehtml/2024/ma/d3ma00629h

  • Smart tattoos that monitor health metrics and vital signs


    Tattoo technology has evolved beyond mere self-expression, as it is increasingly recognized as a reliable tool for tracking health metrics and vital signs.

    Smart tattoos are becoming more advanced in tracking health signs, studies show. MIT scientists have made ink for smart tattoos that contains tiny particles. These particles can detect changes in pH levels, which can signal issues like dehydration or metabolic disorders. Also, new flexible electronics allow for smart tattoos that monitor muscle activity, helping with physical performance and recovery. These developments highlight how smart tattoos could change healthcare by monitoring vital health signs easily and without being invasive.

    These tattoos, acting like miniature labs on the skin, offer real-time insights into important health indicators like blood pressure, glucose levels, and hydration. By integrating biosensors and special materials, they combine biomedical science with tattoo artistry to revolutionize health monitoring.

    At his Imperial College London lab, Ali Yetisen demonstrates a stamp on his arm created using tattoo ink that glows under certain light. Description/Photo Credit: CNN

    The idea of a “Lab on the skin” allows for painless monitoring of bodily functions without invasive medical devices. Pioneers like Ali Yetisen from Imperial College London have developed smart tattoos such as the DermalAbyss, which change color in response to changes in bodily fluids, providing continuous updates on pH levels, sodium, and glucose.
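The sensing idea reduces to mapping a measured value onto the color a reactive ink displays. A hedged Python sketch of that mapping, with made-up thresholds that are illustrative only and not the actual DermalAbyss chemistry:

```python
def ink_color(ph: float) -> str:
    """Map a sensed pH to the color a (hypothetical) reactive ink shows.
    Thresholds are invented for illustration, not real ink chemistry."""
    if ph < 6.5:
        return "purple"  # acidic: e.g., a possible dehydration signal
    if ph <= 7.5:
        return "blue"    # within a normal interstitial-fluid range
    return "pink"        # alkaline
```

In the real tattoo this mapping happens chemically, with no electronics at all, which is exactly what makes the approach battery-free.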

    Furthermore, smart tattoos can address external health factors like UV exposure, as shown in Carson Bruns’s research on “solar freckles,” potentially aiding in skin cancer prevention. They also offer promise in cancer treatment by providing a less intrusive alternative to traditional radiation markers.

    Carson Bruns (Photo Credit: colorado.edu)

    In 2020, Carson Bruns, an assistant professor of mechanical engineering at the University of Colorado Boulder, contributed to a team that created the “solar freckle,” a light-sensitive tattoo. It appears in sunlight, signaling excessive UV exposure, and fades when sunscreen is applied or when out of sunlight. Bruns has received a prestigious National Science Foundation CAREER Award for research that investigates how the art of tattooing can incorporate the latest advances in nanotechnology to improve human health.

Smart tattoos have big potential applications, from personalized healthcare to space exploration. Human trials are ongoing, indicating progress toward regulatory approval and widespread use. Compared with wearable devices, smart tattoos offer unmatched convenience and permanence, need no batteries, and present a far smaller attack surface for hackers.

  • Thought-controlled smart homes for next-gen automation


    The human brain is amazing, with its complex network of nerves sending electrical signals that control everything we do and think. Lately, scientists and tech experts have come up with a really futuristic idea: smart homes that you can control just by using your thoughts. Imagine being able to turn on lights or change the thermostat without lifting a finger – just by thinking about it. This idea used to be something out of a sci-fi movie, but now it’s becoming real, all thanks to some really impressive advances in Brain-Computer Interface (BCI) technology.

    A new study, published by the Multidisciplinary Digital Publishing Institute on July 18, 2023, has introduced a novel method for automating smart homes using signals from the brain. Instead of using fancy gadgets, this method taps into how your brain works when you think about moving your hands. By picking up on these brain signals through electroencephalogram (EEG) readings, researchers have come up with a way to turn them into commands for controlling things in your home.

    How the BCI technology works

    Experimental setup: a personal computer connected to two devices through KNX, i.e., two light bulbs. Description/Image Credit: mdpi.com

    The BCI system uses EEG signals to automate smart homes. It records EEG signals through harmless scalp electrodes, and the signals are then amplified and filtered to remove noise. The signals are digitized and preprocessed, including feature extraction using wavelet-based methods and Morphological Component Analysis (MCA) to detect eye blinks. Machine learning algorithms then analyze these features to figure out what the user wants to do, turning it into commands that control household devices through a local server linked to the Internet of Things (IoT). This means even people with limited mobility can interact with their surroundings just by using their brain waves.
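    To make the “cleaned up” step concrete, here is a minimal NumPy sketch of band-pass filtering a noisy signal down to the 8–12 Hz mu band that motor-imagery BCIs typically analyze. The sampling rate, the synthetic signal, and the brick-wall FFT filter are illustrative assumptions, not the study’s actual pipeline:

    ```python
    import numpy as np

    def bandpass_fft(signal, fs, lo, hi):
        """Crude brick-wall band-pass: zero out FFT bins outside [lo, hi] Hz.
        Real BCI pipelines use proper FIR/IIR filters; this is illustration only."""
        spectrum = np.fft.rfft(signal)
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
        spectrum[(freqs < lo) | (freqs > hi)] = 0.0
        return np.fft.irfft(spectrum, n=len(signal))

    fs = 128                             # assumed sampling rate in Hz
    t = np.arange(2 * fs) / fs           # two seconds of samples
    # synthetic "raw EEG": a 10 Hz mu-band rhythm buried under 50 Hz mains hum
    raw = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 50 * t)
    mu = bandpass_fft(raw, fs, lo=8, hi=12)  # keep only the 8-12 Hz mu band
    ```

    After filtering, the 50 Hz interference is gone and only the mu-band rhythm remains, which is what the later feature-extraction stages operate on.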

    This system mostly depends on motor imagery (MI) signals, where people imagine moving without actually moving their bodies. By using techniques called Regularized Common Spatial Pattern (RCSP) and Linear Discriminant Analysis (LDA), the EEG data linked to these imagined movements are studied and sorted. This allows users to control things just by thinking about them. Essentially, it creates a direct link between the brain and devices, bypassing the need for things like switches or remote controls.
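    As a rough sketch of how CSP-style spatial filtering plus LDA can tell two imagined movements apart, the toy example below trains on synthetic trials where each “class” boosts the power of a different channel. It illustrates plain CSP rather than the regularized (RCSP) variant used in the study, and the data is random noise standing in for real EEG:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def make_trials(n, boosted_channel, n_channels=4, n_samples=200):
        # synthetic "EEG" trials: white noise with extra power on one channel
        X = rng.standard_normal((n, n_channels, n_samples))
        X[:, boosted_channel, :] *= 3.0
        return X

    def csp_filters(X0, X1):
        # average spatial covariance matrix per class
        C0 = np.mean([x @ x.T / x.shape[1] for x in X0], axis=0)
        C1 = np.mean([x @ x.T / x.shape[1] for x in X1], axis=0)
        # whiten the composite covariance, then diagonalize class 0 in that space
        d, U = np.linalg.eigh(C0 + C1)
        P = np.diag(d ** -0.5) @ U.T
        _, B = np.linalg.eigh(P @ C0 @ P.T)
        return B.T @ P  # rows = spatial filters, sorted by class-0 variance

    def features(X, W, k=1):
        # log-variance of the k most discriminative components from each end
        comps = np.concatenate([W[:k], W[-k:]])
        Z = np.einsum('cd,nds->ncs', comps, X)
        return np.log(Z.var(axis=2))

    def lda_fit(F, y):
        # two-class Fisher LDA: project onto Sw^-1 (mu1 - mu0)
        mu0, mu1 = F[y == 0].mean(0), F[y == 1].mean(0)
        Sw = np.cov(F[y == 0].T) + np.cov(F[y == 1].T)
        w = np.linalg.solve(Sw, mu1 - mu0)
        b = -w @ (mu0 + mu1) / 2
        return w, b

    def lda_predict(F, w, b):
        return (F @ w + b > 0).astype(int)

    X0, X1 = make_trials(40, boosted_channel=0), make_trials(40, boosted_channel=1)
    W = csp_filters(X0, X1)
    F = features(np.concatenate([X0, X1]), W)
    y = np.array([0] * 40 + [1] * 40)
    w, b = lda_fit(F, y)
    acc = (lda_predict(F, w, b) == y).mean()
    ```

    The pipeline mirrors the article’s description: spatial filters maximize the variance contrast between the two imagined movements, log-variance features summarize each trial, and a linear classifier maps the features to a left/right decision.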

    In the study, participants took part in motor imagery (MI) training sessions while their brain activity was recorded using the Emotiv EPOC X headset. They were instructed to stay focused and attentive during the MI training sessions to ensure precise and trustworthy outcomes. They were asked to imagine moving their left or right hand when shown visual cues on a screen. The EEG recordings were then analyzed using various filtering and data processing techniques, including RCSP and LDA, to extract useful information.

    Devices talk to each other in smart homes

    EEG topographical distribution of subject A during the training phase. (a) Fixation cross 0 [s], (b) Arrow cue at 2.75 [s], (c) MI task starting at 4.25 [s], (d) MI task at 5.25 [s]. Description/Image Credit: mdpi.com

    One interesting aspect of this research is how it combines the Motor Imagery-based Brain-Computer Interface (MI-BCI) system with the KNX protocol, a standard way for devices to talk to each other in smart homes. This combination makes it easy to control things like lights using just your thoughts. The study also shows that it’s possible to control two devices at once, and that MI-BCI systems could make smart homes more accessible and efficient.

    MI-BCI systems pick up brain signals linked to imagined movements, which are then analyzed to understand what the user wants to do. The technical process involves capturing EEG signals, cleaning them up, and extracting key features using algorithms like RCSP and LDA. These algorithms identify the imagined actions, which are then translated into commands to control devices.

    The KNX protocol, a widely used standard for both commercial and residential building automation, ensures smooth communication between different gadgets. With this setup, users can control multiple devices simultaneously, such as adjusting lights and temperature, just by thinking about it. In the study, the Emotiv EPOC X headset was used to record EEG data, and OpenViBE was employed for signal processing. The results showed that the system effectively controlled two light bulbs, indicating its potential for handling more complex tasks in smart homes.
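    The last link in that chain, turning a classifier’s output into a device action, can be sketched as a simple dispatcher. The device names and toggle logic below are hypothetical placeholders; a real deployment would speak KNX through a gateway rather than a Python object:

    ```python
    # Hypothetical stand-in for a KNX gateway: two light bulbs with on/off state,
    # mirroring the two-device setup described in the study.
    class FakeGateway:
        def __init__(self):
            self.devices = {"bulb_left": False, "bulb_right": False}

        def toggle(self, device):
            # flip the device's on/off state and return the new state
            self.devices[device] = not self.devices[device]
            return self.devices[device]

    # map the MI classifier's output (0 = left hand, 1 = right hand) to a device
    CLASS_TO_DEVICE = {0: "bulb_left", 1: "bulb_right"}

    def dispatch(gateway, predicted_class):
        """Translate an imagined-movement class into a device command."""
        return gateway.toggle(CLASS_TO_DEVICE[predicted_class])

    gw = FakeGateway()
    dispatch(gw, 0)          # imagined left hand -> left bulb switches on
    state = dispatch(gw, 1)  # imagined right hand -> right bulb switches on
    ```

    Keeping the mapping in a table like this makes it easy to extend the same classifier output to more devices or richer actions later.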

    Are thought-controlled smart homes a reality?

    Neuroheadset and its spatial configuration. (a) Emotiv EPOC X; (b) Electrode configuration. Description/Image Credit: mdpi.com

    The results from the study show that the new method works well, with participants performing well during both training and testing. The system reliably interprets what users want and carries out their commands.

    While this research is a big step forward for thought-controlled smart home systems, challenges remain, such as dealing with day-to-day changes in brain signals. This includes making BCI systems better at adapting to differences in EEG signals from person to person. These signals can vary because of things like stress, tiredness, or even our natural body rhythms. To tackle these challenges, researchers are looking into adaptive algorithms that can adjust to these changes, so the system works well all the time.

    Besides, making the training process easier is important for more people to use these systems. Right now, learning to use BCI systems takes a lot of time, which isn’t practical for many users. So, coming up with simpler ways to train people, maybe using machine learning to speed things up, is a big focus. This might mean making the setup process simpler or making training more fun with games.

    Controlling multiple devices at the same time is also a big goal. Although some progress has been made, it’s still tough for users to manage many appliances smoothly and accurately. Researchers say they are working on smarter BCI setups that can understand more complex commands, letting people control different devices in their smart homes all at once. Being able to control multiple devices together is key to making smart homes easy and efficient to use.

    Final words

    Indeed, there have already been several attempts at developing thought-controlled smart homes. Take, for example, a project led by Eda Akman Aydin at Gazi University in Turkey. Her team built a system in 2015 that enabled people to use their thoughts to control various home devices like the TV, lights, and phone. That system also relied on an EEG cap to pick up brain signals called P300, which arise when someone recognizes a stimulus they intend to act on. Those signals were then translated into commands that the smart home gadgets could carry out.

    Nearly a decade has passed since Akman Aydin’s project. The day when thought-controlled smart home systems come out of the lab and into real life could make life amazingly easier for everyone, from people with disabilities to regular users looking for more convenience. The potential uses are endless and only limited by our imagination.

  • Nvidia’s new text-to-3D model shows the pace of generative AI evolution

    Artificial intelligence is advancing very quickly, and Nvidia’s newest creation, the text-to-3D model LATTE3D, is a perfect example of this rapid progress in AI technology. Following the debut of its powerful Blackwell superchip, designed for training advanced AI models, at the NVIDIA GTC event held from March 18 to 21, Nvidia has introduced LATTE3D, a groundbreaking text-to-3D generative AI model.

    LATTE3D acts like a virtual 3D printer, transforming text prompts into detailed 3D objects and animals within seconds. What sets LATTE3D apart is its remarkable speed and quality. Unlike previous models, LATTE3D can generate intricate 3D shapes almost instantly on a single GPU, such as the NVIDIA RTX A6000, which was used for the NVIDIA Research demo.

    Credit: Nvidia

    This advancement means creators can now achieve near-real-time text-to-3D generation, revolutionizing how ideas are brought to life, according to Sanja Fidler, Nvidia’s vice president of AI research.

    “A year ago, it took an hour for AI models to generate 3D visuals of this quality – and the current state of the art is now around 10 to 12 seconds. We can now produce results an order of magnitude faster, putting near-real-time text-to-3D generation within reach for creators across industries,” Fidler said.

    The researchers said they trained LATTE3D on specific datasets like animals and everyday objects. But developers have the flexibility to use the same model architecture to train it on other types of data.

    For example, if LATTE3D is taught using a collection of 3D plant images, it could help a landscape designer by swiftly adding trees, bushes, and succulents to a digital garden design while working with a client. Likewise, if it’s trained on household items, the model could create objects to fill virtual home environments, assisting developers in preparing personal assistant robots for real-life tasks.

    To train LATTE3D, NVIDIA used its powerful A100 Tensor Core GPUs. Additionally, the model learned from a variety of text prompts generated by ChatGPT, enhancing its ability to understand different descriptions of 3D objects. For example, it can recognize that prompts about various dog breeds should all result in dog-like shapes.

    3D dogs generated by the Nvidia LATTE3D AI model. Image Credit: Nvidia

    NVIDIA Research comprises hundreds of scientists and engineers worldwide, with teams dedicated to various fields including AI, computer graphics, computer vision, self-driving cars, and robotics.

    “This leap is huge. DreamFusion circa 2022 was slow and low quality, but kicked off this generative 3D revolution. Efforts like ATT3D (Amortized Text-to-3D Object Synthesis) chased speed at the cost of quality,” AI creator Bilawal Sidhu wrote on X (formerly Twitter).

  • S24 Ultra’s S-Pen Smell is Real (The Problem isn’t your nose)

    S24 Ultra’s S-Pen Smell is Real (The Problem isn’t your nose)

    If your Samsung Galaxy S24 Ultra’s S Pen smells like burnt plastic, you are not alone. The S Pen dilemma has sparked numerous discussions in online community forums, with users expressing various concerns. Some even feared their house was burning before learning the truth.

    Samsung EU’s moderator, AndrewL, who has been a member of the community since 2018, officially stated the following:

    This isn’t anything to be concerned about. While the S Pen is in its holster, it is close to the internal components of the phone, which will generate heat while in use, and cause the plastic to heat up. This can smell like burning, but it is similar to the smell you might experience after leaving your car in the sun for a few hours. The seats and plastic fittings in the vehicle might smell hot, but this will diminish after it cools. 

    The S Pen was promoted as offering “the magic of touch-free control.” But who knew that alongside its 0.7mm pen tip and 4,096 pressure levels, it would also come with a bonus fragrance experience straight out of a sci-fi flick?

    Now that the discussion around the S24 Ultra’s pen has come up, numerous users have also reported similar smells from previous generations of the S Pen.

    Amid the laughter and jest, it’s essential to recognize the potentially serious implications of such incidents. A burning smell can often signal overheating or malfunctioning electronic components. Whether or not you have an S Pen, that’s worth watching out for.