Category: News

  • Princeton’s AI revolutionizes fusion reactor performance

    Fusion energy holds immense promise. The goal is to harness the power of the stars to generate clean, limitless energy on Earth. However, achieving this requires overcoming significant challenges. Researchers at Princeton and the Princeton Plasma Physics Laboratory (PPPL) have achieved a breakthrough by using machine learning (ML) to control plasma edge bursts in fusion reactors; this advancement enhances reactor performance while preventing damage.

    Achieving a sustained fusion reaction is complex. It requires maintaining a plasma that is dense, hot, and confined long enough for fusion to occur. Yet, as researchers push plasma performance limits, new challenges arise. One major issue is energy bursts escaping from the edge of the plasma. These edge bursts impact performance and damage reactor components over time.

    The team has developed a machine learning method to suppress these harmful edge instabilities. They achieved this without sacrificing plasma performance. Their approach optimizes the system’s suppression response in real time, maintaining high performance without edge bursts at different fusion facilities.

    The researchers published their findings on May 11 in Nature Communications. They demonstrated their method’s success at the KSTAR tokamak in South Korea and the DIII-D tokamak in San Diego. Each facility has unique operating parameters, yet the machine learning approach achieved strong confinement and high fusion performance without harmful plasma edge bursts.

    According to research leader Egemen Kolemen, associate professor of mechanical and aerospace engineering at the Andlinger Center for Energy and the Environment, the team not only demonstrated that their approach could maintain a high-performing plasma without instabilities but also proved its effectiveness at two different facilities.

    “We demonstrated that our approach is not just effective – it’s versatile as well,” said Kolemen.

    High-confinement mode in fusion reactors is a promising approach. It involves a steep pressure gradient at the plasma’s edge, offering enhanced plasma confinement. However, this mode historically comes with instabilities at the plasma’s edge. Traditional methods to control these instabilities, like applying magnetic fields, often lead to lower performance.

    “We have a way to control these instabilities, but in turn, we’ve had to sacrifice performance, which is one of the main motivations for operating in high-confinement mode,” said Kolemen, a staff research physicist at PPPL.

    The machine learning model developed by the Princeton-led team reduces computation time from tens of seconds to milliseconds. “With our machine learning surrogate model, we reduced the calculation time of a code that we wanted to use by orders of magnitude,” said co-first author Ricardo Shousha, a postdoctoral researcher at PPPL and former graduate student in Kolemen’s group.

    This enables real-time optimization. The model monitors the plasma’s status and adjusts magnetic perturbations as needed. This balance between edge burst suppression and high fusion performance is achieved without sacrificing one for the other.
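    The loop described above can be sketched schematically. Everything in the sketch below is invented for illustration – the sensor readings, the threshold, and the adjustment rule are not taken from the published control system:

```python
# Hypothetical closed-loop sketch: the fake risk scores, the 0.5 threshold,
# and the adjustment rule are invented for illustration only.
def read_plasma_state(t):
    """Stand-in for real-time diagnostics; returns a fake edge-burst risk score."""
    return 0.9 if t % 5 == 0 else 0.3  # pretend risk spikes every 5 ticks

def surrogate_predict(risk):
    """Stand-in for the ML surrogate: map burst risk to a requested RMP amplitude."""
    return min(1.0, risk * 1.2)  # push the perturbation harder when risk is high

applied_rmp = []
for t in range(10):                  # one iteration per control tick (~ms scale)
    risk = read_plasma_state(t)
    if risk > 0.5:                   # edge-burst precursor detected
        applied_rmp.append(surrogate_predict(risk))
    else:                            # relax the perturbation to recover performance
        applied_rmp.append(0.1)

print(applied_rmp[:3])  # [1.0, 0.1, 0.1]
```

    The point of the sketch is only the structure: measure, predict with a fast surrogate, and adjust within each millisecond-scale tick.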

    Fusion reactors like KSTAR and DIII-D have shown that this machine learning approach is robust and versatile. The team is now refining their model for future reactors like ITER (Latin for “the way” and originally an acronym for “International Thermonuclear Experimental Reactor”), currently under construction in southern France. One area of focus is enhancing the model’s predictive capabilities to recognize precursors to harmful instabilities, avoiding edge bursts entirely.

    This ML approach represents a significant breakthrough in fusion energy research. It addresses one of the main challenges in developing fusion power as a clean energy resource. The ability to control plasma edge bursts in real time without compromising performance is a game-changer.

    “These machine learning approaches have unlocked new ways of approaching these well-known fusion challenges,” said Kolemen.

    Fusion reactors rely on maintaining a high-performing plasma. Traditional physics-based optimization methods are computationally intense and time-consuming. They can’t keep up with the millisecond changes in plasma behavior. This machine learning method overcomes that limitation.

    The Princeton team’s model uses a fully connected multi-layer perceptron (MLP) driven by nine inputs. These include the total plasma current, edge safety factor, and plasma elongation, among others. The outputs determine the coil current distribution across the top, middle, and bottom 3D coils.

    The researchers used simulations from 8490 KSTAR equilibria to train the model. This approach predicts the optimal 3D coil setup that minimizes core perturbations and ensures safe edge burst suppression. Real-time adaptability is crucial for achieving reliable edge burst suppression in reactors.
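    As a rough illustration of the architecture described above, here is a toy fully connected network mapping nine plasma parameters to three coil-current outputs. The hidden width, weights, and activation below are hypothetical, and the network is randomly initialized rather than trained on the KSTAR equilibria:

```python
# Illustrative sketch only: the actual surrogate's layer sizes, activations,
# and trained weights are not reproduced here.
import math
import random

random.seed(0)

N_INPUTS = 9      # e.g. total plasma current, edge safety factor, elongation, ...
N_HIDDEN = 32     # hypothetical hidden width
N_OUTPUTS = 3     # currents for the top, middle, and bottom 3D coil rows

def make_layer(n_in, n_out):
    """Random weight matrix and zero bias vector for one dense layer."""
    return ([[random.gauss(0, 1 / math.sqrt(n_in)) for _ in range(n_in)]
             for _ in range(n_out)],
            [0.0] * n_out)

def dense(x, layer, activation=None):
    """Compute act(W @ x + b) for one fully connected layer."""
    W, b = layer
    y = [sum(w * xi for w, xi in zip(row, x)) + bi for row, bi in zip(W, b)]
    return [activation(v) for v in y] if activation else y

hidden = make_layer(N_INPUTS, N_HIDDEN)
output = make_layer(N_HIDDEN, N_OUTPUTS)

def surrogate(plasma_state):
    """Map a 9-parameter plasma state to a 3-coil current request."""
    h = dense(plasma_state, hidden, activation=math.tanh)
    return dense(h, output)

coil_currents = surrogate([0.5] * N_INPUTS)
print(len(coil_currents))  # 3
```

    A forward pass through such a small network takes microseconds, which is why a trained surrogate can replace a physics code that needs tens of seconds per evaluation.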

    Maintaining thermal and energetic particle confinement is essential for high fusion performance. However, undesired perturbed fields in the core region, caused by resonant magnetic perturbations (RMPs), affect fast ion confinement. The machine learning approach minimizes these negative impacts by enabling edge burst suppression with very low RMP amplitudes.

    The enhanced RMP hysteresis and rotation increase observed in experiments are promising for future fusion devices. These effects enable suppression of edge-localized modes (ELMs) with minimal RMP amplitudes, reducing negative impacts on core confinement. This adaptive scheme makes achieving high fusion performance in future devices more favorable.

    The team’s method has shown remarkable performance boosts. In the DIII-D tokamak, for instance, the method achieved over a 90% increase in performance from the initial standard ELM-suppressed state. This enhancement isn’t solely due to adaptive RMP control but also to self-consistent plasma rotation evolution.

    The team’s innovative integration of the ML algorithm with RMP control enables fully automated 3D-field optimization and ELM-free operation. This approach is compatible with plasma operation that satisfies ELM suppression conditions. It’s a robust strategy for achieving stable edge burst suppression in long-pulse scenarios.

    Future fusion reactors like ITER face challenges due to metallic walls, which can introduce core instabilities from impurity accumulation. Adaptive control can mitigate these issues by optimizing RMPs to reduce impurity accumulation while preserving high plasma confinement.

    Some capabilities still need enhancement before fully adaptive RMP optimization over entire discharges is possible in future devices. Current strategies rely on ELM detection during optimization, which isn’t ideal for fusion reactors. Identifying and responding to ELM precursors in real time is crucial for complete ELM-free optimization.

    Importantly, the breakthrough from Princeton’s machine learning approach lies in its ability to significantly improve fusion reactor performance while controlling edge bursts. This progress will help us move toward practical and economically sustainable fusion energy.

  • UAE has adopted cloud seeding technology as part of its efforts to augment rainfall

    The United Arab Emirates (UAE) is planning to expand its use of cloud seeding technology to tackle water scarcity, especially considering the region’s dry climate.

    Back in the 1990s, the UAE started using cloud seeding, a method to make clouds produce more rain. Led by Sheikh Mansour Bin Zayed Al Nahyan, the UAE’s vice president, they invested up to $20 million by the early 2000s for cloud seeding research. Collaborating with renowned institutions like the National Center for Atmospheric Research in Colorado and NASA, the UAE set the stage for its cloud seeding program.

    The Need for Cloud Seeding in the UAE

    This initiative is urgent because the region is vulnerable to climate change impacts, worsened by rising global temperatures. With an average rainfall of less than 200 millimeters annually, the UAE faces a sharp contrast to places like London and Singapore, where rain is much more plentiful. Additionally, scorching summer temperatures reaching up to 50 degrees Celsius make water scarcity even worse, especially for agriculture.

    According to the United Nations, by 2025, around 1.8 billion people worldwide will have serious water scarcity issues, with the Middle East standing out as one of the hardest-hit areas. So, the UAE’s decision to use cloud seeding technology is a proactive step to tackle water scarcity challenges head-on.

    At the core of this effort lies the National Center of Meteorology (NCM) in Abu Dhabi, which serves as the primary coordinator of cloud seeding activities. With a dedication of over 1,000 hours annually to cloud seeding, the NCM operates with a well-equipped infrastructure, featuring a network of advanced weather radars and more than 60 weather stations. This setup enables meteorologists to identify suitable clouds for seeding, ultimately amplifying rainfall.

    The process of cloud seeding involves specialized aircraft carrying hygroscopic flares loaded with salt components. Once the right clouds are identified, pilots release these flares, which prompt the formation of ice crystals or raindrops within the clouds. This leads the clouds to release more raindrops than they would naturally.

    Process of Cloud Seeding

    Cloud seeding is a process used to boost rainfall by encouraging clouds to produce more raindrops. At the National Center of Meteorology (NCM) in Abu Dhabi, experts keep a close eye on the weather to find clouds suitable for seeding. When the right clouds are identified, special airplanes equipped with flares loaded with salt are sent up.

    These flares, weighing about 1 kilogram each, are designed to burn slowly and release salt particles into the clouds. Once in the clouds, these particles help to create more raindrops. Unlike some older methods that might use potentially harmful substances, the UAE’s program uses natural salts, which are safer for the environment.

    The NCM is also working on new materials, like nanomaterials coated with titanium oxide, which could be even more effective. These materials are being tested to ensure they work well and are environmentally friendly. This shows the UAE’s commitment to finding innovative and sustainable solutions to water scarcity.

    Criticism, Cost and Practice

    While some critics have raised concerns about the ethics and environmental impact of cloud seeding, supporters emphasize its scientific basis. Notably, the UAE’s program avoids using silver iodide, a common seeding agent, due to environmental worries. Instead, they use natural salts, ensuring safety and environmental responsibility.

    “Our specialized aircraft only use natural salts, and no harmful chemicals,” the NCM said, as reported by news agencies.

    Last year, the NCM revealed plans to enhance and modernize the program by incorporating additional advanced cloud seeding aircraft into its fleet. The Wx-80 turboprop aircraft can hold more cloud-seeding materials and comes with advanced safety features and other systems, as mentioned by the organization last year.

    Prior to this change, the NCM primarily depended on Beechcraft KingAir C90 planes for their cloud seeding missions.

    Cloud seeding usually costs between $0.50 and $1.00 per acre, but it can vary based on factors like area size and seeding method.

    As of this time last year, the cloud seeding operation cost roughly Dh29,000 (US$8,000) for every flight hour, and on average, more than 900 hours of cloud seeding operations were done every year.
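    Taken at face value, the figures above imply an annual operating cost on the order of US$7 million:

```python
# Back-of-the-envelope annual cost from the figures quoted above.
cost_per_hour_usd = 8_000   # ~Dh29,000 per flight hour
hours_per_year = 900        # "more than 900 hours" of operations per year

annual_cost_usd = cost_per_hour_usd * hours_per_year
print(annual_cost_usd)  # 7200000, i.e. roughly $7.2 million per year
```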

    China has the world’s biggest cloud seeding system, firing silver iodide rockets into the sky to increase rainfall over dry regions, including Beijing.

  • UK government enhances fraud detection with advanced AI technology

    The UK government is stepping up its efforts to fight fraud using advanced artificial intelligence (AI) technology. They’ve fortified their main fraud-spotting tool, the Single Network Analytics Platform (SNAP), with some major upgrades.

    Now, SNAP can sniff out shady networks, activities, and users linked to organized crime or dodging sanctions.

    “Criminals should be aware that we’re putting technology on the front line to detect fraud and protect taxpayers’ money,” said Baroness Neville-Rolfe DBE CMG, Minister of State at the Cabinet Office.

    “Adding sanctions and debarment data to our AI fraud detection tool will help us identify organized networks stealing from the public purse.”

    What’s new in SNAP?

    Well, they’ve added these three fresh sets of data:

    • Info on 18,000 UK and US entities facing sanctions, including those slapped with penalties due to Russia’s invasion of Ukraine.
    • Details on 1,000 entities barred by the World Bank from bagging its contracts.
    • Records of 647,000 UK dormant companies – those not actively trading.
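    SNAP’s actual matching logic is not public, but conceptually these new data sets let the platform cross-reference applicants against known-risk lists. A minimal sketch, with entirely hypothetical company names:

```python
# Illustrative only: SNAP's real network analytics are far more sophisticated.
# This sketch simply flags applicants that appear in any of the three data sets.
sanctioned = {"Acme Exports Ltd", "Borealis Holdings"}   # hypothetical names
debarred = {"Borealis Holdings"}                          # World Bank debarment
dormant = {"Shell Co 123 Ltd"}                            # dormant companies

applicants = ["Acme Exports Ltd", "Honest Widgets Ltd", "Shell Co 123 Ltd"]

flags = {}
for name in applicants:
    reasons = []
    if name in sanctioned:
        reasons.append("sanctions list")
    if name in debarred:
        reasons.append("World Bank debarment")
    if name in dormant:
        reasons.append("dormant company")
    if reasons:                 # only record applicants with at least one hit
        flags[name] = reasons

print(flags)
```

    In practice the value comes from combining such hits with network links between entities, rather than simple name matching.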

    What prompted this attention?

    It’s about assisting public entities such as government departments in detecting more individuals attempting to fraudulently obtain public funds through questionable contracts, grants, or loans.

    SNAP, launched in 2023 through a £4 million collaboration with tech experts at Quantexa, is revolutionizing the fight against public sector fraud. And Minister Neville-Rolfe isn’t stopping there. She’s actively planning more initiatives to root out cunning fraudsters using AI, particularly those involved in practices like ‘phoenixing’ – where they repeatedly establish new companies to evade debts.

    Fraud in the UK

    Fraud is widespread in the UK, constituting over 40% of England and Wales’ crime, with 3.5 million incidents reported from April 2022 to March 2023. UK Finance reported £1.2 billion stolen through fraud in 2022, mostly starting online (78%).

    In the past year, while unauthorized fraud saw a decrease of less than one percent, authorized push payment (APP) fraud losses amounted to £485.2 million, notably in investment and purchase fraud cases.

    The government will keep upgrading the AI fraud detection tool with new data regularly, according to Minister Neville-Rolfe.

    In addition, she announced that, in the next year, the government will start some projects using AI to find new ways to spot fraud.

  • GPT-powered humanoid Figure 01 masters speaking and reasoning on the job

    A new breakthrough in artificial intelligence has been achieved through the collaboration of Figure and OpenAI. They’ve demonstrated the impressive abilities of their humanoid robot, Figure 01, in a groundbreaking video released on March 13.

    The progress made by Figure in building humanoid robots is truly impressive. Led by entrepreneur Brett Adcock, the company quickly gathered experts from top companies like Boston Dynamics, Tesla, Google DeepMind, and Archer Aviation. Their goal? To create the first general-purpose humanoid robot that’s commercially viable.

    The journey from idea to reality has been fast. By October, Figure 01 was already up and running, doing basic tasks on its own. By the end of the year, it could learn from watching and was ready to start working at BMW by mid-January.

    During a recent warehouse demonstration, we got a peek into what the future of robotics might look like. This demonstration happened at the same time as Figure announced some big news: they’ve successfully secured Series B funding and teamed up with OpenAI.

    Together, they’re working on creating advanced AI models designed specifically for humanoid robots.

    Adcock, an American technology entrepreneur and the founder and CEO of Figure AI, wrote on a social media platform that the collaboration aims to accelerate Figure’s commercial timeline by enhancing the capabilities of humanoid robots to process and reason from language.

    Adcock shared important details in the post, explaining that Figure 01’s cameras send data to a smart system trained by OpenAI.

    At the same time, Figure’s own networks process images quickly. OpenAI’s work contributes to the robot’s ability to understand spoken commands. This capability ensures that the robot can act precisely in response to verbal instructions.
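    Schematically, the division of labor Adcock describes might look like the following loop. All function bodies below are stand-ins; neither Figure nor OpenAI has published these interfaces:

```python
# Purely schematic pipeline: speech in, vision-language reasoning, action out.
# Every function here is a placeholder, not a real Figure or OpenAI API.
def transcribe(audio):
    """Stand-in for speech-to-text on the spoken command."""
    return "hand me something to eat"

def plan_with_vlm(image, command):
    """Stand-in for the OpenAI-trained model choosing a high-level behavior."""
    return "pick_up_apple" if "eat" in command else "idle"

def execute(behavior):
    """Stand-in for Figure's fast onboard policy networks driving the actuators."""
    return f"executing {behavior}"

command = transcribe(audio=None)
behavior = plan_with_vlm(image=None, command=command)
print(execute(behavior))  # executing pick_up_apple
```

    The key idea is the split: a large model reasons slowly about language and scene, while fast onboard networks close the control loop.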

    Moreover, Adcock also made it clear that the demo wasn’t controlled remotely; instead, this demonstration showed that the robot can work on its own.

    The progress is really impressive, with Adcock aiming for global-scale operations where humanoid robots play a big role.

    They’ll be utilizing this investment to fast-track Figure’s plans for deploying humanoid robots commercially, Adcock writes in the post. These funds will be directed towards various aspects, including AI training, manufacturing, deploying more robots, expanding the engineering team, and pushing forward with commercial deployment efforts.

  • A machine learning-assisted wearable sensing-actuation system could enable speech for individuals without vocal cords

    Great news for those lacking vocal cords!

    Researchers have just come up with a new throat patch that empowers speech for individuals without the use of traditional vocal cords. This machine learning-assisted wearable sensing-actuation system, described in a study published in Nature Communications on 12 March, translates muscle movements into speech without the need for conventional vocal cord function.

    The new device is a flexible patch that attaches to the neck and can convert muscle movements into speech. Essentially, you can communicate without relying on your vocal cords!

    But here’s the really clever part: this patch not only senses throat movements associated with speech, but it also uses those movements to generate its own power. That means no need for batteries or charging!

    This incredible device has offered hope for individuals struggling with voice issues due to conditions like damaged or paralyzed vocal cords, such as those in recovery from throat cancer surgery.

    Lead researcher Jun Chen, from the University of California, Los Angeles, got the idea after experiencing vocal strain during several hours of lecturing sessions, as reported by Live Science. He then began to imagine a way to solve this problem, to make it possible for a person to speak without using their vocal cords, also known as “vocal folds.”

    Motivated by this idea, Chen and his team worked hard to create a flexible patch that could help people who cannot speak or are recovering from a temporary vocal issue.

    The patch, which sticks onto the neck, senses throat muscle movements related to speech and turns them into electricity. What’s impressive is it works without needing batteries, making it easy to use every day.

    Made of five thin layers, including soft, flexible silicone with tiny magnets embedded inside, this patch creates electrical signals when your throat muscles move. Then, a clever computer program turns these signals into speech.

    A machine learning-assisted wearable sensing-actuation system could enable speech for individuals without vocal cords. Image Credit: Agencies/Jun Chen/University of California, LA

    In a demonstration of the innovative tech, eight individuals without speech difficulties tested an algorithm’s ability to translate electrical impulses from a patch into speech.

    The algorithm performed impressively, achieving around 95% accuracy in converting these impulses into understandable speech. Participants uttered phrases like “Merry Christmas” and “I hope your experiments are going well” while stationary, walking, and running.
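    As a toy analogy for that classification step, a nearest-template matcher shows how a measured muscle-movement signature can be mapped to the closest known phrase. The feature vectors below are invented; the real system uses a trained machine learning model on the patch’s raw signals:

```python
# Toy illustration: the "features" and phrase templates are invented, not
# measurements from the actual device.
import math

# Pretend each phrase has a characteristic muscle-movement feature vector.
templates = {
    "Merry Christmas": [0.9, 0.1, 0.4],
    "I hope your experiments are going well": [0.2, 0.8, 0.6],
}

def classify(signal):
    """Return the phrase whose stored feature vector is closest
    (Euclidean distance) to the measured signal."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(templates, key=lambda phrase: dist(templates[phrase], signal))

print(classify([0.85, 0.15, 0.35]))  # Merry Christmas
```

    This also illustrates the limitation Che mentions below: words with near-identical muscle signatures land close to the same template and are hard to tell apart without sentence context.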

    Ziyuan Che, the lead author of the study from the University of California, Los Angeles, reported to AFP that certain words, such as ‘make’ and the name ‘Mark,’ which involve similar movements of throat muscles, could pose challenges for the patch in distinguishing between them.

    “But those two words usually appear in a long sentence like ‘I am going to make dinner,’ or ‘How you doing Mark?’,” Che added.

    Furthermore, in separate tests, participants were asked to either speak the sentences aloud or silently articulate them. Results showed that the algorithm effectively interpreted muscle movements in both scenarios, consistently generating the correct waveforms.

    However, testing was limited to just eight people saying a few phrases, and the device has yet to be tested in people with speech disorders, Chen said.

    Another limitation, as Chen said, is that the current manufacturing process for the patch would need to be scaled up and made more efficient for a large number of patches to be produced.

    Nearly a third of people suffer at least one voice disorder in their lifetime, according to the study.

    This discovery, which is a simple, user-friendly option compared to current devices, could change how people with voice problems communicate.

    However, Che believed that more advanced algorithms would allow the patch to translate larynx muscle movements “without the need of pre-recording the voice signals”. And he also cautioned it would be years before the prototype could potentially be used by patients.

    As of December, approximately 1 in 5 Americans surveyed has reported experiencing a voice disorder.

  • AI could pose ‘extinction-level’ threat to humans, US state department reports

    Whenever artificial intelligence comes up in conversation, we often talk about the risks it poses, like the possibility of it being used as a weapon, the difficulty of controlling really advanced AI and the danger of cyberattacks powered by AI.

    The nightmares finally appear to be turning into reality: a new report commissioned by the US State Department has cautioned that failure to intervene in the advancement of AI technologies could result in “catastrophic consequences”.

    Based on discussions and interviews with over 200 experts, including industry leaders, cybersecurity researchers, and national security officials, the report from Gladstone AI, released this week, highlights serious national security risks related to AI. It warns that advanced AI systems have the potential to cause widespread devastation that could threaten humanity’s existence.

    Jeremie Harris, CEO and co-founder of Gladstone AI, said the ‘unprecedented level of access’ his team had to officials in the public and private sector led to the startling conclusions. Gladstone AI said it spoke to technical and leadership teams from ChatGPT owner OpenAI, Google DeepMind, Facebook parent Meta and Anthropic.

    “Along the way, we learned some sobering things,” Harris said in a video posted on Gladstone AI’s website announcing the report.

    “Behind the scenes, the safety and security situation in advanced AI seems pretty inadequate relative to the national security risks that AI may introduce fairly soon.”

    The report has identified two main dangers of AI: the risk of it being used as harmful weapons and the concern of losing control over it, which could have serious security consequences.

    “Though its effects have not yet been catastrophic owing to the limited capabilities of current AI systems, advanced AI has already been used to design malware, interfere in elections, and execute impersonation attacks,” the report reads.

    AI is already making a big economic impact, according to Harris. He said it could help us cure diseases, make new scientific discoveries, and tackle challenges we once thought were impossible to overcome.

    The downside? “But it could also bring serious risks, including catastrophic risks, that we need to be aware of,” Harris cautioned.

    “And a growing body of evidence – including empirical research and analysis published in the world’s top AI conferences – suggests that above a certain threshold of capability, AIs could potentially become uncontrollable.”

    The report’s release has led to calls for immediate action from policymakers. The White House has emphasized President Biden’s executive order on AI as a crucial step in managing its risks.

    White House spokesperson Robyn Patterson said President Joe Biden’s executive order on AI is the “most significant action any government in the world has taken to seize the promise and manage the risks of artificial intelligence.”

    “The President and Vice President will continue to work with our international partners and urge Congress to pass bipartisan legislation to manage the risks associated with these emerging technologies,” Patterson said.

    However, experts agree that stricter measures are needed to address AI’s potential threats.

    About a year ago, Geoffrey Hinton, known as the “Godfather of AI,” left his job at Google and raised concerns about the AI technology he helped create. Hinton has suggested that there’s a 10% chance AI could lead to human extinction within the next 30 years.

    Hinton and many other leaders in the AI field, along with academics, signed a statement last June stating that preventing AI-related extinction risks should be a global priority.

    Despite pouring billions of dollars into AI investments, business leaders are increasingly worried about these risks. In a survey conducted at the Yale CEO Summit last year, 42% of CEOs said AI could potentially harm humanity within the next five to ten years.

    Gladstone AI’s report highlighted warnings from notable figures like Elon Musk, Lina Khan, Chair of the Federal Trade Commission, and a former top executive at OpenAI about the existential risks of AI. Some employees in AI companies share similar concerns privately, according to Gladstone AI.

    Gladstone AI also revealed that experts at cutting-edge AI labs were asked to share their personal estimates of the chance that an AI incident could cause “global and irreversible effects” in 2024. Estimates varied from 4% to as high as 20%, though the report noted these estimates were informal and likely biased.

    The speed of AI development, particularly Artificial General Intelligence (AGI), which could match or surpass human learning abilities, is a significant unknown. The report notes that AGI is seen as the main risk driver and mentions public statements from organizations like OpenAI, Google DeepMind, Anthropic, and Nvidia, suggesting AGI could be achieved by 2028, although some experts believe it’s much further away.

    Disagreements over AGI timelines make it challenging to create policies and safeguards. There’s a risk that if AI technology develops slower than expected, regulations could potentially be harmful.

    Gladstone AI has strictly warned that the development of AGI and similar capabilities “would introduce catastrophic risks unlike any the United States has ever faced.” This could include scenarios like AI-powered cyberattacks crippling critical infrastructure or disinformation campaigns destabilizing society.

    In addition, the report also raises concerns about weaponized robotic applications, psychological manipulation, and AI systems seeking power and becoming adversarial to humans. Advanced AI systems might even resist being turned off to achieve their goals, the report suggests.

    The report strongly suggests establishing a new AI agency and implementing emergency regulations to limit the development of overly capable AI systems. Moreover, it calls for stricter controls on the computational power used to train AI models to prevent misuse.

  • xAI will open-source Grok this week, Elon Musk says

    Elon Musk tweeted on Monday that xAI, his artificial-intelligence startup, will be sharing its powerful chatbot, Grok, with the public this week.

    This development comes after the billionaire’s recent legal battle with OpenAI. Earlier this month, Musk sued ChatGPT-maker OpenAI and its CEO Sam Altman, saying they had given up the startup’s original mission to develop artificial intelligence for the benefit of humanity and not for profit.

    Musk showed his interest in open-source AI during a podcast chat with computer scientist Lex Fridman in November. Shortly after, his company released the AI model to a small user base.

    In December, xAI launched Grok, its rival to ChatGPT, for Premium+ subscribers of the X social media platform. The Tesla CEO aims to create an AI focused on seeking maximum truth with Grok.

    By open-sourcing Grok, xAI joins a growing trend among companies, including Meta and Mistral, in sharing their AI technologies with the public.

  • Machine learning has given rise to automated AI scientists

    The impact of large language models (LLMs), particularly transformer-based models like GPT-4, has been witnessed across various fields, such as chemistry, biology, and code generation. Recently, another noteworthy advancement has emerged: the creation of Coscientist, an artificial intelligence system driven by GPT-4, which autonomously designs, plans, and executes complex experiments across diverse scientific tasks.

    According to a study published in the journal Nature on December 20th, Coscientist excels in accelerating research, particularly in the optimization of reactions, presenting autonomous capabilities in experimental design and execution. The system integrates large language models with tools like internet and documentation search, code execution, and experimental automation.

    In a catalytic cross-coupling experiment aimed at synthesizing biphenyl through Suzuki-Miyaura and Sonogashira reactions, Coscientist displayed remarkable autonomous capabilities. Utilizing internet searches and data analysis, the system autonomously selected appropriate reactants, reagents, and catalysts from available resources.

    Results showed strict reasoning

    Coscientist consistently avoided errors in reactant selection (e.g., never choosing phenylboronic acid for the Sonogashira reaction). Varied preferences in selecting specific bases and coupling partners were observed across different experiments.

    Coscientist’s capabilities in chemical synthesis planning tasks.
    Figure/Description Credit: nature.com

    Interestingly, the system provided justifications for its choices, displaying its reasoning regarding reactivity and selectivity.

    Following its autonomous experimental design, Coscientist wrote a Python protocol for the liquid handler, specifying the necessary volumes for the reactions. Upon minor errors in protocol (e.g., incorrect heater-shaker module method name), Coscientist consulted documentation autonomously and rectified the protocol.
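    The generated protocol itself is not reproduced here, but in spirit it resembles a script of liquid transfers with explicit volumes. The sketch below uses a made-up LiquidHandler class, not a real lab-automation API:

```python
# Hypothetical stand-in for a liquid-handler interface: the real Coscientist
# protocol targeted actual lab hardware whose API is not reproduced here.
class LiquidHandler:
    def __init__(self):
        self.log = []

    def transfer(self, volume_ul, source, dest):
        """Queue a single pipetting step with an explicit volume."""
        self.log.append(f"transfer {volume_ul} uL from {source} to {dest}")

handler = LiquidHandler()

# Example reaction setup in the spirit of the generated protocol:
# combine reactant, coupling partner, catalyst, and base in one reaction well.
handler.transfer(50, source="phenylboronic_acid", dest="well_A1")
handler.transfer(50, source="aryl_halide", dest="well_A1")
handler.transfer(10, source="pd_catalyst", dest="well_A1")
handler.transfer(20, source="base_solution", dest="well_A1")

print(len(handler.log))  # 4 transfer steps queued
```

    A self-correcting agent like Coscientist would regenerate such a script after reading an error message from the hardware, as described above for the heater-shaker method name.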

    a, A general reaction scheme from the flow synthesis dataset analysed in c and d. b, The mathematical expression used to calculate normalized advantage values. c, Comparison of the three approaches (GPT-4 with prior information, GPT-4 without prior information and GPT-3.5 without prior information) used to perform the optimization process. d, Derivatives of the NMA and normalized advantage values evaluated in c, left and centre panels. e, Reaction from the C–N cross-coupling dataset analysed in f and g. f, Comparison of two approaches using compound names and SMILES string as compound representations. g, Coscientist can reason about electronic properties of the compounds, even when those are represented as SMILES strings. DMSO, dimethyl sulfoxide.
    Figure/Description Credit: nature.com

    Revolutionizing research?

    The integration of LLMs like GPT-4 with scientific tools signifies a potential revolution in scientific research. These systems offer rapid problem-solving, autonomous experimentation, and advanced reasoning, indicating promising strides toward further scientific discovery and innovation.

    The responsible use of these systems is essential to mitigate the potential risks associated with their misuse. Ethical considerations and safety implications must be addressed as the technology continues to advance.

  • Emerging AI-infused interior design trends & Microsoft Teams’ tech transformation

    Technology’s impact on interior design is in full swing. AI-driven tools are reshaping how spaces are envisioned and crafted. Microsoft Teams’ recent AI-driven features, announced at Ignite 2023, offer a glimpse into the future of workspace customization, balancing futuristic elements with pragmatic functionalities for everyday work environments.

    Microsoft Teams can now use AI to clean up your background for you. Image Credit: Microsoft

    Ignite 2023, Microsoft’s annual IT pro conference held November 15–16, revealed a batch of Teams updates. Among these, AI-driven voice isolation and a “decorate your background” feature stand out. Voice isolation, which reduces background noise and other voices, rolls out in 2024; the “decorate your background” feature arrives in Teams Premium next year.

    Immersive spaces in Teams are coming, allowing avatars in 3D environments and activities like gaming or virtual marshmallow roasting. Microsoft Mesh for these spaces becomes available in January. These additions, however, might not be everyone’s cup of tea.

    Useful features include customizable emoji reactions, forwarding chats, and new IT management tools. Moreover, enhancements from the re-architected Teams app extend to web experiences, promising better performance and efficiency.

    AI’s influence isn’t limited to Microsoft. A surge in AI-powered interior design apps is evident, driven by startups like Reimagine Home and CollovGPT. These platforms offer AI-generated room improvements based on user inputs, attracting millions of visitors and intriguing real estate agents and furniture retailers.

    Meanwhile, the excitement around AI interior design apps comes with bugs and limitations. Glitches in beta software and AI’s learning curve plague these platforms. They often struggle with differentiation and accuracy in identifying items or generating designs. However, advancements like ControlNet have enhanced precision, enabling these tools to better adhere to original space parameters.

    For interior designers, AI opens doors with AI-powered design tools, VR/AR experiences, personalized recommendations, predictive analytics, and enhanced communication tools. These advancements are revolutionizing design creation and client engagement.

    In this regard, Microsoft Teams has taken a step forward. Its ‘decorate your background’ feature takes a unique spin, analyzing a user’s room and enhancing it virtually – eliminating clutter or adding foliage to spruce up the setting.

    On the performance side, the re-architected Teams client promises double the speed and reduced memory usage for web users on Edge and Chrome.

    AI’s growing, transformative role in interior design is not without its hurdles. But the potential for efficiency gains and unique design concepts is significant.


    Are you incorporating AI in your design process? Share your experiences in the comments below.