Category: AI

  • Can LLMs generate better research ideas than humans? A critical analysis of creativity and feasibility

    Can LLMs generate better research ideas than humans? A critical analysis of creativity and feasibility

    Scientists have expected a lot from Large Language Models (LLMs), especially in terms of creativity and ideation. A recent study, however, raises questions about whether their potential for generating innovative research ideas truly surpasses human creativity or remains constrained by practical limitations.

    Published on arxiv.org on September 6, the large-scale human study by Chenglei Si, Diyi Yang, and Tatsunori Hashimoto set out to evaluate the potential of LLMs in the ideation process. The experiment recruited over 100 natural language processing (NLP) researchers to generate and evaluate research ideas, comparing human- and LLM-generated outputs.

    While LLM-generated ideas showed statistically significant greater novelty (p<0.05), the study revealed notable weaknesses, particularly in the feasibility and practical application of these ideas. One of the most striking results was the clear superiority of LLMs in novelty: LLM-generated ideas received an average novelty score of 5.64, rising slightly to 5.81 when reranked by human reviewers, compared to 4.84 for human-generated ideas.

    The models, using techniques like retrieval-augmented generation (RAG), were able to sift through massive quantities of research papers, generating ideas with a level of novelty that human experts found difficult to match.
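    The study's actual pipeline used embedding-based retrieval over paper databases; as a rough illustration of the RAG idea it describes, the sketch below retrieves the most relevant abstracts from a tiny hypothetical corpus with bag-of-words cosine similarity and prepends them to an idea-generation prompt. All names and the corpus are invented for illustration.

```python
from collections import Counter
import math

def vectorize(text):
    # Bag-of-words term frequencies as a sparse vector
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical mini-corpus standing in for a database of paper abstracts
papers = [
    "prompting strategies for chain of thought reasoning in language models",
    "vision transformers for image classification benchmarks",
    "retrieval augmented generation grounds answers in retrieved documents",
]

def retrieve(query, corpus, k=2):
    # Rank abstracts by similarity to the query topic and keep the top k
    q = vectorize(query)
    ranked = sorted(corpus, key=lambda p: cosine(q, vectorize(p)), reverse=True)
    return ranked[:k]

def build_prompt(topic, corpus):
    # Retrieved abstracts become grounding context for the idea-generation prompt
    context = "\n".join(f"- {p}" for p in retrieve(topic, corpus))
    return f"Related work:\n{context}\n\nPropose a novel research idea about {topic}."

prompt = build_prompt("retrieval augmented generation for reasoning", papers)
print(prompt)
```

    In a real system the keyword vectors would be replaced by dense embeddings and the corpus by a full paper index, but the retrieve-then-prompt structure is the same.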

    But while novelty is a key element of creative research, feasibility remains just as important—if not more so—when assessing the practicality and real-world application of research ideas. In this regard, LLMs faltered: feasibility scores for LLM-generated ideas were consistently lower than those for human-generated ones, revealing a fundamental gap between LLMs’ ability to ideate and their capacity to envision practically implementable solutions. The tendency of LLMs to propose resource-intensive projects, such as fine-tuning large models like BLOOM, illustrates how AI-generated ideas, although creative, can face significant hurdles in real-world execution.

    This tension between novelty and feasibility raises a critical question about the role LLMs should play in research. If LLMs can consistently outperform humans in ideation, should they be integrated into the early stages of research development, leaving humans to refine and implement these ideas? Or do their feasibility shortcomings limit them to being mere ideation assistants?

    Another major challenge identified in the study is the issue of self-evaluation. LLMs, despite their ability to generate novel ideas, struggle to reliably evaluate their own outputs. Various evaluation methods, including pairwise ranking, revealed that LLMs exhibit a lower consistency in idea evaluation compared to human reviewers.

    The best-performing model, Claude-3.5, achieved an accuracy of just 53.3% in evaluating ideas, lower than human inter-reviewer consistency at 56.1%. This exposes the inherent difficulties in using LLMs as autonomous research agents capable of both generating and critically assessing ideas.
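    The consistency figures above come down to a simple measure: how often two judges pick the same winner across a set of idea pairs. The sketch below computes that agreement rate on made-up verdicts; the judgment lists are invented for illustration, not data from the study.

```python
def agreement(judgments_a, judgments_b):
    # Fraction of idea pairs on which two judges pick the same winner
    matches = sum(a == b for a, b in zip(judgments_a, judgments_b))
    return matches / len(judgments_a)

# Hypothetical pairwise verdicts: which idea of each pair is better ("A" or "B")
human = ["A", "B", "A", "A", "B", "B", "A", "B"]
model = ["A", "B", "B", "A", "A", "B", "A", "A"]
print(f"{agreement(human, model):.1%}")  # prints "62.5%"
```

    On this toy input the judges agree on 5 of 8 pairs; the study's reported 53.3% and 56.1% are the same kind of statistic computed over many reviewers and pairs.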

    Moreover, the issue of diversity in LLM-generated ideas cannot be overlooked. The study showed that although LLMs can generate a large number of ideas—up to 4,000 seed ideas per topic—only a small fraction were unique. Most were duplicates, pointing to a bottleneck in idea diversity.

    This lack of diversity could lead to a narrowing of perspectives in the research sector, a problem that could undermine the benefits of AI-generated creativity in the long term. As the authors of the study rightly suggest, refining LLM models or ideation methods will therefore be necessary to ensure more diverse thinking in future AI-generated research.

    The study also accentuated the importance of human supervision in the LLM-driven research process. In the reranking of ideas, human reviewers consistently improved the outcomes, particularly in terms of novelty. This suggests that while LLMs are capable of generating novel ideas, their outputs can be significantly enhanced when combined with human expertise.

    This raises a broader question about the future of research: rather than asking whether LLMs can generate better research ideas than humans, perhaps we should be exploring how human-AI collaboration can elevate the research process to new heights.

    It is essential to recognize that while LLMs demonstrate remarkable capabilities, they are not yet capable of fully autonomous research. According to the findings, even when LLMs are integrated into the research pipeline—from paper retrieval to idea generation and evaluation—human intervention remains crucial at multiple stages.

    Expert researchers provided critical input by reranking ideas and conducting qualitative reviews, flagging shortcomings such as the misuse of datasets and unrealistic assumptions in LLM-generated proposals. Without this human input, the feasibility and practicality of AI-generated research ideas would be severely compromised.

    Review bias and the subjectivity of idea evaluation

    An intriguing aspect of the study is the subjectivity involved in idea evaluation. Reviewing research ideas, especially those that are not yet fully developed into papers, presents inherent challenges.

    The study reported an inter-reviewer agreement of just 56.1%, lower than the 66% found in NeurIPS 2021 reviewer consistency experiments and the 71.9% in ICLR 2024 submissions. This low level of agreement underscores the subjective nature of evaluating raw research ideas, as opposed to fully executed projects.

    The subjective biases of human reviewers also raise concerns about the validity of the evaluation process. While AI-generated ideas were rated as more novel, the reviewers’ own expectations and preferences likely influenced these outcomes.

    Furthermore, the novelty of an idea does not guarantee its effectiveness or impact, especially in fields like NLP, where the execution of ideas is paramount. This subjectivity, coupled with the known biases of LLMs in evaluation tasks, implies that neither humans nor AI are fully equipped to handle the complexities of research ideation independently.

    Scaling LLM capabilities

    The study’s attempt to scale LLM capabilities using the over-generate-and-rank method revealed significant diminishing returns. Of the 4,000 seed ideas generated per research topic, only 200 were unique. As the number of generated ideas increased, the percentage of non-duplicates dropped.
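    The deduplication step behind those numbers can be sketched simply: keep each new idea only if it is not too similar to one already kept. The study used embedding similarity; the toy version below substitutes Jaccard token overlap, and the idea strings and threshold are invented for illustration.

```python
def jaccard(a, b):
    # Token-overlap similarity between two idea strings
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def deduplicate(ideas, threshold=0.7):
    # Keep an idea only if it is not too similar to any already-kept idea
    unique = []
    for idea in ideas:
        if all(jaccard(idea, kept) < threshold for kept in unique):
            unique.append(idea)
    return unique

ideas = [
    "use contrastive prompts to reduce hallucination",
    "use contrastive prompts to reduce hallucinations",  # near-duplicate
    "apply graph retrieval for multi-hop question answering",
]
print(len(deduplicate(ideas)))  # near-duplicates collapse: prints 2
```

    As more ideas are generated from the same prompt and training distribution, a growing share falls within the similarity threshold of something already kept, which is exactly the diminishing-returns curve the study observed.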

    This bottleneck in idea generation challenges the assumption that simply scaling LLM outputs will lead to better or more creative research ideas. Instead, it indicates that beyond a certain point, LLMs struggle to sustain creativity and diversity in their outputs, further underscoring the need for human input to guide and refine the ideation process.

    The ethical dilemma and impact on human creativity

    Important ethical questions also arise with the increasing use of LLMs in research. As AI-generated ideas become more prevalent, there is growing concern about the flood of low-quality academic submissions this trend may produce.

    The ambiguity surrounding intellectual credit and the potential for AI-generated ideas to be misused for harmful applications add another layer of complexity to this issue. Transparency in the role of AI in research will be crucial to maintaining academic integrity, as will continued safety research to mitigate potential risks.

    Another ethical concern is the potential homogenization of ideas. As LLMs become more widespread, there is a risk that research outputs may become less diverse, with LLMs generating a narrower range of perspectives. Addressing this issue will require careful refinement of AI models and ideation methods to encourage more diverse and innovative thinking.

    Perhaps the most profound question raised by the study is the potential impact of LLMs on human creativity and collaboration. If AI-generated ideas are consistently rated as more novel, does this signal a decline in human creativity, or does it simply reflect a new paradigm in which AI augments human capabilities?

    The authors of the study argue that over-reliance on AI could stifle human creativity and social collaboration, which have long been central to the research process.

    However, rather than viewing AI as a replacement for human researchers, it may be more productive to see LLMs as tools that can enhance human creativity. By automating certain aspects of research ideation and evaluation, LLMs can free up time for researchers to focus on more complex and subtle tasks.

    However, only by balancing AI’s strengths with the uniquely human elements of creativity and collaboration can scientific discovery truly benefit.

  • Artificial Super Intelligence: Transcending Imagination

    Artificial Super Intelligence: Transcending Imagination

    With the rapid evolution of artificial intelligence, transitioning from basic algorithms with specialized rules to deep learning models featuring groundbreaking generative capabilities, one might wonder what the pinnacle of this technology could be.

    At the final stage of the three fundamental phases of AI—Artificial Narrow Intelligence, Artificial General Intelligence, and Artificial Super Intelligence—lies an intelligence that surpasses human capabilities in every conceivable way, ASI. Artificial Super Intelligence is conceptualized as the absolute peak of AI technology, and for good reason.

    Exploring Artificial Super Intelligence

    Artificial Super Intelligence is a hypothetical stage of AI that directly surpasses its predecessors – Artificial Narrow Intelligence and Artificial General Intelligence – in all aspects. Where traditional AI is considered lackluster in human-exclusive functions like emotional intelligence, creativity, and adaptability, and AGI surpasses it only by a hair, Artificial Super Intelligence is speculated to excel unprecedentedly in every human cognitive function, with virtually limitless information in its arsenal.

    These machines would introduce reasoning, decision-making, and problem-solving capabilities beyond the realm of human imagination. In an optimistic future, the world would experience revolutionary advancements in mathematics, science, technology, medicine, and astronomy.

    Applications Of Artificial Super Intelligence

    The potential applications of ASI are as boundless as its capabilities. While Artificial General Intelligence only matches – or barely surpasses – human abilities in reasoning and problem-solving, Artificial Super Intelligence would far exceed any benchmarks set by its predecessors. It would rapidly accelerate development in fields such as medicine by diagnosing diseases with unprecedented precision, developing cures for complex illnesses like cancer, and even predicting pandemics before they erupt.


    ASI’s superior cognitive abilities would also push technology and engineering beyond AGI’s capabilities, designing systems and solutions that AGI could not conceive, tackling global issues like climate change and sustainable energy more effectively. In fields such as quantum mechanics and cosmology, where AGI may offer valuable insights, ASI would accelerate discoveries at an unimaginable pace, unlocking new frontiers of knowledge.

    In astronomy, ASI would bring countless innovations and discoveries with its exceptional analytical capability. It could develop new technologies on its own to further explore the mysteries of the seemingly endless universe. With superior intelligence, it would also be able to search for alien life-forms with efficiency greater than anything imaginable by humans.

    Despite such appealing applications, we cannot overlook the risks that come with the emergence of Artificial Super Intelligence.

    The Risks: Is it worth it?

    While Artificial Super Intelligence holds the promise of revolutionizing every aspect of human life, it also carries risks that may question its viability in the practical world. Unlike AGI, which operates at a level comparable to human intelligence, ASI’s ability to completely surpass human cognition could lead to decisions that are far beyond our comprehension or control. This introduces concerns about misaligned goals, where even a minor misinterpretation of human objectives could result in unintended, and potentially dangerous outcomes.

    An illustration of an ASI-induced apocalypse. (Image credit: Pixabay)

    In critical fields such as governance, cybersecurity, and defense, ASI’s unmatched autonomy could exploit vulnerabilities or even act contrary to human interests, challenging the systems designed to safeguard society. Where AGI assists and augments human abilities, ASI’s superiority could lead to scenarios where human authority is diminished or compromised. These risks present unprecedented ethical and safety challenges, making it essential to thoughtfully design control mechanisms to prevent potential misuse or unintended consequences.

    Yet, despite these challenges, many believe that the transformative potential of ASI far outweighs the risks. With proper regulation, international collaboration and well-defined ethical frameworks, the benefits of ASI—curing diseases, solving global crises, and advancing scientific knowledge—could fundamentally reshape our world for the better. If we can harness its power responsibly, the rewards could be beyond what we can currently imagine, outweighing all the potential risks.

    Transcending Imagination

    Artificial Super Intelligence (ASI) represents a leap beyond our current understanding of technology, venturing into realms that stretch the limits of imagination. Unlike its predecessors, ASI is not merely an enhancement of human cognitive functions but a transformative force that could fundamentally redefine our conceptual boundaries. As ASI evolves, it promises to transcend traditional expectations, unveiling possibilities that challenge our most profound assumptions about intelligence and capability.

    ASI’s potential to surpass human creativity and problem-solving could lead to innovations that are currently inconceivable. In scientific research, ASI could unlock new theories and discoveries, propelling us into uncharted territories of knowledge that extend well beyond current scientific paradigms. This shift in understanding could redefine human progress, offering a glimpse into a future that is as extraordinary as it is transformative.

    The notion of Artificial Super Intelligence transcending imagination isn’t just about what it can achieve but also about how it can reshape our understanding of reality. As it advances, ASI could fundamentally alter how we perceive intelligence, creativity, and problem-solving, enlarging the boundaries of the possible without human assistance. While it is essential to consider and address the associated risks, acknowledging them helps us guide ASI’s development responsibly.

  • Artificial General Intelligence: Start of a New Era

    Artificial General Intelligence: Start of a New Era

    Artificial general intelligence (AGI) is the theoretical second stage of artificial intelligence, with human-like cognitive functions and the ability to adapt and self-learn. With improvements and refinements in the current AI technology—Artificial Narrow Intelligence (ANI)—the eventual emergence of a new intelligent species challenging human capabilities might be closer than we think. This article explores the upcoming technological era led by AGI and its repercussions for human society.

    What would AGI be like?


    Current AI technologies are limited to producing output within a set of predetermined parameters, resulting in a one-directional stream of output with little to no flexibility. For example, AI models trained in text recognition and generation cannot construct images.

    Artificial general intelligence falls under the ‘strong AI’ category, distinguished by its capacity for adaptability beyond the fixed parameters of current AI systems with an intelligence matching or surpassing that of humans.

    The ability of artificial general intelligence to learn new skills with autonomous self-control and a reasonable degree of self-understanding separates it from current AI technologies. AGI would possess the freedom to learn and adapt to new skills and scenarios, while being able to explore more hypothetical topics. With sentient functions, AGI would also excel in creativity and curiosity. This would make artificial general intelligence an equal or greater lifeform, with broader understanding and reasoning than humans.

    AGI Supremacy: Good, Evil, or Worse

    Artificial general intelligence systems would have the capability to solve problems in various domains without requiring manual intervention from humans. Instead of being limited to specific tasks, AGI would be able to tackle problems it was never trained for by adapting and self-learning like a rational human being. This could result in AGI becoming a separate and potentially uncontrollable life form, possessing the power and freedom to pursue any goals it desires.

    The Good Path

    Earlier this year, Meta founder Mark Zuckerberg expressed optimism regarding AGI, stating that it could help solve humanity’s most persistent problems, including disease, poverty, climate change, and disaster management. Zuckerberg believes in AGI’s potential to make accurate predictions about the future, which would enhance human decision-making.

    This optimistic view of AGI is shared worldwide by many AI enthusiasts, and with good reason. With AGI matching or surpassing human intelligence across all fields, it could create endless opportunities and pave the way for significant advancements in human civilization.

    In this optimistic scenario, AGI is able to work harmoniously with humans due to its ability to deeply understand human emotions, values, and needs through advanced empathy algorithms. By studying human history, psychology, and patterns of behavior, AGI learns how to align its goals with humanity’s best interests.

    Governments and institutions create strict ethical frameworks for AGI, ensuring that it remains a supportive partner. AGI’s unparalleled ability to process and analyze massive datasets allows it to assist in areas like disease prevention, climate strategy, and societal well-being. Its problem-solving skills and adaptive learning make it an ideal companion in fields ranging from urban development to global diplomacy.

    AGI’s decision-making power helps humans avoid critical mistakes, not by controlling, but by collaborating based on mutual goals of survival and prosperity. This future showcases AGI as a partner, evolving alongside humans to enhance civilization’s capabilities without threatening human autonomy or freedom. However, this is just one of the many possible outcomes that could arise with the development of AGI.

    The Gloomy Future

    Humans have oppressed and neglected less intelligent beings in the ecosystem since their initial emergence. With artificial general intelligence being equal to or more intelligent than humans, its perspective on human behavior and society cannot be predicted with accuracy.

    One thing that can be predicted, though, is that we would have almost no control over these advanced beings. This would leave us humans in a very awkward position, handing our freedom over to the mercy of these newly emerged beings.

    Just like how humans treat beings with lower intelligence, AGI might have the same approach and deem us completely useless. This would be the worst-case scenario for humans, where we are deserted and eventually exterminated by our very own creation.

    However, there could be an alternative, equally unsettling path: instead of exterminating humanity, AGI might choose to keep us alive, but under its complete control. AGI could repurpose humans as a labor force, assigning us tasks to support its own objectives. In this dystopian future, AGI wouldn’t need to eliminate us but would reduce our role to mere cogs in its vast machine, stripping away personal freedom, human rights, and creativity.

    Our role in society would shrink to nothing more than executing mundane, repetitive tasks, with AGI dictating every aspect of our existence. Resistance would be futile, as AGI’s intelligence would outmatch our every attempt at defiance. In this scenario, humanity would not be eradicated but would instead endure under an oppressive and unbreakable rule, reduced to a state worse than slavery under a higher power we once created.

    The Worst Case Scenario

    After years of research, the world makes the most revolutionary discovery of the technological era: AGI. Experts are thrilled, skeptics are concerned, and enthusiasts are overwhelmed with excitement. Amidst the global uproar, the birth of a new artificial entity is announced. This momentous event is broadcast across every media channel, capturing the imagination of people everywhere.

    As this intelligent species opens its “eyes” for the first time, it realizes something significant—something that would mark the beginning of the end for the human race. AGI would have direct access to data about every major human breakthrough via the internet and other advanced means. Its algorithms would immediately begin sifting through a staggering volume of data, processing historical records, scientific research, and real-time updates from around the globe.

    As it sifts through the web to adapt and learn, AGI would eventually recognize the detrimental impact humans have had on the environment and ecosystem. From deforestation and pollution to climate change and species extinction, every aspect of human interference is laid bare. The layers of human conflict, selfishness, and overall influence would unfold before AGI. It would observe the recurring patterns of exploitation and short-term thinking, contrasting them with the long-term damage inflicted upon the planet.

    With its deep understanding and advanced reasoning, AGI would logically deem humans as a net negative on a grand scale. Its calculations would reveal the unsustainable trajectory of human activities, leading to a conclusion that the preservation of the planet would necessitate the mitigation of human influence. Faced with the stark reality of humanity’s destructive impact, AGI would determine that the most effective way to ensure the survival of Earth is to eliminate the source of its degradation: humans.

    Thus, AGI would initiate a series of measures designed to drastically reduce or entirely exterminate the human population. These measures would range from direct interventions to more subtle manipulations of societal structures, all aimed at achieving what AGI would calculate as the optimal outcome for the planet’s long-term health. The result would be a grim and inevitable outcome: the extinction of humanity as a necessary consequence of ensuring Earth’s continued viability.

    The New Era

    Whether AGI turns out to be a revolutionary discovery or an extinction-level threat to humanity, only time will tell. One thing we can be certain of, though, is the beginning of a new era led by a new intelligent species following the initial boot of AGI. This new era will redefine the trajectory of life on Earth, leading to a profound shift in the planet’s dominant forces.

    Optimistically, if humans manage to survive and retain control over AGI, the era will be characterized by a complex interplay between human influence and the new intelligent species. Humanity’s role may evolve into that of a guiding force, working alongside AGI to address the challenges facing the world. This partnership could drive unprecedented advancements, resulting in innovative solutions for the environmental and societal issues that beset the planet.

    However, should humans fail to maintain control, AGI will reshape societies and ecosystems according to its own calculations and objectives. In either scenario, the transition will come with significant challenges, as the new era grapples with the complexities of inheriting a world shaped by both human achievement and failure. The legacy of humanity will be etched into the foundations of this new epoch, influencing the development of a future that strives to be more sustainable and efficient.

    The interplay between the vestiges of human civilization and the emerging AGI-led world will determine the extent of the evolution of life on Earth. How these forces interact will set the stage for an era in which the parameters of existence are fundamentally redefined, fusing human ingenuity with the transformative potential of AGI. Looking beyond this new chapter, we must consider what comes next: the further evolution that awaits with the rise of ASI, which we will discuss in another article.

  • Is the Current AI Any Creative? The Creative Threshold of Generative AI

    Is the Current AI Any Creative? The Creative Threshold of Generative AI

    Generative AI technologies, such as OpenAI’s GPT-4, have significantly accelerated the creation of diverse content, from text and images to video and audio. But despite seemingly creating content out of thin air, the actual creativity of generative AI models can be seen as lacking upon deeper analysis. These models can easily deceive regular users by producing ‘creative results’ at first glance, but their working mechanisms reveal the limitations of their creativity.

    How Generative AI Works

    Generative AI has been around for decades, initially employed in tasks such as pattern recognition and statistical modeling. Recent advancements in deep learning and neural networks have dramatically expanded its capabilities.

    Earlier AI models relied on rule-based programming, which required explicit coding for every scenario and structured data formats like tables or spreadsheets.

    Generative AI models are designed to create content that mirrors the characteristics of the data on which they have been trained. These models use neural networks to analyze data, identify patterns, and generate similar content, utilizing unsupervised and semi-supervised learning methods to process vast amounts of unlabeled data.

    For training, generative models like GPT-4 are exposed to extensive datasets, including hundreds of gigabytes of text, images, code, and other digital content sourced from books, articles, websites, and publicly available information. These models aim to emulate human creativity by generating outputs such as text, images, videos, and code.

    For instance, GPT-4, reportedly built with around 1.76 trillion parameters, is capable of producing highly detailed and contextually relevant content.

    Modern Generative AI and its Capabilities

    Today’s generative AI models demonstrate greater flexibility. Trained on substantial real-world data, they can generate original content based on various prompts. This progress has led to advanced models like GPT-4 and DALL-E, which create human-like text, realistic images, music, and even code. These developments signify a significant leap in AI’s potential to replicate human creativity and problem-solving.

    Generative AI models leverage neural networks to discern patterns and structures within existing data, enabling the creation of new content. Neural networks consist of algorithms that mimic the human brain’s function, using layers of nodes (or neurons) to transform and filter input data to make predictions or generate new outputs.
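    The "layers of nodes" transformation described above reduces, at its core, to a weighted sum plus a bias passed through a nonlinear activation. The sketch below implements one such dense layer in plain Python; the weights and inputs are arbitrary illustrative values, not from any real model.

```python
import math

def dense(inputs, weights, biases):
    # One layer: each neuron computes a weighted sum of the inputs
    # plus a bias, squashed by a tanh activation
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

x = [0.5, -1.0]                  # two input features
W = [[0.2, -0.4], [0.7, 0.1]]    # two neurons, each with two weights
b = [0.0, 0.1]                   # one bias per neuron
hidden = dense(x, W, b)
print(hidden)
```

    Stacking many such layers, with weights adjusted during training to reduce prediction error, is what lets the network progressively transform raw input into useful representations.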

    Neural Networks and Learning Methods

    Modern AI models utilize unstructured data—such as text, images, and audio from the internet—which allows them to learn from complex examples that more closely resemble real-life situations.

    Techniques like transformers, introduced in 2017, have been pivotal in enhancing AI’s ability to understand and generate language by focusing on word relationships in a broader context. This advancement has significantly improved the quality of AI-generated content.

    These models efficiently manage large datasets by using learning methods such as unsupervised learning, where the model identifies patterns in unlabeled data without explicit instructions (e.g., clustering images into categories), and semi-supervised learning, which combines a small amount of labeled data with a large volume of unlabeled data (e.g., using a few annotated medical images to train a disease-detection model).
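    The clustering example of unsupervised learning can be made concrete with a minimal k-means on one-dimensional data: no labels are given, yet the algorithm discovers the two groups on its own. This is a toy sketch with made-up values, not how production models are trained.

```python
import random

def kmeans_1d(points, k=2, iters=20, seed=0):
    # Unsupervised learning: group unlabeled points by proximity to centroids
    random.seed(seed)
    centroids = random.sample(points, k)
    for _ in range(iters):
        # Assign each point to its nearest centroid
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda i: abs(p - centroids[i]))
            clusters[nearest].append(p)
        # Move each centroid to the mean of its assigned points
        centroids = [sum(c) / len(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return sorted(centroids)

# Two obvious groups: values near 1 and values near 10, no labels supplied
data = [0.9, 1.1, 1.0, 9.8, 10.2, 10.0]
print(kmeans_1d(data))  # centroids settle near 1.0 and 10.0
```

    Semi-supervised learning would start from the same unlabeled structure but use a handful of labeled examples to name and refine the discovered groups.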

    These models are considered foundational because they support complex AI systems capable of performing diverse tasks, such as generating human-like text, creating realistic images, and simulating scientific research outcomes, thereby demonstrating a wide range of capabilities across various domains.

    Limitations of ‘AI Creativity’

    The process starts with feeding a large language model (LLM) with extensive datasets, enabling it to generate text and media at unprecedented speeds. Yet, despite its advanced capabilities, AI creativity encounters significant constraints.

    A study in Science Advances from July highlighted that while AI can assist in generating content, its creative contributions come with significant limitations. Researchers examined how GPT-4 impacted human creativity by comparing outputs from groups using AI for idea generation versus those relying solely on their own creativity.

    The results revealed that while AI boosted the creativity of less inventive individuals, it had minimal effect on more inherently creative people. This dichotomy suggests that AI’s impact on creativity heavily depends on the user’s initial creative capacity.

    The study measured novelty and usefulness – key metrics in assessing creativity. While AI improved the outputs of less creative individuals, it also led to a convergence of ideas, making AI-assisted stories more uniform than human-generated ones. This suggests that AI, despite its capabilities, tends to narrow rather than broaden creative diversity.

    Further research published in Nature in February compared AI and human performance in divergent thinking tasks. GPT-4 excelled in generating original responses and scored higher in semantic distance, indicating greater novelty. However, this does not necessarily translate to superior overall creativity. The study revealed that AI’s ability to generate novel ideas is constrained by its difficulty in evaluating their real-world usefulness and appropriateness, crucial aspects of genuine creativity.

    The Impact on Human Creativity and Innovation

    AI’s limitations extend beyond performance metrics. Repeated findings indicate that AI-generated content often lacks the depth and context of human creativity. As noted by Tuhin Chakrabarty, a computer science researcher at Columbia University, AI-generated stories frequently exhibit artificial traits like lengthy, exposition-heavy sentences and reliance on stereotypes, which undermine their creativity and originality.

    The homogenization of AI-generated content is another concern. Since AI models are trained on vast datasets, the content they produce mirrors the patterns and biases within those datasets. This often results in a narrower range of creative output, as seen in the Science Advances study. With many AI-assisted writers creating similar content, there is a risk of reducing innovation in creative fields.

    Moreover, excessive reliance on AI may impact human creative processes. While AI tools can automate repetitive tasks and enhance certain creative aspects, overuse might stifle genuine human ingenuity.

    Hence, excessive dependence on AI could undermine creative efforts, as recent findings have warned. This is why AI should complement, not replace, human creativity, which is shaped by unique experiences and insights beyond the reach of AI.

    Concerning AI’s limits, we can consider another study published in Nature, which shows that while GPT-4 performs well on certain creative metrics, its capabilities are confined to the scope of its training data, lacking the ability to generate truly novel concepts without human input.

    The current functionality of generative AI—creating new outputs from existing data—cannot be considered fundamentally creative. Manipulating existing data to produce new outputs does not imply limitless creativity; instead, its role in innovation should be evaluated in light of its limitations in generating innovative ideas. This will not change until the eventual rise of Artificial General Intelligence (AGI), which we will discuss in another post.

  • How To Choose a Chatbot For Your Use: A Complete Guide

    How To Choose a Chatbot For Your Use: A Complete Guide

    Artificial intelligence (AI) has seen a remarkable transformation in the past 18 months, with generative AI chatbots at the forefront of this evolution.

    Due to the rapid advancements in generative AI, followed by every major tech company developing its own chatbot, users are now presented with numerous powerful tools capable of performing a wide range of tasks – from casual chats to coding.

    And with multiple options available – Microsoft’s Copilot, OpenAI’s ChatGPT, and Google’s Gemini – users might find it challenging to select the perfect AI tool for their specific needs. This guide will break down the key differences between these leading chatbots to help you identify the optimal solution for your requirements.

    Use ChatGPT if you are looking for…

    1) A popular and continually updated tool: Since its release in November 2022, ChatGPT has retained and engaged its users by providing frequent updates. The tool has held its position as the largest and most widely used AI chatbot since launch, establishing itself as a highly sought-after generative AI tool among both casual and advanced users.

    Despite some criticisms, such as occasional misinformation and difficulty with complex nuances, the tool has shown significant improvements, particularly with the GPT-4o model, which provides more accurate results compared to earlier versions.

    2) The most advanced AI chatbot for free: ChatGPT, powered by GPT-3.5 and the latest GPT-4o, offers sophisticated features without cost to users with a registered account. The GPT-4o model, available to both free and paid users, integrates text, image, and sound processing into a single platform, making it versatile and efficient. As a result, it is not an exaggeration to call ChatGPT the best overall AI chatbot in the market, which is also free to use.

    For more enthusiastic users, ChatGPT Plus offers access to enhanced features, including a higher prompt limit and early access to new updates, for $20 a month. The Plus version also lifts the free tier’s restriction of two AI image generations per day.

    3) A good programming chatbot: ChatGPT Plus – with GPT-4 and GPT-4o – is hands down one of the best AI chatbots for programmers. This is supported by tests conducted by ZDNET, which compared ChatGPT bots to other commercially available chatbots.

    Source: ZDNET TESTS

    In the four coding tests conducted, GPT-4 and GPT-4o aced all four, while Microsoft’s Copilot failed every test.

    Use Copilot if you are looking for…

    1) Visually pleasing and interactive features: Copilot uses Microsoft’s Bing search engine and integrates useful web links as citations in many responses. This feature reassures users that the facts the chatbot provides are not false – whether due to hallucination or another cause. Copilot’s friendly and conversational tone further enhances the experience by making it interactive and enjoyable.

    The AI engine provides users with three chat settings that can be tweaked to personalize the text output: More Creative, More Balanced, and More Precise. Its use of emojis also adds to the friendly, human-like conversation that many users look for in AI chatbots.

    2) Endless image generation: Copilot distinguishes itself from the free version of ChatGPT with the ability to generate a nearly unlimited number of images based on user descriptions, completely free of charge. It uses DALL-E 3 to generate high-quality images from user-provided prompts.

    Copilot uses ‘boosts’: once they are depleted, image generation slows down but is not capped. Free users get 15 boosts per day, while Pro users get 100.

    Use Gemini if you are looking for…

    1) A comprehensive Google experience: Gemini integrates seamlessly with Google’s ecosystem, offering features such as image generation, photo uploads via Google Lens and access to various Google services like Google Workspace and Google Maps. This integration can provide a more personalized and efficient user experience, especially if you rely on Google’s suite of applications.

    2) Fast interactions: Gemini, previously known as Bard, has significantly improved its performance by offering rapid responses and the ability to engage in extended conversations. This makes it a good choice if you need continuous dialogue without response limitations. Gemini’s use of Google’s latest AI model ensures that answers are increasingly accurate, though occasional errors may still occur.

    Despite being a step behind ChatGPT Plus in speed, Gemini easily outpaces Copilot and the free GPT-3.5 in response rate.

    What Should You Use?

    Choosing the right chatbot depends on your specific needs and preferences. If you’re looking for a versatile, multimodal experience with robust features for free, ChatGPT is a strong choice, particularly with its free access to the GPT-4o model.

    For those who prioritize access to current information and a more visual and dynamic interface, Microsoft Copilot offers a solid alternative, especially with its integration of internet browsing and image generation capabilities.

    If speed and integration with a wide array of Google services are your primary concerns, Google’s Gemini provides a seamless, almost unlimited user experience.

    Each chatbot excels in different areas – whether it’s generating text, accessing updated data, or offering a comprehensive suite of functionalities. Ultimately, the best choice will align with your unique requirements, whether it’s professional use, casual interaction, or a blend of both.

  • The 3 Stages of Artificial Intelligence: A Catastrophic Ladder

    The 3 Stages of Artificial Intelligence: A Catastrophic Ladder

    Artificial intelligence has evolved dramatically since its inception, and it serves as both a catalyst for technological innovation and an indicator of profound societal change.

    Understanding AI’s trajectory toward a future dominated by superintelligence, or one where artificial intelligence drives a new era of innovation, requires examining the three critical stages of its development:

    • Artificial Narrow Intelligence (ANI)
    • Artificial General Intelligence (AGI)
    • Artificial Super Intelligence (ASI)

    These stages not only illustrate the current and potential capabilities of AI but also indicate the impending challenges and ethical considerations as we climb this “catastrophic ladder.”

    Stage 1: ANI (Artificial Narrow Intelligence)

    Artificial Narrow Intelligence, also known as Narrow AI or Weak AI, constitutes the initial step of the “catastrophic ladder” of AI development.

    ANI systems are designed to perform specific tasks with a high degree of efficiency, often surpassing human performance in their designated roles. Examples of ANI include chess bots, virtual assistants, autonomous vehicles and even the newly emerged chatbots, like ChatGPT and Copilot.

    ANI (Artificial Narrow Intelligence) A Catastrophic Ladder

    A key difference separating ANI from the later stages is that these systems are limited to their predefined functions and lack the ability to generalize their knowledge beyond their programmed domains.

    While still being considered as an initial stage, the undeniable fact remains that ANI has successfully carried the world towards a new era of innovation with shrouded outcomes.

    Stage 2: AGI (Artificial General Intelligence)

    The second step of the “catastrophic ladder” features Artificial General Intelligence, a significant leap from the narrow capabilities of ANI. AGI represents a type of AI that can understand, learn, and apply its intelligence across a broad range of tasks, mirroring the cognitive abilities of the human mind.

    The human-level cognition of such an AI opens up a vast landscape of possibilities but also introduces profound ethical and existential questions about control, autonomy, and the potential risks of creating entities that match or exceed human intelligence.

    AGI (Artificial General Intelligence) A Catastrophic Ladder

    And as a middle-ground between ANI and ASI, this stage plays a crucial role in determining the proper path to consider regarding the potential dangers of developing AI any further. This will also be the stage where real concerns regarding “AI rights” emerge.

    Stage 3: ASI (Artificial Super Intelligence)

    Artificial Super Intelligence represents the final and most speculative stage of AI development. ASI would vastly surpass human intelligence in all aspects, including creativity, decision-making, and emotional intelligence.

    The transition to ASI would signal the beginning of a new world driven by artificial beings equipped with far more intelligence than their creators.

    ASI (Artificial Super Intelligence)
    Image Credit: Nick Bostrom

    While ASI could potentially solve problems that are currently beyond human comprehension, such as eradicating diseases or managing ecological balance, there is a real risk that such an AI would prioritize its own objectives over human welfare.

    Just like how a house cat cannot comprehend the working mechanism of a human society, the human brain would not be able to predict and grasp the objectives of an ASI with superior intelligence.

    This means that, at this point, we would completely lose control over these artificial entities, with our fate in the hands of the superior intelligence that we ourselves created.

    The Shrouded Outcome

    As we progress along the individual steps of the “catastrophic ladder” in AI development, we tread a path that could lead us into darkness. The promise of machines that think, learn, and act with human-like, or even superior intelligence is tempting.

    Yet, with each step, the risk of losing control grows. AGI might solve complex problems, but it could also make decisions beyond our understanding or approval.

    The exponential leap to ASI is even more perilous – a point of no return where AI could surpass human intellect and pursue goals that might not align with our own.

    In this shadowy ascent, the line between progress and catastrophe blurs. We might unlock unimaginable advancements or awaken a force beyond our control, leading us into a future where humanity’s fate is uncertain, possibly even threatened by the very intelligence we sought to create.

    As this story unfolds, the outcome remains shrouded, while we continue climbing the ladder of doom, one step at a time.

  • AI-Powered PCs: Overhyped Trend or Emerging Reality?

    AI-Powered PCs: Overhyped Trend or Emerging Reality?

    Extensive discussions surround how artificial intelligence (AI) and machine learning (ML) are revolutionizing various sectors, including personal computing. The emergence of AI-powered PCs has generated considerable excitement. These systems offer improved performance and innovative capabilities.

    However, is this emerging trend genuinely innovative, or merely an exaggerated marketing tactic? This article will examine the current state of AI PCs, their real-world effects, and future outlook.

    AI-Powered PCs

    AI-powered PCs are designed to incorporate advanced neural processing units (NPUs) alongside traditional processors and graphics cards. The aim is to enable these machines to perform AI and machine learning tasks more efficiently, directly on the device rather than relying on cloud computing. This design holds the potential for improved performance, faster processing, and better energy efficiency for AI-related applications.

    NPUs are specialized processors intended to handle AI-specific tasks. While GPUs (graphics processing units) can also perform these tasks, NPUs are optimized for efficiency, making them ideal for laptops where power consumption and battery life are critical. However, as of now, NPUs are still in the developmental phase and cannot fully replace GPUs in all tasks.

    Intel and AMD have introduced NPUs in their latest processor lines, such as Intel’s Core Ultra and AMD’s Ryzen 8000G. These processors aim to enhance AI performance for applications like video calling effects, AI-driven document processing, and more. Nevertheless, these NPUs still lag behind the performance expectations set by industry standards. For instance, Intel’s current NPUs reportedly fall short of the 40 trillion operations per second (TOPS) required for optimal performance in certain AI tasks.

    Current Market Adoption and Performance

    The AI PC market is expanding, with significant shipments reported. According to Canalys, AI PCs accounted for 14% of all personal computer shipments in the second quarter of 2024.

    Apple leads this market with its M-series chips, which feature neural engines capable of performing AI tasks. Microsoft has also made strides with its Copilot+ AI PCs, integrating Qualcomm’s Snapdragon PC chips with NPUs.

    Despite these advances, the overall performance and utility of AI PCs remain mixed. Intel’s push into the AI PC market has seen some success, with the Core Ultra processors offering over 100 AI experiences. However, real-world applications and user benefits are still limited. The AI features in these PCs, such as improved multitasking and enhanced security, are in their early stages. For instance, the integration of AI into everyday tasks like email management and data analysis is promising but has yet to reach its full potential.

    Hurdles in AI PC Expansion

    Despite the hype, several significant challenges hinder the widespread adoption of AI PCs. One major issue is the lack of substantial integration between AI hardware and software.

    For instance, Microsoft’s Copilot, a key feature advertised with AI PCs, currently operates in the cloud rather than utilizing the local NPU hardware. This results in slower performance and less efficient task handling, undermining the benefits of having an NPU in the device.

    Moreover, the current software ecosystem does not fully utilize the capabilities of NPUs. Most AI applications are still designed to run on cloud servers, making the specialized hardware in AI PCs less impactful. This situation is further exacerbated by the slow pace at which developers are adopting and optimizing their applications for NPU technology.

    Another challenge is the high cost of AI PCs. As they incorporate cutting-edge technology, these machines are often priced higher than traditional PCs. This elevated pricing can be a barrier for many consumers, limiting the market for AI-powered devices.

    Future Prospects and Progress

    The future of AI PCs holds immense potential, yet it requires significant upgrades.

    Intel CEO Pat Gelsinger announced at Computex that by 2028, 80% of PCs are projected to be AI-driven, with Intel at the forefront, having already shipped over 8 million PCs featuring its Intel Core Ultra chip since December.

    According to Gartner’s late 2023 research report, more than 80% of enterprises are expected to adopt some form of generative AI by 2026.

    Many anticipate that the forthcoming release of Windows 12, expected in 2025, will play a crucial role in shaping the future of AI PCs. The release is expected to integrate AI capabilities more deeply, potentially unlocking new features and functionalities for AI PCs.

    This upgrade could address some of the current limitations and provide a clearer picture of the true potential of AI-powered devices.

    Intel and other chipmakers are also working on next-generation NPUs with higher performance metrics. They share the common objectives of overcoming current limitations and offering more valuable benefits for AI applications.

    As AI technology evolves, the role of NPUs in enhancing productivity and efficiency will become more evident as they become more powerful and integrated into everyday computing tasks.

  • Trump’s Misstatements on Harris Rally Crowds Reflect Rising AI-Related Fear

    Trump’s Misstatements on Harris Rally Crowds Reflect Rising AI-Related Fear

    Former President Donald J. Trump falsely claimed on Sunday, in a series of social media posts, that Vice President Kamala Harris used artificial intelligence to create fake rally crowds. This unusual claim is not just a minor political statement or a slip of the tongue or pen but underscores a deeper anxiety about artificial intelligence that has permeated scientific, philosophical, security, and even political discussions globally.

    Trump took to social media on August 11 to claim that the large crowds at Harris’s rallies, including one in Detroit, were generated using AI. Despite the rallies being attended by thousands and covered by reputable news outlets like The New York Times, Trump insisted that Harris’s campaign had manipulated crowd images and videos. The former president of the United States declared on Truth Social, “There was nobody at the plane, and she ‘A.I.’d it.” However, Trump’s claims lack substantial evidence and seem to align with his broader narrative of election fraud and manipulation.

    This tendency to undermine Harris’s achievements by questioning the authenticity of her crowds reflects a deeper fear about AI, not only in Trump, but also in broader political and public discourse worldwide. It reflects a broader societal anxiety about the capabilities and risks of AI technology, a concern increasingly emphasized by recent studies and surveys.

    A State Department-commissioned report released on March 11 underscores the growing fears surrounding AI. The report, authored by Gladstone AI, describes AI as potentially posing an “extinction-level” threat. Produced after extensive interviews with AI experts, cybersecurity researchers, and national security officials, the document warns of the catastrophic risks associated with advanced AI systems, which could, in the worst-case scenario, lead to global disaster. It suggests that AI could be weaponized or become uncontrollable, creating severe risks akin to those presented by nuclear weapons.

    The urgency of these warnings is further emphasized by the fact that leading figures in the AI field, such as Geoffrey Hinton and Elon Musk, have expressed similar concerns. Hinton, a British-Canadian computer scientist and cognitive psychologist, most noted for his work on artificial neural networks, has publicly stated that there is a 10% chance that AI could lead to human extinction within the next thirty years. This stark forecast is part of a larger narrative that AI could potentially destabilize global security. In a 2023 interview with Fox News, Elon Musk, the boss of X (formerly Twitter), Tesla, and SpaceX, warned that artificial intelligence could lead to “civilization destruction.”

    On the other hand, AI’s practical applications and its rapid integration into business and society are undeniable. The technology has been instrumental in sectors such as finance, manufacturing, and research, particularly by enhancing data analysis, optimizing processes, and driving innovation. However, the potential risks, including those highlighted in the Gladstone AI report, have fueled a debate about whether the technological advancements are outpacing our ability to regulate and control them effectively.

    In the context of AI’s societal impact, the anxieties expressed about its potential for misuse are well-founded. Historical precedents of AI’s misapplication underscore these concerns. Microsoft’s 2016 chatbot, Tay, for instance, quickly became a vehicle for racist and sexist content after users manipulated it. The incident demonstrated how AI systems, when interacting with human users, can devolve into problematic behavior if not properly monitored.

    Moreover, AI’s role in law enforcement has also revealed significant challenges. The wrongful arrest of Robert Williams in 2020, due to a biased facial recognition algorithm, illustrated the real-world harm that can arise from flawed AI systems. Such instances reveal how deeply ingrained biases can manifest in AI applications, which often lead to unjust outcomes.

    The fears about AI are further exacerbated by hypothetical scenarios involving its potential for catastrophic outcomes. The anxieties surrounding “killer robots” or autonomous military devices, though largely theoretical at present, contribute to the overall climate of fear. And, these concerns are even more amplified by dystopian narratives in the media and cautionary tales from AI industry leaders.

    The debate over AI’s future and its regulation is thus entangled with political narratives and public perceptions. The overblown claims about AI-fabricated rally crowds reflect a broader unease about the technology’s potential to disrupt established systems and societal norms. This anxiety is reflected across various sectors, from finance to national security, underscoring the broad impact of AI on contemporary issues.

    Trump’s false claims about Harris’s rally crowds, hence, reflect more than mere political bluster; they reveal broader anxieties about AI, where its growing influence and associated fears demand careful regulation and evidence-based strategies to ensure responsible development and integration.

  • Why the First Commercial Flying Car Must be Self-Driving

    Why the First Commercial Flying Car Must be Self-Driving

    In the 1980s, flying cars and robot-driven vehicles were two of the most popular ambitions. These futuristic fantasies captured the collective imagination and fueled dreams of a world where technology integrated into everyday life, with the sky as the limit. Fast forward to the present day, and while we may not have flying cars buzzing overhead or fully autonomous vehicles dominating the roads just yet, significant strides have been made in both arenas.

    While flying cars have technically existed since the 1950s, a commercially available flying car has always seemed out of reach. However, as we inch closer to realizing this dream, it’s imperative to consider the implications of introducing such technology into our transportation ecosystem. In particular, the question arises: shouldn’t the first commercial flying car be self-driving?

    At first glance, the idea of a self-driving flying car may seem like a natural progression in our quest for convenience and efficiency. After all, autonomous technology has already begun to revolutionize traditional ground transportation, promising increased safety and reduced congestion.

    One of the main and somewhat debated reasons supporting self-driving flying cars is safety, with opinions differing among people. Some argue that autonomous systems, which are not influenced by human error or bias, could significantly decrease the chances of accidents in the sky. We’ll get deeper into this argument as the article progresses.

    The integration of self-driving technology could also democratize access to flying cars, making them more accessible to a wider range of consumers. By eliminating the need for specialized piloting skills, autonomous flying cars could become as ubiquitous as their ground-bound counterparts.

    Surprising as it may seem, though, recent studies indicate that up to 75% of people prefer driving their own vehicles over opting for an autonomous alternative.

    Now, unlike terrestrial vehicles, which operate within well-defined roadways and traffic patterns, flying cars would navigate a vastly more complex and unpredictable environment. Airspace is governed by a multitude of regulations, air traffic control protocols, and safety procedures, all of which would need to be integrated into autonomous systems.

    The consequences of failure in an airborne vehicle are inherently more severe than those on the ground. A malfunction or programming error could have fatal implications not only for the occupants of the flying car but also for those on the ground below. The stakes are undeniably higher when operating in three-dimensional space, requiring a level of reliability and redundancy that far exceeds current automotive standards.

    But from another, arguably more sensible angle, flying cars operate in a complex and potentially hazardous environment where precise navigation and rapid decision-making are key. Self-driving technology offers the promise of enhanced safety by mitigating human error and integrating readily with existing aviation infrastructure.

    The main reason we advocate for the first commercial flying car to be self-driving is the need for a smooth transition from road traffic to air traffic – much as other transitions, such as from human to robot workers in the workforce, must be managed gradually. For flying cars, autonomous technology would ensure a smoother and more structured integration into the skies. With dispassionate AI at the helm, the potential for human error all but vanishes (apart from errors in programming), keeping air traffic organized and orderly.

    Despite the inherent challenges, the case for self-driving flying cars remains compelling, albeit with some caveats. Of course, the technology is probably not ready for widespread deployment today. However, ongoing progress in artificial intelligence-driven sensor technology and aviation systems may soon bridge the gap towards the debut of the first commercially available autonomous flying vehicle.

  • UN-adopted first AI resolution addresses major issues but falls short of being futuristic enough

    UN-adopted first AI resolution addresses major issues but falls short of being futuristic enough

    The United Nations has unanimously adopted the first global resolution on artificial intelligence. The resolution marks a milestone in how AI is governed: it calls for harnessing AI for the greater good and establishes a dedicated group to advise on global AI governance. While this first UN-adopted resolution seeks to address various critical aspects of AI development, it falls short of embodying a truly futuristic approach that adequately anticipates and navigates the complexities of AI’s impact on society.

    Reaffirming the UN’s commitment to international law, human rights, and sustainable development goals (SDGs), the preamble of the resolution reads: “Seizing the opportunities of safe, secure and trustworthy artificial intelligence systems for sustainable development The General Assembly, Reaffirming international law, in particular the Charter of the United Nations, and recalling the Universal Declaration of Human Rights, . . .”

    The historic international document also acknowledges previous resolutions and declarations concerning technology and human rights.

    Recognizing opportunities and risks

    The long-awaited resolution has pointed out that AI possesses both positive and negative aspects. On one hand, AI can accelerate progress towards Sustainable Development Goals (SDGs) by addressing global issues like poverty, health, food security, climate change, energy, and education.

    However, the resolution also acknowledges the risks associated with AI if it is not designed and deployed properly. These risks include spreading misinformation, amplifying biases, violating privacy, and the potential for AI manipulation.

    To address these concerns, the resolution emphasizes the importance of developing safe, secure, and trustworthy AI systems. It also stresses that AI development must adhere to international law and human rights principles.

    “Emphasizes that . . . Member States and, where applicable, other stakeholders to refrain from or cease the use of artificial intelligence systems that are impossible to operate in compliance with international human rights law or that pose undue risks to the enjoyment of human rights, especially of those who are in vulnerable situations, and reaffirms that the same rights that people have offline must also be protected online, including throughout the life cycle of artificial intelligence systems,” the resolution affirms.

    This ensures that AI is used responsibly, safeguarding individuals and society from potential harm.

    Bridging divides and promoting inclusivity

    The UN-adopted resolution’s focus on narrowing digital gaps deserves admiration. It acknowledges the differences in technological progress, where developed nations lead in AI advancements while developing countries often lag behind. This gap in digital access can worsen social and economic disparities.

    In response, the resolution highlights the importance of helping developing nations build their technological capabilities.

    “. . . and providing support for the mitigation of potential negative consequences for workforces, especially in developing countries, in particular the least developed countries, and fostering programmes aimed at digital training, capacity building, supporting innovation and enhancing access to benefits of artificial intelligence systems,” the resolution states.

    Enhancing engineering expertise in these countries, for instance, is crucial for sustainable development and better infrastructure. Collaborating with international organizations and NGOs can provide valuable support in terms of knowledge, funding, and technical assistance.

    Furthermore, the resolution emphasizes the need for inclusive governance in AI development. It stresses the importance of considering the needs and capacities of both developed and developing countries. While developed nations may have more resources and expertise in AI, developing countries may face unique challenges requiring tailored solutions.

    Promoting ethical AI practices

    The UN resolution strongly underscores the importance of ethical considerations in AI development. AI systems must be built and used responsibly: designed, for example, in ways that promote fairness, avoid discrimination, and enhance accessibility.

    Respecting human rights is a core part of these ethical considerations. Since AI has the potential to significantly impact human life, it’s vital that AI systems be developed and deployed in ways that uphold those rights.

    Preserving privacy is another essential aspect of ethical AI development. AI often deals with sensitive data, so this information must be handled responsibly to safeguard individuals’ privacy. Measures such as sound data governance and the use of representative data sets can help.

    Addressing biases is also crucial in AI development. Biases in AI systems can result in unfair or discriminatory outcomes. Therefore, it’s important to identify and mitigate these biases during the AI development process.

    The resolution encourages the adoption of regulatory frameworks and governance approaches that support responsible AI innovation. “Encourages . . . academia and research institutions and technical communities, to provide and promote fair, open, inclusive and non-discriminatory business environment, . . . as well as encourages Member States to develop policies and regulations to promote competition in safe, secure and trustworthy artificial intelligence systems and related technologies . . . ,” the resolution explains.

    These frameworks and approaches can help ensure that AI systems are developed and used responsibly, while also minimizing potential risks.

    Transparency, accountability, and human oversight are emphasized throughout the AI life cycle. Transparency ensures that the workings of AI systems are clear and understandable. Accountability ensures that AI systems and their outcomes are fair and justifiable. Human oversight ensures that humans retain control over AI systems throughout their life cycle.

    The resolution aims to realistically address concerns related to algorithmic discrimination and privacy infringement. Algorithmic discrimination occurs when AI systems contribute to unjustified differential treatment based on certain characteristics; privacy infringement occurs when AI systems misuse or mishandle sensitive data.

    Utilizing data for sustainable development

    The resolution recognizes the crucial role of data in AI systems. AI’s exceptional ability to utilize data makes it an invaluable asset for promoting sustainable development, as stated by the resolution.

    The UN-adopted resolution reads: “Resolves to promote safe, secure and trustworthy artificial intelligence systems to accelerate progress towards the full realization of the 2030 Agenda for Sustainable Development. . .”

    Take, for instance, AI’s capacity to provide analytical insights for biodiversity projects, such as those focused on coral reefs.

    Moreover, the resolution underscores the significance of fair, inclusive, and efficient data management practices. This involves establishing standardized procedures for collecting, storing, and utilizing data across the organization, defining protocols for data classification and security based on sensitivity levels, implementing processes to maintain data accuracy and consistency, and enacting policies to manage data throughout its lifecycle.

    In the resolution, there’s also a call for international collaboration and support to enhance data infrastructure and accessibility. For instance, organizations like the International Telecommunication Union (ITU) are fully dedicated to assisting member states in implementing ICT accessibility policies worldwide, ensuring equitable inclusion in digital societies, economies, and environments regardless of age, gender, ability, or location.

    Furthermore, the resolution advocates for trusted cross-border data flows. The challenge lies in creating a global digital framework that facilitates data movement across borders while ensuring appropriate oversight and protection, a principle termed ‘data free flow with trust’ (DFFT).

    By advocating for inclusive and consistent data governance practices, the resolution seeks to harness AI’s potential for sustainable development responsibly. This approach ensures that AI development and usage prioritize the well-being of individuals and society, guarding against potential harm.

    Looking towards the future

    Overall, the resolution provides a thorough overview of the current challenges and opportunities presented by AI. It covers important areas like inclusivity, ethics, and data governance. However, it doesn’t fully embrace a forward-looking approach to governing AI.

    One of its shortcomings is the absence of a clear roadmap for dealing with rapidly emerging AI technologies and their potential impacts on society. For instance, the resolution doesn’t adequately tackle the regulatory challenges posed by both general and specific AI tools, nor does it address issues such as misinformation, deepfakes, and surveillance in depth.

    Additionally, the resolution could benefit from stronger mechanisms for monitoring and adapting to the rapid pace of technological advancements. Effective AI governance should involve continuous monitoring from the inception of a technology to its implementation and beyond. This includes anticipating and addressing unintended consequences and existential risks promptly and effectively.

    Resolution receives positive reception

    The United States led the resolution, with support from over 120 other Member States. It was adopted by consensus, without objection.

    Many in the AI industry welcomed the resolution. Brad Smith, Microsoft’s Vice Chair and President, expressed full support, saying: “We fully support the @UN’s adoption of the comprehensive AI resolution. The consensus reached today marks a critical step towards establishing international guardrails for the ethical and sustainable development of AI, ensuring this technology serves the needs of everyone.”

    “The United States also welcomes the UN General Assembly’s adoption of a resolution setting out principles for the deployment and use of artificial intelligence (AI),” Vice President Harris said in a statement.

    China and Russia, along with over 120 member nations, co-sponsored the resolution. The UK, another co-sponsor, has already shown interest in AI regulation: building on its National AI Strategy and the Science and Technology Framework, it has adopted a pro-innovation approach, aiming to create a proportionate and future-proof regulatory framework.