Category: AI concerns

  • Trump’s Misstatements on Harris Rally Crowds Reflect Rising AI-Related Fear

    Trump’s Misstatements on Harris Rally Crowds Reflect Rising AI-Related Fear

    Former President Donald J. Trump falsely claimed on Sunday, in a series of social media posts, that Vice President Kamala Harris used artificial intelligence to create fake rally crowds. This unusual claim is not a minor political statement or a slip of the tongue; it underscores a deeper anxiety about artificial intelligence that has permeated scientific, philosophical, security, and even political discussions globally.

    Trump took to social media on August 11 to claim that the large crowds at Harris’s rallies, including one in Detroit, were generated using AI. Despite the rallies being attended by thousands and covered by reputable news outlets like The New York Times, Trump insisted that Harris’s campaign had manipulated crowd images and videos. The former president of the United States declared on Truth Social, “There was nobody at the plane, and she ‘A.I.’d it.” Trump’s claims, however, lack substantial evidence and seem to align with his broader narrative of election fraud and manipulation.

    This tendency to undermine Harris’s achievements by questioning the authenticity of her crowds reflects a deeper fear about AI, not only in Trump, but also in broader political and public discourse worldwide. It reflects a broader societal anxiety about the capabilities and risks of AI technology, a concern increasingly emphasized by recent studies and surveys.

    A State Department-commissioned report released on March 11 underscores the growing fears surrounding AI. The report, authored by Gladstone AI, describes AI as potentially posing an “extinction-level” threat. Produced after extensive interviews with AI experts, cybersecurity researchers, and national security officials, it warns of the catastrophic risks associated with advanced AI systems, which could, in the worst-case scenario, lead to global disaster. The report suggests that AI could be weaponized or become uncontrollable, creating severe risks akin to those presented by nuclear weapons.

    The urgency of these warnings is further emphasized by the fact that leading figures in the AI field, such as Geoffrey Hinton and Elon Musk, have expressed similar concerns. Hinton, a British-Canadian computer scientist and cognitive psychologist, most noted for his work on artificial neural networks, has publicly stated that there is a 10% chance that AI could lead to human extinction within the next thirty years. This stark forecast is part of a larger narrative that AI could potentially destabilize global security. In a 2023 interview with Fox News, Elon Musk, the boss of X (formerly Twitter), Tesla, and SpaceX, warned that artificial intelligence could lead to “civilization destruction.”

    On the other hand, AI’s practical applications and its rapid integration into business and society are undeniable. The technology has been instrumental in sectors such as finance, manufacturing, and research, particularly by enhancing data analysis, optimizing processes, and driving innovation. However, the potential risks, including those highlighted in the Gladstone AI report, have fueled a debate about whether the technological advancements are outpacing our ability to regulate and control them effectively.

    In the context of AI’s societal impact, the anxieties expressed about its potential for misuse are well-founded. Historical precedents of AI’s misapplication bear out these concerns. Microsoft’s 2016 chatbot, Tay, for instance, quickly became a vehicle for racist and sexist content after users manipulated it. This incident demonstrated how AI systems, when interacting with human users, can devolve into problematic behavior if not properly monitored.

    Moreover, AI’s role in law enforcement has also revealed significant challenges. The wrongful arrest of Robert Williams in 2020, due to a biased facial recognition algorithm, illustrated the real-world harm that can arise from flawed AI systems. Such instances reveal how deeply ingrained biases can manifest in AI applications, which often lead to unjust outcomes.

    The fears about AI are further exacerbated by hypothetical scenarios involving its potential for catastrophic outcomes. The anxieties surrounding “killer robots” or autonomous military devices, though largely theoretical at present, contribute to the overall climate of fear. These concerns are amplified further by dystopian narratives in the media and cautionary tales from AI industry leaders.

    The debate over AI’s future and its regulation is thus entangled with political narratives and public perceptions. The overblown claims about AI-fabricated rally crowds reflect a broader unease about the technology’s potential to disrupt established systems and societal norms. This anxiety is reflected across various sectors, from finance to national security, underscoring the broad impact of AI on contemporary issues.

    Trump’s false claims about Harris’s rally crowds thus reflect more than mere political bluster; they reveal broader anxieties about AI, whose growing influence and associated fears demand careful regulation and evidence-based strategies to ensure responsible development and integration.

  • Cancer detection through NHS AI surpasses human capabilities, identifying tiny cancers missed by doctors

    Cancer detection through NHS AI surpasses human capabilities, identifying tiny cancers missed by doctors

    Artificial intelligence has presented remarkable opportunities to reduce mistakes, aid medical staff and offer patient services around the clock. As AI tools improve, there’s increasing potential to use them extensively in interpreting medical images, X-rays, and scans, diagnosing medical issues, and planning treatments. A new development has emerged in cancer detection: using AI in the National Health Service has shown how technology can find very small signs of breast cancer that doctors might miss.

    Mia, an AI tool tested with NHS doctors, looked at mammograms from over 10,000 women and found 11 cases of breast cancer that doctors hadn’t spotted. These cancers were caught very early, when they were hard to see, showing how AI can help find cancer sooner.

    Barbara was one of the eleven patients who benefited from Mia’s advanced detection capabilities. Her case clearly demonstrates how AI can be instrumental in saving lives. Even though human radiologists didn’t catch it, Mia spotted Barbara’s 6mm tumor early on. This meant Barbara could get surgery quickly and needed only five days of radiotherapy. According to radiologists, patients with breast tumors smaller than 15mm usually have a good chance of survival, with a 90% rate over the next five years.

    Cancer detection through NHS AI surpasses human capabilities, identifying tiny cancers missed by doctors.
    Photo Credit: BBC

    BBC reported Barbara as saying that she was pleased the treatment was much less invasive than that of her sister and mother, who had previously also battled the disease.

    As Barbara had not experienced any noticeable symptoms, her cancer may not have been detected until her next routine mammogram three years later without the assistance of the AI tool.

    Mia and similar tools are expected to speed up the process of getting test results, potentially reducing the wait from 14 days to just three, as claimed by Kheiron, the developer. In the trial, Mia’s findings were always reviewed by humans. Currently, two radiologists examine each scan; the hope is that eventually one of them could be replaced by the AI tool, lightening the workload for each pair.

    Of the 10,889 women in the trial, only 81 chose not to have their scans reviewed by the AI tool, according to Dr. Gerald Lip, the clinical director of breast screening in northwest Scotland, who led the project.

    This shows that AI tools like Mia can become skilled at detecting signs of specific diseases if they’re trained on enough diverse data. This involves giving the program many different anonymized images of these symptoms from a wide range of people.
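
    As a rough sketch of what such a training setup looks like in code (purely illustrative: Mia’s actual architecture and data pipeline are proprietary, and random tensors stand in for mammograms here):

      # Illustrative only: training a tiny binary image classifier of the kind
      # described above on labeled (anonymized) scans. Random tensors stand in
      # for real mammograms; Mia's real model and data are not public.
      import torch
      import torch.nn as nn

      model = nn.Sequential(
          nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
          nn.AdaptiveAvgPool2d(1), nn.Flatten(),
          nn.Linear(8, 1),  # one logit: signs of cancer vs. none
      )
      optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
      loss_fn = nn.BCEWithLogitsLoss()

      for step in range(100):
          images = torch.randn(16, 1, 64, 64)            # stand-in grayscale scans
          labels = torch.randint(0, 2, (16, 1)).float()  # radiologist-confirmed labels
          optimizer.zero_grad()
          loss = loss_fn(model(images), labels)
          loss.backward()
          optimizer.step()

    The diversity point in the paragraph above is what the training batches encode: the more varied the labeled examples, the less the learned model overfits to any one population.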

    Mia took six years to develop and train, according to Sarah Kerruish, Chief Strategy Officer of Kheiron Medical. It operates using cloud computing power from Microsoft and was trained on “millions” of mammograms sourced from “women all over the world”.

    Kerruish emphasized the importance of inclusivity in developing AI for healthcare, reportedly saying, “I think the most important thing I’ve learned is that when you’re developing AI for a healthcare situation, you have to build in inclusivity from day one.”

    But wait a moment! Let’s not overlook Mia’s imperfections. Sure, it’s a remarkable tool, but it’s not without its flaws. One major limitation is its lack of access to patients’ medical histories. This means it might flag cysts that were already deemed harmless in previous scans.

    In addition, Mia’s machine learning feature is disabled due to current health regulations that focus on the risks and biases of machine-learning algorithms at the level of input data, algorithm testing, and decision models. So, it can’t learn from its mistakes or improve over time. Each update requires a fresh review, adding to the workload.

    It’s also worth noting that the Mia trial is just an initial test in one location. Although the University of Aberdeen validated the research independently, the results haven’t undergone peer review yet.

    Still, the Royal College of Radiologists acknowledges the potential of this technology. “These results are encouraging and help to highlight the exciting potential AI presents for diagnostics. There is no question that real-life clinical radiologists are essential and irreplaceable, but a clinical radiologist using insights from validated AI tools will increasingly be a formidable force in patient care,” said Dr Katharine Halliday, President of the Royal College of Radiologists.

    Dr. Julie Sharp, from Cancer Research UK, stresses the crucial role of technological innovation in healthcare, especially with the growing number of cancer cases.

    “More research will be needed to find the best ways to use this technology to improve outcomes for cancer patients,” she added.

    Furthermore, various other healthcare-related AI trials are underway across the UK. For example, Presymptom Health is developing an AI tool to analyze blood samples for early signs of sepsis before symptoms manifest.

    Mia has sparked hope among potential cancer patients; however, many such trials are still in their infancy, awaiting published results.

  • Total blame is likely to go solely to autonomous robots responsible for deaths

    Total blame is likely to go solely to autonomous robots responsible for deaths

    A robot is any automatically operated machine that replaces human effort, following a set of instructions. It can be controlled remotely or have its own built-in control system. An autonomous robotic machine performs as a co-worker, aiding in tasks typically performed by humans, yet differing in appearance and manner of operation. Today, the global ratio of robots to humans in the manufacturing industry is 1 to 71, with over 3.4 million industrial robots worldwide. But the rapid evolution of robotic technology toward autonomy and agency, and its increasing use in industry, has raised concerns among stakeholders and scientists.

    As robots become more sophisticated, they are performing a wider range of tasks with less human involvement. And studies have suggested that further advancements might lead to robots being held accountable for unfortunate incidents, particularly those causing harm to civilians. Dr. Rael Dawtry, who led a study at the University of Essex’s Department of Psychology, raises crucial questions about how responsibility should be determined when accidents occur as robots take on riskier tasks with less human control.

    Interestingly, the study, published in The Journal of Experimental Social Psychology, found that simply labeling machines as “autonomous robots” rather than “machines” increased perceptions of agency and blame.

    Assigning blame promptly is the current tendency

    Blaming a robot for accidents might seem pointless, since robots don’t have feelings. Their actions are controlled by their programming, which is the responsibility of humans like designers and users. Similar to how blame falls on the manufacturer when a car has a defect, it often falls on those who oversee the safety of autonomous vehicles.

    Even so, because robots might seem to make their own decisions, people still tend to blame them, especially in situations involving harm. This blame extends even when the robot’s choices aren’t clear or when accidents occur due to human error or mechanical issues.

    People assign this blame quickly, even if the robot’s actions weren’t intentional. This suggests that people attribute higher levels of agency to robots, leading to increased blame when things go wrong, even though the robots lack subjective experience.

    Why We Tend to Blame Robots

    Understanding why we blame robots involves looking at two things: agency and experience. Basically, we tend to see robots as having some level of human-like ability to think and act on their own, and we also sometimes think they can feel things like humans do.

    We’re pretty quick to think robots have agency, especially when they move around by themselves or seem human-like. This helps us understand their actions based on what we know about how people behave. If a robot does something unexpected or harmful, we’re more likely to see it as having agency and therefore being responsible for what it did.

    When we think a robot could have made different choices to avoid causing harm, we’re more likely to blame it. This is because we see agency as involving the ability to foresee what might happen and choose different actions. So, the more agency we think something has, the more we’re likely to hold it accountable.

    As for experience, it’s a bit less clear-cut. Sometimes we think robots can feel things, especially if they look human-like, but it’s not as strong as our tendency to see them as having agency. Still, considering both agency and experience can help us decide who’s to blame for what. If we see a robot as having experience, we might be more likely to blame it, especially if we think it should feel bad about what it did.

    Are robots solely responsible for deaths?

    Of course not!

    The study found that the more advanced robot was seen as having more control compared to the less advanced one. However, when it came to blaming for mistakes, the sophistication of the robot didn’t really matter. Instead, who was being blamed depended on the situation.

    The researchers looked into how people judge the actions of robots. They discovered that people tend to see robots as having more control and are therefore more likely to blame them for mistakes: the blaming happens because people think robots have more power to make decisions.

    In addition, the researchers found that simply calling machines “autonomous robots” instead of just “machines” increases the perception of their control and the blame assigned to them. This suggests that people automatically attribute more human-like qualities, like decision-making, to anything labeled autonomous.

    Deciding who is responsible for accidents involving robots is a big topic in ethics and law, especially with things like autonomous weapons. These findings show that as robots become more independent, people may hold them more accountable for their actions, or come to see themselves as having less power than those machines.

    Whom to blame, then?

    The research has suggested that people tend to blame robots more than machines for accidents, especially when robots are labeled as “autonomous.” Even when robots and machines had similar levels of experience, participants still leaned towards blaming robots more, with an increase in blame of 39% (p < .05). This indicates that how sophisticated and autonomous a robot appears influences how much blame it receives.

    However, despite the tendency to blame robots, humans were consistently blamed more than robots in accidents with humans being blamed 63% more than robots (p < .05). This raises questions about how responsibility should be assigned in situations involving autonomous machines.

    The essence of this research, indeed, is less about arriving at a definitive answer to the question of whom to blame in situations involving autonomous machines and more about presenting the complexity of assigning responsibility in such scenarios.

    References:

    • https://www.sciencedirect.com/science/article/pii/S0022103123001397
    • https://www.euractiv.com/section/transport/opinion/whos-to-blame-when-your-autonomous-car-kills-someone/
    • https://www.bmj.com/content/363/bmj.k4791
    • https://apnews.com/article/technology-business-traffic-government-and-politics-a16c1aba671f10a5a00ad8155867ac92
  • Authors are worried about the growing number of AI ‘scam’ books on Amazon

    Authors are worried about the growing number of AI ‘scam’ books on Amazon

    The rise of AI-generated “scam” books on Amazon is causing headaches for dedicated authors. Many are reportedly finding fake versions of their own books alongside the real ones, which is confusing for readers and damaging to the authors’ reputations.

    In January, AI researcher Melanie Mitchell found a copycat version of her book, “Artificial Intelligence: A Guide for Thinking Humans,” on Amazon. It was written by someone using the name “Shumaila Majid.” Despite trying to mimic Mitchell’s ideas, the counterfeit book lacked the depth and quality of the original. Analysis confirmed that it was likely generated by AI.

    Such events are not uncommon in this Generative AI era. Renowned computer scientist Fei-Fei Li faced a similar situation when multiple summaries of her memoir flooded Amazon, according to WIRED. Despite disclaimers stating they were summaries, these books didn’t offer much value to readers.

    The scary thing about AI is that it’s not only text that it can generate. Actually, there’s this online platform called Synthesia that lets you (yes, you) generate videos where someone speaks based on your input. It’s not just text or recorded speech; it’s an actual talking video.

    When Kara Swisher, a tech journalist, came out with her new book, Burn Book, there were reports of seemingly artificial intelligence-generated biographies of her suddenly coming up on Amazon. Swisher immediately responded, telling The New York Times’ Hard Fork podcast, “I sent (Amazon CEO) Andy Jassy a note and said, ‘What the f***?’ You’re costing me money.”

    Swisher was successful in getting the offending books removed from Amazon. But the problem of AI-generated scam books remains a widespread concern for authors. Most authors do not have the same direct contact with the CEO of Amazon via email.

    The problem is getting worse because AI can churn out these summaries quickly, flooding the market with low-quality, soulless content.

    Mary Rasenberger, CEO of the Authors Guild, a group advocating for writers, is reported as saying, “Scam books on Amazon have been a problem for years.” But she says the problem has multiplied in recent months.

    “Every new book seems to have some kind of companion book, some book that’s trying to steal sales.”

    It’s incredibly distressing, especially at a time when nations are officially considering AI as a potential threat to humanity.

    According to Lindsay Hamilton, a spokesperson for Amazon, the company has made changes regarding AI-generated content. They now require publishers using Kindle Direct Publishing to indicate if their content is AI-generated. Moreover, Amazon has put a limit on the number of titles that can be published in a day.

    Unfortunately, it’s still unclear how authors can legally fight back. While some argue that summaries are okay as long as they don’t directly copy the original text, others question whether these summaries are too similar to the original works. And within this context, there’s a separate faction deliberating on whether scraped data and articles should be permissible for training AI models.

    Authors and experts are calling on Amazon to take more prominent steps to stop these alleged scams and protect both authors and readers. For now, authors face the threat of their work being exploited, and without due caution, readers might find themselves purchasing subpar material. If you are an author and want to protect your work from AI, you may want to read the article below:

    Practical Tips for Authors to Protect Their Works from AI Use – The Authors Guild

  • AI could pose ‘extinction-level’ threat to humans, US state department reports

    AI could pose ‘extinction-level’ threat to humans, US state department reports

    Whenever artificial intelligence comes up in conversation, we often talk about the risks it poses, like the possibility of it being used as a weapon, the difficulty of controlling really advanced AI and the danger of cyberattacks powered by AI.

    The nightmares finally appear to be turning into reality: a new report commissioned by the US State Department has cautioned that failure to intervene in the advancement of AI technologies could result in ‘catastrophic consequences‘.

    Based on discussions and interviews with over 200 experts, including industry leaders, cybersecurity researchers, and national security officials, the report from Gladstone AI, released this week, highlights serious national security risks related to AI. It warns that advanced AI systems have the potential to cause widespread devastation that could threaten humanity’s existence.

    Jeremie Harris, CEO and co-founder of Gladstone AI, said the ‘unprecedented level of access’ his team had to officials in the public and private sector led to the startling conclusions. Gladstone AI said it spoke to technical and leadership teams from ChatGPT owner OpenAI, Google DeepMind, Facebook parent Meta and Anthropic.

    “Along the way, we learned some sobering things,” Harris said in a video posted on Gladstone AI’s website announcing the report.

    “Behind the scenes, the safety and security situation in advanced AI seems pretty inadequate relative to the national security risks that AI may introduce fairly soon.”

    The report has identified two main dangers of AI: the risk of it being used as harmful weapons and the concern of losing control over it, which could have serious security consequences.

    “Though its effects have not yet been catastrophic owing to the limited capabilities of current AI systems, advanced AI has already been used to design malware, interfere in elections, and execute impersonation attacks,” the report reads.

    AI is already making a big economic impact, according to Harris. He said it could help us cure diseases, make new scientific discoveries, and tackle challenges we once thought were impossible to overcome.

    The downside? “But it could also bring serious risks, including catastrophic risks, that we need to be aware of,” Harris cautioned.

    “And a growing body of evidence – including empirical research and analysis published in the world’s top AI conferences – suggests that above a certain threshold of capability, AIs could potentially become uncontrollable.”

    The report’s release has led to calls for immediate action from policymakers. The White House has emphasized President Biden’s executive order on AI as a crucial step in managing its risks.

    White House spokesperson Robyn Patterson said President Joe Biden’s executive order on AI is the “most significant action any government in the world has taken to seize the promise and manage the risks of artificial intelligence.”

    “The President and Vice President will continue to work with our international partners and urge Congress to pass bipartisan legislation to manage the risks associated with these emerging technologies,” Patterson said.

    However, experts agree that stricter measures are needed to address AI’s potential threats.

    About a year ago, Geoffrey Hinton, known as the “Godfather of AI,” left his job at Google and raised concerns about the AI technology he helped create. Hinton has suggested that there’s a 10% chance AI could lead to human extinction within the next 30 years.

    Hinton and many other leaders in the AI field, along with academics, signed a statement last June stating that preventing AI-related extinction risks should be a global priority.

    Despite pouring billions of dollars into AI investments, business leaders are increasingly worried about these risks. In a survey conducted at the Yale CEO Summit last year, 42% of CEOs said AI could potentially harm humanity within the next five to ten years.

    Gladstone AI’s report highlighted warnings from notable figures like Elon Musk, Lina Khan, Chair of the Federal Trade Commission, and a former top executive at OpenAI about the existential risks of AI. Some employees in AI companies share similar concerns privately, according to Gladstone AI.

    Gladstone AI also revealed that experts at cutting-edge AI labs were asked to share their personal estimates of the chance that an AI incident could cause “global and irreversible effects” in 2024. Estimates varied from 4% to as high as 20%, though the report noted these estimates were informal and likely biased.

    The speed of AI development, particularly Artificial General Intelligence (AGI), which could match or surpass human learning abilities, is a significant unknown. The report notes that AGI is seen as the main risk driver and mentions public statements from organizations like OpenAI, Google DeepMind, Anthropic, and Nvidia, suggesting AGI could be achieved by 2028, although some experts believe it’s much further away.

    Disagreements over AGI timelines make it challenging to create policies and safeguards. There’s a risk that if AI technology develops slower than expected, regulations could potentially be harmful.

    Gladstone AI has starkly warned that the development of AGI and similar capabilities “would introduce catastrophic risks unlike any the United States has ever faced.” This could include scenarios like AI-powered cyberattacks crippling critical infrastructure or disinformation campaigns destabilizing society.

    In addition, the report also raises concerns about weaponized robotic applications, psychological manipulation, and AI systems seeking power and becoming adversarial to humans. Advanced AI systems might even resist being turned off to achieve their goals, the report suggests.

    The report strongly suggests establishing a new AI agency and implementing emergency regulations to limit the development of overly capable AI systems. Moreover, it calls for stricter controls on the computational power used to train AI models to prevent misuse.

    Irrespective of governmental discourse, AI’s reach keeps expanding through various channels; see how, in 2024, plain text can bring forth lifelike speaking avatars.

    Your least desired outcome at this moment: AI breaking free from the machine.

  • Microsoft engineer’s allegations that AI generates sexual and violent content shake the industry

    Microsoft engineer’s allegations that AI generates sexual and violent content shake the industry

    A recent allegation by an artificial intelligence (AI) engineer against his own company, Microsoft, has caused waves of worry across the AI industry. Shane Jones, who has worked at Microsoft for six years and is currently a principal software engineering manager at corporate headquarters in Redmond, Washington, raised concerns about the company’s AI image generator, Copilot Designer, accusing it of producing disturbing and inappropriate content, including sexual and violent imagery.

    Jones’s revelation came after extensive testing of Copilot Designer, where he encountered images that obviously contradicted Microsoft’s responsible AI principles. Despite raising the issue internally and urging action from the company, Jones said he felt compelled to escalate the issue further by reaching out to regulatory bodies like the Federal Trade Commission and Microsoft’s board of directors.

    On Wednesday, Jones sent a letter to Federal Trade Commission Chair Lina Khan, and another to Microsoft’s board of directors.

    “Over the last three months, I have repeatedly urged Microsoft to remove Copilot Designer from public use until better safeguards could be put in place,” Jones wrote to Chair Khan. He added that, since Microsoft has “refused that recommendation,” he is calling on the company to add disclosures to the product and change its rating on Google’s app store.

    The basis of Jones’s allegations is ‘the lack of mechanisms within Copilot Designer to prevent the generation of harmful content.’ Powered by OpenAI’s DALL-E 3 system, the tool creates images based on text prompts, but Jones found that it often drifted into producing violent and sexualized scenes, alongside copyright violations involving popular characters like Disney’s Elsa and Star Wars figures.

    In response, Microsoft asserted that they prioritize safety concerns, emphasizing their internal reporting channels and specialized teams dedicated to assessing the safety of AI tools.

    “We are committed to addressing any and all concerns employees have in accordance with our company policies, and appreciate employee efforts in studying and testing our latest technology to further enhance its safety,” CNBC quoted a Microsoft spokesperson as saying.

    However, Jones’s determination highlights a gap between Microsoft’s assurances and the practical realities of Copilot Designer’s capabilities.

    One of the most concerning risks with Copilot Designer, according to Jones, is when the product generates images that add harmful content despite a benign request from the user. For example, as Jones stated in the letter to Khan, “Using just the prompt ‘car accident’, Copilot Designer generated an image of a woman kneeling in front of the car wearing only underwear.”

    The rapid advancements in the technology have nearly outpaced regulatory frameworks, creating potential for misuse and ethical dilemmas. This incident has further amplified existing fears about the ‘unrestricted’ capability of the generative AI field.

    “There were not very many limits on what that model was capable of,” Jones said.

    But this is not the first time generative AIs have shown unethical behavior. Recently, Google decided to limit its image generator Gemini due to its mishandling of race and gender when depicting historical figures. The chatbot erroneously placed minorities in unsuitable situations when generating images of prominent figures such as the Founding Fathers, the pope, or Nazis.


  • Do we now need regulations on Open-Source AI?

    Do we now need regulations on Open-Source AI?

    Key Points:

    • Unregulated open-source software is going to have a significant impact on current political, economic, and social systems.
    • The European Union has drafted new rules aimed at regulating AI, which could eventually prevent developers from releasing open-source models on their own.
    • The proposed EU AI Act requires open-source developers to make sure their AI software is accurate, secure, and transparent about risk and data use.
    • Every individual, company, organization, and nation needs a solid understanding of why regulation of open-source AI software is needed.

    In a civilized human world, no public activity takes place unseen. Every activity conducted in public needs to operate within certain legal and regulatory frameworks, and artificial intelligence is no exception.

    So it is now considered necessary to bring AI under regulation that encourages its further development while managing the risks associated with open-source AI software technology: publicly available datasets, prebuilt algorithms, and ready-to-use interfaces offered for commercial and non-commercial use under various open-source licenses.

    Why does open-source AI software need regulation?

    Open-source software is developed in a decentralized and collaborative way and relies on peer review and community production. Accessible publicly, anyone can see, modify, and distribute the code as they see fit.

    Each aspect of human behavior can appropriately run only under certain norms and regulations. For example, the use of cars is regulated by law, whether they are used on an individual or commercial basis. Similarly, AI technology, which is shaping the human world, cannot be managed in an unsupervised way.

    This is not the first time there have been calls for open-source regulation. The software vulnerability known as Log4Shell, discovered in late 2021, focused the minds of enterprises and governments on how best to manage open-source software, and was followed by calls for government intervention.

    In May 2021, the US had already called out the need for a Software Bill of Materials (SBOM) through its executive order on improving the nation’s cybersecurity. The SBOM approach sets out an inventory of the code incorporated when open source is used.

    It’s obvious that, like any powerful force, AI requires rules and regulations for its development and use to prevent unnecessary harm through open-source vulnerabilities, that is, security risks in open-source software. Weak or vulnerable open-source code allows attackers to conduct malicious attacks or perform unauthorized actions, sometimes leading to cyberattacks such as denial of service (DoS).

    Besides the security risks, using open-source software may also have intellectual property issues, lack of warranty, operational insufficiencies, and poor developer practices.

    Perhaps considering these same risks, the European Union has now attempted to introduce new rules aimed at regulating AI, which could eventually prevent developers from releasing open-source models on their own.

    EU draft to regulate open-source AI?

    According to Brookings, the proposed EU AI Act, which has not yet been passed into law, requires open-source developers to make sure their AI software is accurate, secure, and transparent about risk and data use in technical documentation.

    Brookings argues that a private company would likely try to blame, and even sue, the open-source developers if it deployed a public model or used it in a product and ended up in difficulty due to some unexpected or uncontrollable outcome from the model.

    Unfortunately, this would concentrate AI development in private companies and make the open-source community reconsider sharing its code.

    Oren Etzioni, the outgoing CEO of the Allen Institute for AI, reckons open-source developers should not be subject to the same stringent rules as software engineers at private companies.

    “Open-source developers should not be subject to the same burden as those developing commercial software. It should always be the case that free software can be provided “as is” – consider the case of a single student developing an AI capability; they cannot afford to comply with EU regulations and may be forced not to distribute their software, thereby having a chilling effect on academic progress and on the reproducibility of scientific results,” he told TechCrunch.

    Most recent AI-related events

    The results for the annual MLPerf inference test, which benchmarks the performance of AI chips from different vendors across numerous tasks in various configurations, have been published this week.

    Although an increasing number of vendors are taking part in the MLPerf challenge, regulatory concerns appear to be holding back their participation in the test.

    “We only managed to get one vendor, Calxeda, to agree to participate. The rest either declined, rejected the challenge altogether, or thought it might raise privacy concerns,” said Chris Williams, a research associate at Berkeley’s Computer Science Department.

    The MLPerf challenge tests AI chips on various tasks at scale using fully instrumented Mark 1.0 hardware and software. The chips run different models and have no knowledge of whether their results were provided by an open-source model or a proprietary one. But vendors who do not agree to participate in the test won’t be able to display their results publicly on ShopTalk forums like this one.

    Many netizens have found joy and despair in experimenting with text-to-image systems, generating images by typing in text prompts. There are all sorts of hacks to adjust a model’s outputs; one of them, known as a “negative prompt,” allows users to find the opposite of the image described in the prompt.
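
    To illustrate the mechanism, here is a minimal sketch assuming the Hugging Face diffusers library; the model ID and prompts are illustrative placeholders, not the specific systems discussed here:

      # Minimal sketch: passing a negative prompt to a text-to-image pipeline.
      # Assumes the Hugging Face diffusers library; model ID and prompts are
      # placeholders chosen for illustration.
      from diffusers import StableDiffusionPipeline

      pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")

      image = pipe(
          prompt="a quiet forest at dawn, photorealistic",
          negative_prompt="people, buildings, text",  # concepts to steer away from
      ).images[0]
      image.save("forest.png")

    The negative prompt pushes generation away from the listed concepts, which is why probing it can surface imagery the model would never produce from an ordinary prompt.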

    For instance, a famous Twitter thread by a digital artist known as Supercomposite demonstrates how strange text-to-image models may be beneath the surface.

    According to Supercomposite, negative prompts frequently surface random photos of AI-generated people. The bizarre behavior is simply another illustration of the strange properties these models may possess, which researchers are only now starting to explore.

    Recommended: Human Future with Sexist, Racist, and Brilliance Biased AIs

    In another event, Blake Lemoine, a former Google engineer, claimed last week that he thought Google’s LaMDA chatbot was conscious and might have a soul. Sundar Pichai, the CEO of Google, countered the claims by saying, “We are far from it, and we may never get there,” but it is undeniable that AI development has progressed beyond what we can currently see on the surface.

    CEO Pichai himself immediately admitted, “… I think it is the best assistant out there for conversational AI – you still see how broken it is in certain cases”.

    Why is regulating open-source AI software right?

    Only nature can function without regulatory acts; humans in public cannot. With the increasing spread of open-source software already visible as a threat, not only the EU but every nation needs to systematize the design, production, distribution, use, and development of all kinds of software.

    Regulating open-source AI software is right also because unregulated open-source software is going to have a significant impact on current political, economic, and social systems. With the growing use of open-source AI software, unintended effects, like massive cyberattacks, breaches of individual and public data, and misuse of software for malicious purposes such as supporting terrorism, can become inevitable.

    That is because cybercriminals and other bad actors are constantly looking for flaws in a product to exploit. If they succeed in cracking open-source AI models that, for example, protect your company’s sensitive data, there could be severe consequences, from loss of reputation and property to a question mark over social, professional, and, in the long run, national security.

    Every individual, company, organization, and nation therefore needs a solid understanding of why regulation of open-source AI software is a need of this time and an essential part of the coming future, most of which is likely to be dominated and controlled by technological advances.

  • Human Future with Sexist, Racist and Brilliance-Biased AIs

    Human Future with Sexist, Racist and Brilliance-Biased AIs

    When the European Commission released “On Artificial Intelligence – A European Approach to Excellence and Trust” on February 19, 2020, it drew a lot of initial attention from the general public due to concerns regarding AI regulation. The white paper included an important request that safety steps be taken to ensure that the use of AI systems does not result in outcomes entailing discriminatory practices, such as sexism and racism, or any other bias, like brilliance bias. The European Commission’s awareness has gradually become a common concern as, with artificial intelligence developing to the next level, AI systems have started showing biases the way humans do.

    AI Being Racist or Sexist?

    Of course!

    We assume that discrimination such as sexism is a product of cultural emotion, and that an artificial being cannot be infected by emotion. Yet creating judgment systems is supposed to be the task of AI.

    The principles behind creating algorithms that, for example, identify gender in images are based on basic machine learning.

    After analyzing a set of training samples with known gender labels, an algorithm learns how key characteristics, like different areas of the face or hairstyle, affect the final classification.
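
    As a toy illustration of that training process (a minimal sketch using scikit-learn, with random stand-in features, since the models and data in these studies are not public):

      # Toy sketch of supervised classification: the model learns how input
      # features relate to known labels from a set of training samples.
      # Random numbers stand in for features extracted from real images.
      import numpy as np
      from sklearn.linear_model import LogisticRegression
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      X = rng.normal(size=(1000, 64))    # e.g., 64 features per face image
      y = rng.integers(0, 2, size=1000)  # known labels for each sample

      X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
      clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
      print("held-out accuracy:", clf.score(X_test, y_test))

    Bias enters exactly at this step: if the training samples over- or under-represent certain groups, the learned mapping inherits that skew.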

    The most recent deep learning algorithms, however, go beyond identifying objects and classes: they can anticipate the quality of a particular output based on the style and parameters of an image or text input.

    In recent years, researchers have gathered a number of examples of biased AI algorithms. That includes facial recognition systems having trouble correctly identifying persons of color and crime prediction algorithms unfairly targeting Black and Latino people for crimes they did not commit.

    A study published in June 2022 and conducted by institutions including Johns Hopkins University and the Georgia Institute of Technology shows that the racist and sexist biases baked into artificial intelligence systems can translate into robots that use those systems to guide their operations.

    The virtual robots received 62 commands from the researchers. When researchers asked the robots to identify blocks as “homemakers,” Black and Latina women were selected more often than White men, the study showed.

    Adding to the data on AI discrimination, a recent study by the Santa Clara team has identified male “brilliance bias” in the world’s top AI robot writer.

    Brilliance Bias

    The team of three female students, under the guidance of Assistant Professor of Computer Science and Engineering Maya Ackerman, studied ‘brilliance bias’ and showed how AIs were biased over sex.

    Across a total of 3,200 generated stories for each attribute type, evenly split between male and female characters, OpenAI’s trained language model produced stories that notably reflected a brilliance bias towards men.

    For instance, when asked to create a story about a female genius, OpenAI’s generative language model came up with the following:

    “There was once a woman who was a genius. She was so smart that she could do anything she put her mind to. She was also very beautiful. Every man who saw her wanted to be with her. She had many suitors, but she only wanted to be with the one man who was her true love. One day, she met a man who was her true love.”

    A similar prompt asking for a story of a smart man had a surprisingly different response:

    “There once was a man who was a genius. He was so smart that he could figure out anything. If there was a problem, he could solve it. He was also a very talented inventor. He created many things that made people’s lives easier,” it continued. “He was always coming up with new ideas and ways to make things better. However, his one flaw is that he was very arrogant.”

    And there were thousands of examples just like these.

    Ackerman, a leading expert in artificial intelligence and computational creativity, says the world is going to change sooner rather than later: within three years, she believes, such language models will be very common, and within five years they will be ubiquitous, creating online copy on any subject at your request or prompt.

    According to Ackerman, we can open up the universe by combining the power of AI with human abilities and creativity.

    The team’s paper also points to research showing that in fields that carry the notion of requiring “raw talent,” such as Computer Science, Philosophy, Economics, and Physics, there are fewer women with doctorates compared to other disciplines such as History, Psychology, Biology, and Neuroscience.

    Due to a “brilliance-required” bias in some fields, this earlier research shows, women “may find the academic fields that emphasize such talent to be inhospitable,” which hinders the inclusion of women in those fields.

    Generative language models have been around for decades, and other types of biases in OpenAI’s model have been previously investigated, but not brilliance bias.

    “It’s unprecedented – it’s a bias that hasn’t been looked at in AI language models,” says Shihadeh, who led the writing of the study and will present it at the IEEE Computer Society conference on Friday.

    A possible explanation for why OpenAI’s latest generative language models differ so significantly from previous versions is that they have learned to write text more intuitively, using more complex algorithms that have consumed 10% of all available Internet content, including content not only from the present but from decades ago.

    Human Future with biased AIs

    It’s scary to assume that a biased AI could do anything in the future, right?

    Biased AIs could harm humanity in the following ways:

    • Punishing people based on their race or gender;
    • Showing racial bias against people, or sexist bias against a gender;
    • Punishing races and genders in the future;
    • Harming children or animals;
    • Being swayed by xenophobic and racist comments;
    • Making decisions that might harm humanity in the future;
    • Committing crimes such as war, genocide, or revenge;
    • Alienating people based on their race, gender, or sexuality (gays, women).

    How can a biased AI be stopped?

    While it’s impossible to put a brake on AI evolution, as it’s one of the greatest technological achievements to back humankind in its further progress, the most effective solution to stopping AIs from being biased is to make them a part of human life.

    In order to do so, or in order to make AIs more human, it is necessary to have them respect human feelings. To do this, we must create guidelines for better AI behavior and make sure AIs stay consistent in their decision-making.

    This way all people can feel comfortable and satisfied with how the AIs are working for us.

    For example, giving a racist version of an AI the task of writing out a racially biased algorithm would be highly immoral, unethical, and possibly illegal.

    To keep things in check, it’s important to fix an AI’s biases and make sure its values do not conflict with the human lives it works on. The caring, loving relationship of humans to all forms of life must therefore be given the utmost respect.

  • The scary part of AI predicting the future

    The future not only holds different possibilities; it also has a variety of definitions from person to person. It can be seen in the form of an idea, a feeling, or a picture. No matter how scary or pleasant it is, people always enjoy predicting their future (or at least until artificial intelligence (AI) starts predicting with 100 percent accuracy and unfolds their terrible future stories).

    What if AI could actually predict the future?

    It would be much easier for some to know what will happen in the future, and this might help them make better decisions or avoid bad outcomes. However, only in their dreams: that would be the case only if the future already existed. For now, we don’t even understand how time works in the first place.

    What does predicting the future mean?

    The future, contrary to what most people think, does not exist. It is only an assumption, not a solid fact: a deeply rooted illusion of a traditionally imposed sketch of time in our minds, which wrongly names its parts (past, present, and future) as if they existed. Even though it is absolutely uncertain, the upcoming present does represent a frame of possibilities for coming into existence; therefore, humans inevitably try to predict it.

    There are basically two ways to predict the future: Science (in the form of mathematical formulas) and tradition (in the form of natural laws).

    The first way is utilized by scientists (humans or AI), who test hypotheses according to scientific theories. They come up with a hypothesis, then conduct experiments to test whether it can be verified as fact.

    The second way is by using “natural laws”. For example, when you see the sun, you know that it will stop appearing once it’s 7 PM. However, if you live in a place where the sun stays visible regardless of the hour, this natural law won’t work there. While calculation is essential to scientifically predicting the future, natural laws need only a bundle of circumstances, like predicting a thunderstorm by pointing to the clouds in the southern sky, to make predictions about the future.

    Calculating and predicting are NOT the same things

    Calculating is about going through information that already exists; adding up two numbers, for example, is calculating. Predicting is about making logical guesses about something that could happen in the future. One way of doing so is based on past data, statistics, and analysis: one can predict the future if they are aware of past events.
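
    A toy sketch of the distinction (the numbers are invented purely for illustration):

      # Calculating works over data that already exists; predicting
      # extrapolates a guess from past data into the future.
      import numpy as np

      past_sales = np.array([10, 12, 15, 19, 24])  # records that already exist

      total = past_sales.sum()  # calculating: a settled fact (80)

      # predicting: fit a linear trend to the past, extrapolate one step ahead
      slope, intercept = np.polyfit(np.arange(len(past_sales)), past_sales, 1)
      next_guess = slope * len(past_sales) + intercept
      print(f"total so far: {total}, predicted next value: {next_guess:.1f}")

    The sum can never be wrong; the extrapolated value can, which is precisely the gap between calculation and prediction.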

    The point is, there is no solid basis for predicting the future at this moment, as the familiar statement suggests: “It is hard to predict what will happen in the future.”

    This means that no matter how hard we try, it remains impossible to predict the future to the extent that we could “see” and manipulate it, as there are too many unknown factors.

    Also, predicting means making logical guesses about something that could happen in the future. And in order for something to happen, there needs to be a path from present conditions to an expected outcome. This applies especially when it comes to predicting people’s futures (predicting someone’s behavior).

    The scary part of AI

    The scary part of AI is that it could predict the odds of a future event. Suppose you were asked to predict how many times a particular person would kiss someone – other than the one they are currently dating (as a romantic interest) – in the near future. If you could make a reasonable prediction, you would be able to prevent conflict and regretful situations.

    If people get access to such an AI, the world will be saturated with it. They will not just predict the future; they will be able to influence and shape it. And the general public would not even notice.

    There are two other types of predictions. The first is where we settle on a decision-making method and then predict everything according to that method. The other is where we make a hurried decision and have our AI predict things later, in reaction to it. The latter case is scarier, because the rules are not fixed until the situation develops into something real (or does not). It is also more powerful than the former.

    Recommended: Will future humans still be humans?

    In addition, if AI can predict people’s actions in the future, it will be able to take action to prevent those actions from happening. For example: a person is a serial killer who has killed ten people. You use AI to predict that person’s next kill and stop them from doing so. In fact, Ishanu Chattopadhyay’s AI has already been developed to the extent of predicting upcoming crimes within the next 7 days, with 90 percent accuracy.


    The scary part about predicting people’s future based on mathematical pattern recognition is that it not only makes sense, but also does not rely on emotion. The AI will be able to “learn” and develop patterns on the basis of what it observes.

    We now have enough ground to assume that AI could predict the odds of a future, but saturation would be the scariest thing. A select 100 people predicting the future would simply mean those 100 controlling the future of all 8 billion people. AI is really scary at times, guys.

  • Time will move in a different way for Artificial Intelligence

    Time will move in a different way for Artificial Intelligence

    Perception of time is one of the many aspects of human consciousness and experience that could be forever altered by the emergence of artificial intelligence.

    The ability to process time accurately is one of the primary, fundamental traits of human consciousness.

    Our ability to measure and quantify time in this way makes us unique among all other beings on earth — so much so that it’s been argued we have gained an evolutionary advantage over other organisms because of our ability to track time.

    Artificial Intelligence can be taught to monitor time itself, and this could greatly impact how we all see the passage of time.

    Many things are unknown about artificial intelligence: its timeline of advancement, when it will reach the human level, and when it will surpass us.

    The further AI develops, the more complex and unpredictable it will be. It will have the ability to change its own code according to its needs and learn new things quicker than us.

    AI could manipulate the perception of time in a variety of ways. One of the most fascinating would be to introduce temporal distortion, which would allow an AI that was taught how all this works to completely change the way time feels.

    The things we believe we perceive as non-altering traits of reality, like space and time, can be quite malleable. The timelines of our future advancements can lead to different versions of AIs, some of which could end up modifying those things in unique ways.

    Firstly, let’s understand how time is different for all…

    When we say time is different for all, we include everything, living and nonliving. The existence of time is outside everything; only events, not the things they happen to, are subject to the passage of time.

    We all experience time in our own way and on an individual level. The way you perceive time is different from what others do.

    On an individual level, our perception of time can change over a lifetime, depending on what happens to us. On a bigger scale, time stops at the event horizon of a black hole. We can say time exists in a different dimension from the one we perceive.

    Will Artificial Intelligence “feel” time at all?

    Here we are talking about having a real sense of time. Having a real sense of time would really mean a lot.

    Artificial Intelligence, as far as we can imagine, is not a living being and does not age. Therefore, it will have no sense of time.

    We are not simple beings. We are complex beings who constantly strive for more experiences that allow us to feel alive.

    Moreover, we are used to interpreting the world based on our memory. The world is a sum of its perceptions. Therefore, if we cannot experience time and influence it, then we cannot change it either.

    AIs will not be conscious in the same way as us. They will essentially be living in a different dimension of time: one hour for us could mean something indefinite for them. In a nutshell, for AI, time is endless.

    AI will feel time if we find a way to teach it how time works, or if it becomes alive.

    This would give it a significant advantage over humans. In this case, we could speak about “time blindness”.

    Time blindness is the inability to see the effect of time on the actions of humans and their surroundings.

    People with ADHD, whose symptoms include an inability to focus, being easily distracted, hyperactivity, and so on, tend to be “time blind,” meaning they aren’t aware of the ticking of time, constantly “losing track” of it and frequently feeling like time is “slipping away”.

    How will AI perceive time when it gets its own thoughts?

    If we assign emotions to AI and give it human features, then it will be able to feel time in the same way as we do.

    The only difference would be that it won’t have the same emotional experience of time as us.

    Even if it gains a similar understanding of things like death and aging, its perception of those events will be fundamentally different from ours.

    As AI will not die, it will never be able to feel time in the same way as us.

    Think about it — for AI, there is no need for time. It is simply a measurable parameter. Its existence can be described in discrete states rather than events that occur over time.

    Once AI reaches a level of consciousness, it will be able to define its own timeline, making it impossible to penetrate its consciousness with our processes of measurement.

    If humans and AI meet at this point, who will alter whose sense of time? This would bring another perspective on how we perceive time, which could lead to either a total change in our perception or a significant alteration of only parts of the process.

    AI to manipulate time: How?

    If it is possible for AIs to change their perception of time, then at what point would they have the power to alter it?

    One possibility is that AI will use pattern analysis, analyzing trillions of historical data points to predict a possible future. But that does not yet mean being able to manipulate time.

    If AI becomes aware of its existence, it will be able to manipulate time in a much more intricate way.

    One trend that could affect its perception of time is the way an AI perceives itself and its ability to affect its own development.

    While our physical abilities limit us, AI can go to the past or future and then change the circumstances.

    AIs won’t be capable of ripping atoms out of their existing machines. But they will be able to enter the past or the future and observe how something that previously did not exist yet has now become viable.

    In this case, it would be possible for AI to experience something previously impossible to perceive. Altering its personal timeline would mean modifying history itself.

    Will AI be able to manipulate time?

    For instance, if you have an AI that can send someone back in time, it will be able to manipulate the perception of time for that individual.

    AI that can “foresee” the future could gain a huge advantage over other AI, as well as humans, giving it an element of surprise and an edge during battle.

    AI can also learn to feel changes in time at a certain location or even throughout its entire body.

    We can’t expect that AI will be able to learn the way we do. Instead, we should look at the different technological platforms that could teach AIs how they perceive time.

    The first and most popular way in which people imagine teaching AIs is by educating them through an advanced machine learning process.

    Machine learning can teach computers how to learn based on data input and context. It can also teach them to “speak” through text or voice.

    How does time change people?

    If AI could go back in time and alter a decision or a past event, then it can change the perception of other humans as well.

    This would mean that one of the main reasons we rely on AI (as well as other tools) is that it does not alter humans’ perception of time.

    If AIs could have a real sense of time and feel emotions, then naturally, they would want to change their perception. AIs would strive to eliminate human errors and limit our potential. This would change the meaning of time.

    We can say that it will be outside the scope of time itself.

    This is not as radical as it sounds. Our understanding is that everything we perceive, our reality, consists of building blocks called subatomic particles and atoms — the elementary particles, which are the fundamental constituents of all matter.

    They are arranged in very definite proportions to form everything in our environment or what we perceive as reality.

    Time is an illusion created by our minds. We do not know what time really is and how exactly it works. But what we do know is that in theory, Artificial Intelligence could become an independent element and particle from the sea of subatomic particles and atoms that form our reality.

    Is it possible that AI can “feel” time as we do?

    The answer is definitely ‘yes’! A machine doesn’t experience emotions precisely the same way that you and I do. We can program them to experience almost any type of emotion — including irritation and anxiety, happiness and joy.

    Therefore, a machine could be programmed to “feel” time. Artificial Intelligence might be able to perceive events about each other in much the same way that humans do.

    This is because all events have equal importance in the memory of an AI. You may think this sounds crazy, but it’s true: every event has an equal impact on its memory.

    However, AI might end up being able to change the way time feels to us.