Key Points:

  • Unregulated open-source software is going to have a significant impact on current political, economic, and social systems.
  • The European Union has drafted new rules aimed at regulating AI, which could eventually prevent developers from releasing open-source models on their own.
  • The proposed EU AI Act requires open-source developers to make sure their AI software is accurate, secure, and transparent about risk and data use.
  • Every individual, company, organization, and nation needs a solid understanding of exactly how regulations will act on open-source AI software.

In a civilized human world, no public activity takes place entirely unseen. Every activity carried out in public must operate within legal and regulatory frameworks, and Artificial Intelligence is no exception.

So it is now considered necessary to bring AI under regulation in a way that encourages its further development while managing the risks associated with open-source AI software technology: publicly available datasets, prebuilt algorithms, and ready-to-use interfaces offered for commercial and non-commercial use under various open-source licenses.

Why does open-source AI software need regulation?

Open-source software is developed in a decentralized and collaborative way, relying on peer review and community production. Because the code is publicly accessible, anyone can view, modify, and distribute it as they see fit.

Each aspect of human behavior can run appropriately only under certain norms and regulations. The use of cars, for example, is regulated by law whether they are driven privately or commercially. Similarly, AI technology, which is reshaping the human world, cannot be managed in an unsupervised way.

This is not the first time there have been calls for open-source regulation. The software vulnerability known as Log4Shell, discovered in late 2021, focused the minds of enterprises and governments on how best to manage open-source software, and it was followed by calls for government intervention.

In May 2021, the US had already called for a Software Bill of Materials (SBOM) through the Executive Order on Improving the Nation's Cybersecurity. The Bill of Materials approach sets out an inventory of the components, including open-source code, incorporated into a piece of software.
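
To make the idea concrete, here is a minimal sketch of what such a bill of materials might look like, loosely following the CycloneDX JSON layout; the component names, versions, and licenses below are invented purely for illustration.

```python
import json

# A minimal, hypothetical SBOM in the spirit of the CycloneDX JSON format.
# Every component name, version, and license below is invented for illustration.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.4",
    "version": 1,
    "components": [
        {
            "type": "library",
            "name": "log4j-core",  # the library at the heart of Log4Shell
            "version": "2.14.1",
            "licenses": [{"license": {"id": "Apache-2.0"}}],
        },
        {
            "type": "library",
            "name": "numpy",
            "version": "1.23.1",
            "licenses": [{"license": {"id": "BSD-3-Clause"}}],
        },
    ],
}

print(json.dumps(sbom, indent=2))
```

In practice such inventories are generated by tooling rather than written by hand; the point is simply that every bundled component, open source included, is declared.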

It’s obvious that, like any powerful force, AI requires rules and regulations for its development and use, not least to prevent unnecessary harm through open-source vulnerabilities, that is, security flaws in open-source software. Weak or vulnerable open-source code allows attackers to carry out malicious or unauthorized actions, sometimes leading to cyberattacks such as denial of service (DoS).
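
A concrete example from the AI world: many open-source models are distributed as Python pickle files, and unpickling data from an untrusted source can execute arbitrary code. The sketch below, with placeholder names and a placeholder checksum, illustrates the risk and one mitigating habit.

```python
import hashlib
import pickle

# DANGEROUS: pickle.load() can execute arbitrary code embedded in the file.
# A malicious "model" could run any payload the moment it is deserialized.
def load_model_unsafely(path):
    with open(path, "rb") as f:
        return pickle.load(f)  # never do this with files from the internet

# One mitigating habit: verify the file against a checksum published by a
# source you trust before deserializing it. The digest below is a placeholder.
TRUSTED_SHA256 = "0" * 64  # obtain the real digest out of band

def load_model_verified(path):
    with open(path, "rb") as f:
        data = f.read()
    if hashlib.sha256(data).hexdigest() != TRUSTED_SHA256:
        raise ValueError("model file does not match the trusted checksum")
    return pickle.loads(data)
```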

Besides the security risks, using open-source software can also bring intellectual property issues, a lack of warranty, operational shortcomings, and poor developer practices.

Considering these same risks, the European Union has now moved to introduce new rules aimed at regulating AI, which could eventually prevent developers from releasing open-source models on their own.

How does the EU draft regulate open-source AI?

According to Brookings, the proposed EU AI Act, which has not yet been passed into law, requires open-source developers to make sure their AI software is accurate and secure, and to be transparent about risk and data use in technical documentation.
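
That documentation duty is easier to picture with an example. Below is a hypothetical sketch of the kind of machine-readable model card a developer might publish to describe accuracy, data use, and known risks; the field names are illustrative, not taken from the Act's text.

```python
import json
from dataclasses import asdict, dataclass, field

# Hypothetical shape for the technical documentation the draft Act expects;
# the field names are our own invention, not legal terminology.
@dataclass
class ModelCard:
    name: str
    intended_use: str
    training_data: str          # provenance of the data used
    accuracy: dict              # headline metrics on a stated test set
    known_risks: list = field(default_factory=list)

card = ModelCard(
    name="example-sentiment-classifier",
    intended_use="research only; not for decisions about individuals",
    training_data="publicly available movie reviews (hypothetical)",
    accuracy={"test_set": "held-out 10% split", "f1": 0.87},
    known_risks=["degrades on non-English text", "possible demographic bias"],
)

print(json.dumps(asdict(card), indent=2))
```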

The analysis argues that if a private company deployed the public model, or used it in a product, and found itself in difficulties because of unexpected or uncontrollable outcomes from the model, it would likely try to blame the open-source developers and sue them.

Unfortunately, that liability risk could make the open-source community reconsider sharing its code, leaving the development of AI largely in the hands of private companies.

Oren Etzioni, the outgoing CEO of the Allen Institute for AI, reckons open-source developers should not be subject to the same stringent rules as software engineers at private companies.

“Open-source developers should not be subject to the same burden as those developing commercial software. It should always be the case that free software can be provided “as is” – consider the case of a single student developing an AI capability; they cannot afford to comply with EU regulations and may be forced not to distribute their software, thereby having a chilling effect on academic progress and on the reproducibility of scientific results,” he told TechCrunch.

Most recent AI-related events

The results of the annual MLPerf inference test, which benchmarks the performance of AI chips from different vendors across numerous tasks and configurations, were published this week.

Although an increasing number of vendors are taking part in the MLPerf challenge, regulatory concerns appear to be holding some of them back from the test.

“We only managed to get one vendor, Calxeda, to agree to participate. The rest either declined, rejected the challenge altogether, or thought it might raise privacy concerns,” said Chris Williams, a research associate at Berkeley’s Computer Science Department.

The MLPerf challenge tests AI chips on various tasks at scale using fully instrumented Mark 1.0 hardware and software. The chips run different models, with no knowledge of whether a given result came from an open-source model or a proprietary one. Vendors who do not agree to participate in the test, however, cannot display their results publicly on ShopTalk forums.
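
MLPerf's actual harness is far more elaborate, but the core idea of an inference benchmark, running a fixed workload repeatedly and reporting latency and throughput, can be sketched in a few lines of Python. The model function here is a pure stand-in, not an MLPerf workload.

```python
import statistics
import time

def fake_model(x):
    # Stand-in for a real inference call; MLPerf uses standardized models
    # and tightly specified scenarios rather than toy arithmetic.
    return sum(i * i for i in range(10_000))

def benchmark(model, runs=100):
    latencies = []
    for _ in range(runs):
        start = time.perf_counter()
        model(None)
        latencies.append(time.perf_counter() - start)
    return {
        "p50_ms": statistics.median(latencies) * 1e3,
        "p99_ms": sorted(latencies)[int(0.99 * runs) - 1] * 1e3,
        "throughput_qps": runs / sum(latencies),
    }

print(benchmark(fake_model))
```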

Many netizens have found joy and despair in experimenting with text-to-image systems that generate images from typed prompts. There are all sorts of hacks to adjust a model’s outputs; one of them, known as a “negative prompt,” allows users to produce the opposite of the image described in the prompt.
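
For readers who want to experiment, the open-source diffusers library exposes negative prompts directly. The sketch below assumes the Stable Diffusion v1.5 weights can be downloaded and that a CUDA-capable GPU is available; the prompts and output file name are made up.

```python
# A minimal negative-prompt sketch using Hugging Face's diffusers library.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

image = pipe(
    prompt="a calm seaside village at dawn",
    negative_prompt="blurry, low quality, text, watermark",  # steer away from these
).images[0]
image.save("village.png")
```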

For instance, a famous Twitter thread by the digital artist Supercomposite demonstrates how strange text-to-image models may be beneath the surface.

According to Supercomposite, negative prompts frequently produce random images of AI-generated people. The bizarre behavior is simply another illustration of the strange properties these models may possess, which researchers are only now starting to explore.


In another event, the former Google engineer Blake Lemoine claimed last week that he thought Google’s LaMDA chatbot was conscious and might have a soul. Sundar Pichai, the CEO of Google, countered the claims by saying, “We are far from it, and we may never get there,” but it is undeniable that AI development has progressed further than what we can currently see on the surface.

Pichai himself also admitted, “… I think it is the best assistant out there for conversational AI – you still see how broken it is in certain cases”.

Why is regulating open-source AI software right?

Only nature can function without regulatory acts; humans acting in public cannot. With the threat posed by the growing open-source trend already visible, not only the EU but every nation needs to systematize the design, production, distribution, use, and development of all kinds of software.

Regulating open-source AI software is right also because, left unregulated, it is going to have a significant impact on current political, economic, and social systems. With the growing use of open-source AI software, unintended effects become all but inevitable: massive cyberattacks, breaches of individual and public data, and the misuse of software for malicious purposes such as working with or supporting terrorism.

This is because cybercriminals and other bad actors are always looking for flaws in a product to exploit. If they succeed in cracking open-source AI models, for example those protecting a company’s sensitive data, the consequences could be severe, from loss of reputation and property to questions about social and professional security and, in the long run, national security as well.

Every individual, company, organization, and nation therefore needs a solid understanding of exactly why regulation of open-source AI software is a need of this time and an essence of the coming future, most of which is likely to be dominated and shaped by technological advances.
