Open-Source AI Software Regulation Act

Do we now need regulations on Open-Source AI?

Key Points:

  • Unregulated open-source software is going to have a significant impact on current political, economic, and social systems.
  • The European Union has drafted new rules aimed at regulating AI, which could eventually prevent developers from releasing open-source models on their own.
  • The proposed EU AI Act requires open-source developers to make sure their AI software is accurate, secure, and transparent about risk and data use.
  • Every individual, company, organization, and nation needs a solid understanding of why regulation of open-source AI software is needed.

In a civilized society, public activities do not take place unseen. Every activity conducted in public needs to operate within legal and regulatory frameworks, and artificial intelligence is no exception.

So it is now considered necessary to bring AI under regulation, which may encourage the further development of AI while managing the risks associated with open-source AI software technology, such as publicly available datasets, prebuilt algorithms, and ready-to-use interfaces offered for commercial and non-commercial use under various open-source licenses.

Why does open-source AI software need regulation?

Open-source software is developed in a decentralized and collaborative way, relying on peer review and community production. Because the code is publicly accessible, anyone can see, modify, and distribute it as they see fit.

Every aspect of human behavior can function appropriately only under certain norms and regulations. The use of cars, for example, is regulated by law whether they are driven privately or commercially. Similarly, AI technology, which is shaping the human world, cannot be left unsupervised.

This is not the first time there have been calls for open-source regulation. The software vulnerability known as Log4Shell, discovered in late 2021, focused the minds of enterprises and governments on how best to manage open-source software, and it was followed by calls for government intervention.

In May 2021, the US had already called out the need for a software bill of materials through its executive order on improving software security. The bill-of-materials approach sets out the code components incorporated when open-source software is used.
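As a rough illustration, a software bill of materials is essentially a structured inventory of every component a piece of software pulls in. The sketch below is a minimal Python example; the project, component names, and fields are hypothetical and only loosely modeled on formats such as CycloneDX, not on any mandated schema.

```python
import json

# A minimal, hypothetical software-bill-of-materials record for an AI project.
# Field names are illustrative; a real SBOM would follow a standardized schema
# such as CycloneDX or SPDX.
sbom = {
    "project": "example-ai-service",          # hypothetical project name
    "version": "1.0.0",
    "components": [
        {
            "name": "numpy",                   # open-source dependency
            "version": "1.26.4",
            "license": "BSD-3-Clause",
            "origin": "https://pypi.org/project/numpy/",
        },
        {
            "name": "example-vision-model",    # hypothetical pretrained model
            "version": "2024.1",
            "license": "Apache-2.0",
            "origin": "internal model registry",
        },
    ],
}

# Serialize the inventory so it can be published alongside a release.
print(json.dumps(sbom, indent=2))
```

The point of such an inventory is simply that anyone deploying the software can see exactly which open-source pieces it contains and under which licenses they were released.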

It is obvious that, like any powerful force, AI requires rules and regulations for its development and use to prevent unnecessary harm through open-source vulnerabilities, that is, security risks in open-source software. Weak or vulnerable code in open-source software allows attackers to conduct malicious attacks or perform unauthorized actions, sometimes leading to cyberattacks such as denial of service (DoS).

Besides security risks, using open-source software can also bring intellectual property issues, lack of warranty, operational shortcomings, and poor developer practices.

Perhaps with these risks in mind, the European Union has now drafted new rules aimed at regulating AI, which could eventually prevent developers from releasing open-source models on their own.

The EU's draft rules to regulate open-source AI

According to Brookings, the proposed EU AI Act, which has not yet been passed into law, would require open-source developers to ensure their AI software is accurate, secure, and transparent about risk and data use in its technical documentation.

The analysis argues that if a private company deployed a public model or used it in a product and then ran into difficulties because of unexpected or uncontrollable outcomes, it would likely try to blame, and even sue, the open-source developers.

Unfortunately, this would concentrate AI development in the hands of private companies and make the open-source community reconsider sharing its code.

Oren Etzioni, the outgoing CEO of the Allen Institute for AI, reckons open-source developers should not be subject to the same stringent rules as software engineers at private companies.

“Open-source developers should not be subject to the same burden as those developing commercial software. It should always be the case that free software can be provided ‘as is’ — consider the case of a single student developing an AI capability; they cannot afford to comply with EU regulations and may be forced not to distribute their software, thereby having a chilling effect on academic progress and on reproducibility of scientific results,” he told TechCrunch.

Most recent AI-related events

The results of the annual MLPerf inference test, which benchmarks the performance of AI chips from different vendors across numerous tasks in various configurations, were published this week.

Although an increasing number of vendors are taking part in the MLPerf challenge, regulatory concerns appear to be holding back wider participation in the test.

“We only managed to get one vendor, Calxeda, to agree to participate. The rest either declined, rejected the challenge altogether, or thought it might raise privacy concerns,” said Chris Williams, a research associate at Berkeley’s Computer Science Department.

The MLPerf challenge tests AI chips on various tasks at scale using fully instrumented Mark 1.0 hardware and software. The chips run different models and have no knowledge of whether their results come from an open-source model or a proprietary one. Vendors who do not agree to participate in the test, however, cannot display their results publicly on forums such as ShopTalk.

Many netizens have found both joy and despair in experimenting with text-to-image systems, which generate images from typed text prompts. There are all sorts of hacks for adjusting a model's outputs; one of them, known as a “negative prompt”, lets users tell the model what the image should not contain, roughly the opposite of what a normal prompt does.
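For readers who want to try this, here is a minimal sketch of passing a negative prompt to a text-to-image pipeline. It assumes the Hugging Face diffusers library and the publicly released Stable Diffusion v1.5 weights running on a GPU; the model identifier, prompts, and output filename are only illustrative.

```python
import torch
from diffusers import StableDiffusionPipeline

# Load a publicly available text-to-image model (illustrative choice).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
).to("cuda")

# The negative prompt lists content the model should steer away from.
image = pipe(
    prompt="a watercolor painting of a quiet mountain village",
    negative_prompt="blurry, low quality, text, watermark",
    num_inference_steps=30,
).images[0]

image.save("village.png")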

For instance, a widely shared Twitter thread by a digital artist demonstrates how strange text-to-image models can be beneath the surface.

According to Supercomposite, negative prompts frequently produce random images of AI-generated people. This strange behavior is simply another illustration of the unusual properties these models may possess, which researchers are only now starting to explore.


In another event, former Google engineer Blake Lemoine claimed last week that he thought Google's LaMDA chatbot was conscious and might have a soul. Sundar Pichai, the CEO of Google, countered the claims, saying, “We are far from it, and we may never get there,” but it is undeniable that AI development has progressed further than what we can currently see on the surface.

Pichai himself immediately admitted, “… I think it is the best assistant out there for conversational AI — you still see how broken it is in certain cases”.

Why is regulating open-source AI software right?

Only nature can function without regulatory acts; humans acting in public cannot. With the growing spread of open-source software already visible as a potential threat, not only the EU but every nation needs to systematize the design, production, distribution, use, and development of all kinds of software.

Regulating open-source AI software is also right because unregulated open-source software is going to have a significant impact on current political, economic, and social systems. With the growing use of open-source AI software, the risks of unintended effects, such as massive cyberattacks, breaches of individual and public data, and misuse of software for malicious purposes such as supporting terrorism, may become inevitable.

This is because cybercriminals and other bad actors may be looking for flaws in a product to exploit. If they succeed in cracking open-source AI models, for example one protecting a company's sensitive data, the consequences could be severe, ranging from loss of reputation and property to questions about social, professional, and, in the long run, national security.

Every individual, company, organization, and nation therefore needs a solid understanding of why regulating open-source AI software is a need of the present moment and essential to a future that is likely to be dominated and controlled by technological advances.
