
AI is earning as much trust as human editors to flag hate speech

The year 2022 has already witnessed rapid growth in the evolution and use of artificial intelligence (AI). Recent research has shown that people may trust AI as much as human editors to flag hate speech and harmful content.

According to researchers at Penn State, when users think about the positive attributes of machines, such as their accuracy and objectivity, they show more faith in AI. However, when users are reminded of machines’ inability to make subjective decisions, their trust is lower.

S. Shyam Sundar, James P. Jimirro Professor of Media Effects in the Donald P. Bellisario College of Communications and co-director of the Media Effects Research Laboratory, states that the findings may help developers design better AI-powered content curation systems that can handle the large amounts of information currently being generated, while avoiding the perception that the material has been censored or inaccurately classified.

However, the researchers warn that both human and AI editors have advantages and disadvantages. Humans tend to assess more accurately whether content is harmful, such as when it is racist or promotes self-harm. People, however, cannot process the large amounts of content that are now being generated and shared online.

On the other hand, while AI editors can swiftly analyze content, people often distrust these algorithms to make accurate recommendations and fear that the information could be censored.

Bringing people and AI together in the moderation process, the researchers say, may be one way to build a trusted moderation system. Transparency, or signaling to users that a machine is involved in moderation, is one approach to improving trust in AI. However, allowing users to offer suggestions to the AI, which the researchers call “interactive transparency,” seems to boost user trust even more.


To study transparency and interactive transparency, among other variables, the researchers recruited 676 participants to interact with a content classification system.

The research showed, among other things, that whether users trust an AI content moderator depends on whether it invokes positive attributes of machines, such as their accuracy and objectivity, or negative attributes, such as their inability to make subjective judgments about subtle nuances in human language.

Giving users the ability to participate in deciding whether or not internet material is harmful may improve user trust. According to the researchers, study participants who added their own terms to an AI-selected list of phrases used to categorize posts trusted the AI editor just as much as they trusted a human editor.
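To make that mechanism concrete, here is a minimal Python sketch of how such an interactive, phrase-based flagger might work. The phrase list, function names, and simple substring matching are illustrative assumptions for this article, not the classifier actually used in the study.

```python
# Hypothetical sketch of "transparency" and "interactive transparency"
# in a phrase-based content flagger. Placeholder terms only.

AI_SELECTED_PHRASES = {"badword1", "badword2"}  # assumed seed list, not from the study


def show_phrase_list(phrases: set[str]) -> None:
    """Transparency: tell users a machine moderates, and with which phrases."""
    print("Posts are screened by an AI using these phrases:", sorted(phrases))


def add_user_terms(phrases: set[str], suggestions: list[str]) -> set[str]:
    """Interactive transparency: fold user-suggested terms into the AI's list."""
    cleaned = {s.strip().lower() for s in suggestions if s.strip()}
    return phrases | cleaned


def flag_post(post: str, phrases: set[str]) -> bool:
    """Flag a post as potentially harmful if it contains any listed phrase."""
    text = post.lower()
    return any(phrase in text for phrase in phrases)


# Usage: a user reviews the list, contributes a term, and a post is re-screened.
phrases = add_user_terms(AI_SELECTED_PHRASES, ["example harmful term"])
show_phrase_list(phrases)
print(flag_post("An example harmful term appears here.", phrases))  # True
```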

According to Sundar, relieving humans of the job of content review goes beyond simply providing workers with a break from a tiresome task. Relying on human editors, he said, means exposing these workers to hours of violent and hateful content.

“There’s an ethical need for automated content moderation,” said Sundar, who is also director of Penn State’s Center for Socially Responsible Artificial Intelligence. “There’s a need to protect human content moderators—who are performing a social benefit when they do this—from constant exposure to harmful content day in and day out.”

Future work could look at how to help people not just trust AI but also understand it. Interactive transparency may be a key part of understanding AI as well, the researchers added.

Note: this material has been edited for length and content. For further information, please visit the cited source.

