The year 2022 has already seen rapid growth in the development and use of artificial intelligence (AI). Recent research suggests that people may trust AI as much as human editors to flag hate speech and harmful content.
According to researchers at Penn State, when users think about positive attributes of machines, such as their accuracy and objectivity, they show more faith in AI. If users are reminded of machines' inability to make subjective decisions, however, their trust is lower.
S. Shyam Sundar, James P. Jimirro Professor of Media Effects in the Donald P. Bellisario College of Communications and co-director of the Media Effects Research Laboratory, says the findings may help developers design better AI-powered content curation systems that can handle the large amounts of information currently being generated while avoiding the perception that the material has been censored or inaccurately classified.
However, the researchers warn that both human and AI editors have advantages and disadvantages. Humans tend to assess more accurately whether content is harmful, such as when it is racist or potentially encourages self-harm. People, however, cannot process the enormous amounts of content now being generated and shared online.
On the other hand, while AI editors can swiftly analyze content, people often doubt that these algorithms can make accurate recommendations and fear that information could be censored.
Bringing people and AI together in the moderation process, the researchers say, may be one way to build a trusted moderation system. Transparency, or signaling to users that a machine is involved in moderation, is one approach to improving trust in AI. However, allowing users to offer suggestions to the AI, which the researchers refer to as "interactive transparency," appears to boost user trust even more.
To study transparency and interactive transparency, among other variables, the researchers recruited 676 participants to interact with a content classification system.
The research showed, among other things, that whether an AI content moderator invokes positive attributes of machines, such as their accuracy and objectivity, or negative attributes, such as their inability to make subjective judgments about subtle nuances in human language, influences whether users trust it.
Giving users a say in deciding whether online material is harmful may also improve their trust. According to the researchers, study participants who added their own terms to an AI-selected list of phrases used to classify posts trusted the AI editor just as much as they trusted a human editor.
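To make the idea of "interactive transparency" concrete, the following is a minimal, hypothetical sketch of a keyword-based content flagger whose term list users can extend. The class and method names (InteractiveModerator, add_user_term, flag) and the example terms are illustrative assumptions, not the actual system used in the study.

```python
# Hypothetical sketch of "interactive transparency": a keyword-based
# content flagger whose term list users can extend.
# Illustrative only; not the classification system from the Penn State study.

class InteractiveModerator:
    def __init__(self, ai_terms):
        # Terms initially proposed by an AI classifier (assumed input).
        self.terms = {t.lower() for t in ai_terms}

    def add_user_term(self, term):
        """Let a user contribute a term, making the moderation interactive."""
        self.terms.add(term.lower())

    def flag(self, post):
        """Return matched terms; an empty set means the post is not flagged."""
        words = set(post.lower().split())
        return words & self.terms


# Example usage with made-up terms and a made-up post.
moderator = InteractiveModerator(ai_terms=["hateword", "slur"])
moderator.add_user_term("dogwhistle")                  # user-suggested addition
print(moderator.flag("that post used a dogwhistle"))   # {'dogwhistle'}
```

In the study's framing, letting users contribute to the term list is what distinguishes interactive transparency from merely telling users that a machine is involved.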
According to Sundar, relieving humans of the job of content review goes beyond simply giving workers a break from a tiresome task. Relying on human editors, he said, exposes those workers to hours of violent and hateful content.
“There’s an ethical need for automated content moderation,” said Sundar, who is also director of Penn State’s Center for Socially Responsible Artificial Intelligence. “There’s a need to protect human content moderators—who are performing a social benefit when they do this—from constant exposure to harmful content day in and day out.”
Future work could look at how to help people not just trust AI but also understand it. Interactive transparency may be a key part of understanding AI as well, the researchers added.
Note: this material has been edited for length and content. For further information, please visit the cited source.