This new way of training AI can help reduce online harassment

For about six months, Nina Nørgaard met for an hour each week with seven people to talk about sexist and violent language used to target women on social media. Nørgaard, a PhD candidate at the IT University of Copenhagen, and her discussion group were taking part in a rare effort to better identify misogyny online. Researchers had paid the seven to examine thousands of posts from Facebook, Reddit, and Twitter and decide whether they demonstrated sexism, stereotypes, or harassment. Once a week, the researchers brought the group together, with Nørgaard as a mediator, to discuss the tough calls on which they disagreed.

Misogyny is a scourge that shapes how women are represented online. A 2020 Plan International study, one of the largest surveys of its kind, found that more than half of women in 22 countries had been harassed or abused online. One in five women who encountered abuse said they changed their behavior, reducing or stopping their use of the internet, as a result.

Social media companies use artificial intelligence to identify and remove posts that demean, harass, or threaten women, but it's a difficult problem. Even among researchers, there is no standard for identifying sexist or misogynist posts. One recent paper proposed four categories of troublesome content, while another identified 23. Most research is in English, leaving people working in other languages and cultures with even less guidance for difficult and often subjective decisions.

So instead of relying on part-time contract workers, who are often paid per post, the Danish researchers tried hiring Nørgaard and seven people full time to review and label posts. They deliberately selected people of different ages and nationalities, with varied political views, to reduce the chance of bias from a single worldview. The labelers included a software designer, a climate change activist, an actress, and a health care worker. Nørgaard's job was to bring them to a consensus.

"The great thing is that they disagree. We don't want tunnel vision. We don't want everyone to think the same way," says Nørgaard. Her goal was "to get them to discuss things among themselves or within the group," she says.

Nørgaard saw her job as helping the labelers "find their own answers." Over time, she got to know each of the seven as individuals, learning who, for example, spoke more than the others. She tried to make sure no one person dominated the conversation, because it was meant to be a discussion, not a debate.

The toughest calls involved posts with irony, jokes, or sarcasm, which became a big topic of conversation. Over time, though, "the meetings got shorter and people debated less, which I thought was a good thing," says Nørgaard.
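How much a pool of labelers agrees can be quantified. As a purely illustrative aside (the study's own agreement metrics aren't given here), consensus among a fixed group of raters is often summarized with Fleiss' kappa. Below is a minimal Python sketch, assuming NumPy is available; the `fleiss_kappa` function and all of the counts are invented for illustration:

```python
# Purely illustrative: Fleiss' kappa measures chance-corrected agreement
# among a fixed pool of raters. The counts below are invented; the study's
# real annotation data is not reproduced here.
import numpy as np

def fleiss_kappa(counts: np.ndarray) -> float:
    """counts[i, j] = number of raters who put item i into category j."""
    n = counts.sum(axis=1)[0]                       # raters per item (constant)
    p_j = counts.sum(axis=0) / counts.sum()         # overall category shares
    P_i = (np.square(counts).sum(axis=1) - n) / (n * (n - 1))  # per-item agreement
    P_bar, P_e = P_i.mean(), np.square(p_j).sum()   # observed vs. chance agreement
    return (P_bar - P_e) / (1 - P_e)

# Five hypothetical posts rated by seven labelers into two categories:
# column 0 = "not abusive", column 1 = "abusive".
table = np.array([[7, 0], [5, 2], [1, 6], [4, 3], [0, 7]])
print(f"Fleiss' kappa: {fleiss_kappa(table):.2f}")  # ~0.47 for these counts
```

A kappa near 1 means near-perfect agreement beyond chance; watching it rise across weekly sessions would be one way to see a group converging, as Nørgaard describes.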

The researchers behind the project call it a success. They say the conversations led to more accurately labeled data for training an AI algorithm. An AI tuned with that data, they say, can recognize misogyny on popular social media platforms 85% of the time. A year earlier, state-of-the-art misogyny detection algorithms were accurate only about 75% of the time. In total, the team reviewed nearly 30,000 posts, 7,500 of which were deemed abusive.
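For readers curious what "training AI on labeled data" means in practice, the sketch below is an assumption-laden illustration, not the study's actual model or data: it trains and scores a simple binary classifier on placeholder posts using scikit-learn, with every post and label made up:

```python
# Purely illustrative: a simple binary "abusive / not abusive" text
# classifier. The posts and labels are placeholders, and the study's
# actual model and features are not described here.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical stand-ins for the ~30,000 consensus-labeled posts.
posts = [
    "you did a great job on this project",
    "women like you should not be allowed to post here",
    "thanks for sharing, really interesting thread",
    "go back to the kitchen where you belong",
]
labels = [0, 1, 0, 1]  # 1 = abusive, 0 = not abusive

X_train, X_test, y_train, y_test = train_test_split(
    posts, labels, test_size=0.25, random_state=0
)

# TF-IDF features feeding a logistic regression: a common text baseline.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

# The researchers report roughly 85% accuracy with their labeled data;
# on this toy split the number is meaningless, but the workflow is the same.
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

In practice a team would use far more data and likely a stronger model; the point is only the shape of the pipeline: consensus-labeled text goes in, and a measurable accuracy comes out.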

The posts were written in Danish, but the researchers say their approach can be applied to any language. "If you're going to annotate for misogyny, I think you need to follow an approach that has at least most of the elements of ours; otherwise you risk poor-quality data, and that spoils everything," says Leon Derczynski, co-author of the study and an associate professor at the IT University of Copenhagen.

The findings could be useful beyond social media. Companies are beginning to use AI to screen public-facing text, such as job listings and press releases, for sexism. And if women exclude themselves from online conversations to avoid harassment, that stifles democratic processes.

"If you're going to turn a blind eye to threats and aggression against half the population, you probably won't have the democratic online space you could have," Derczynski says.
