
Moderating content doesn’t have to be so traumatic


Lasting trauma doesn’t have to be part of the job description for workers who watch graphic or violent videos for a living. Content moderators, journalists, and activists often have to wade through horrific images, videos, and text to do their work, a task that can imperil their mental health and lead to post-traumatic stress disorder. Luckily, experts say there are ways to minimize the harms of so-called vicarious or secondary trauma.

Post-traumatic stress disorder (PTSD) is caused by experiencing or witnessing a terrifying incident, and its symptoms include acute anxiety, flashbacks, and intrusive thoughts. Though most people think of PTSD in the context of war or of being physically caught up in a crisis, in recent years there has been growing acceptance that repeatedly viewing traumatic events can cause the condition as well. Pam Ramsden, a psychologist at the University of Bradford in the United Kingdom, presented research at a 2015 British Psychological Society conference that found that a quarter of people who watched distressing images of violent events developed symptoms of PTSD.

Meanwhile, research from Roxane Cohen Silver, a psychologist at the University of California, Irvine, showed that more than six hours of exposure to coverage of the Boston Marathon bombings (through any form of media) in the four weeks after the attack was linked to more stress than having actually been there. Even the latest version of the Diagnostic and Statistical Manual, the bible of psychiatric diagnosis in the US, acknowledges that PTSD can occur when viewing graphic imagery is a necessary part of a job. For example, Facebook content moderators at a company in Arizona are experiencing severe mental health issues related to constantly looking at graphic images, as an investigation from The Verge revealed this week.

This doesn’t mean that everyone who sees this imagery will be traumatized, and some people do seek out traumatic content without being affected. “Not even one hundred percent of people who go to war get PTSD, so there are differential risk factors,” says Cohen Silver. “Certainly there are ways of mitigating the stress and taking breaks and not looking at something for eight hours a day without a break.”

Though there’s little research in this area, the Dart Center, which works to support journalists who cover violence, has created two tipsheets on best practices for working with traumatic imagery. Though moderators can adopt some of the tips themselves — such as making the image window smaller, taking notes to avoid repeatedly replaying footage, and keeping “distraction files” of cute puppies to look at — many can only be implemented by managers.

“There have to be lots of opportunities to have breaks and mix up tasks, and offices where people can focus on something beautiful,” says Elana Newman, who is research director at the Dart Center and a psychologist at the University of Tulsa. “They need to routinely screen their staff for mental-health problems and provide those services.” Newman and other experts agree that, ultimately, the onus must be on the company itself to make these changes to protect workers.

Sam Dubberley is director of Amnesty International’s Digital Verification Corps, a group of volunteers who verify whether digital images are real and, as a result, frequently look at traumatic imagery. (Dubberley has also worked with the Dart Center.) “I’m a very strong believer that change has to be top-down,” he says. Dubberley has conducted his own research into what he calls “the drip drip drip and constant misery” of watching graphic images online and, through interviews with people on these “digital frontlines,” created a report with further suggestions for people at every level of an organization.

A healthier environment, according to his report, could mean moderators using mindfulness tools, regularly checking in on their own pace of work, and learning to focus on images that make them feel safe. Perhaps more importantly, it means trauma-awareness training for everyone: all hires need to be briefed on the fact that disturbing graphics will be part of the job, and everyone should keep in mind that traumatic triggers differ from person to person. It also means developing a culture in which mental health is as important as physical health, with managers talking both one-on-one and in groups with staff about how they are coping.

Though there are growing efforts to prevent secondary trauma in journalism and human-rights organizations, the picture is more complicated when it comes to content moderation. Awareness of these harms has greatly increased in the past two years, and social media companies definitely know about the problem, says Sarah T. Roberts, a professor of information studies at UCLA. For instance, YouTube CEO Susan Wojcicki said at SXSW last year that the company would limit moderators to working four hours a day.

At the same time, these companies respond to all forms of controversy — whether around “fake news” or controversial advertisements — by promising to hire more moderators and improve gatekeeping, which puts more people into these roles even though we still don’t have detailed research on how to protect them. The resources that the Dart Center and Dubberley have created are general guidelines pulled from interviews, not the result of rigorous, longitudinal studies on content moderation and secondary trauma. We don’t have those studies because conducting them requires access that these companies likely won’t allow.

“My immediate response to [the YouTube news] was ‘okay, this means you’re going to have to double your workforce,’ and we have no evidence that four hours a day is the magic number,” Roberts says. “How do we know that four hours a day is feasible, and 4.5 hours takes you into a psychological crisis?”

Finally, the business model of content moderation makes it hard to implement even the common-sense suggestions. Because the commercial content moderation ecosystem is global and often contract-based, firms are frequently worried about losing a contract to a company that can prove it’s more efficient. “There’s these countervailing forces,” Roberts explains. “On the one hand, we want worker wellness and resilience for sure, and on the other hand is a bottom line metric around productivity and when a person is on a ‘wellness break’ they’re not out on the floor doing these activities.”

Ultimately, change needs to happen at multiple levels — not just distraction files or relying on artificial intelligence, but changing the culture of an organization and taking a closer look at business models. Facebook has pledged to improve oversight of its contractor firms, but protecting content moderators will take an industry-wide effort.
