The human side of online content moderation

Elizabeth Meyer, ESG Investment Analyst

Online platforms are not only changing the way the world communicates, but also flooding the internet with new content. In 2017, 3.3 million posts were made on Facebook and 500 hours of video were uploaded to YouTube – every 60 seconds.[1] Compared with 2014, these figures have grown by 25% and 40% respectively.[2]

The responsibility that online communications providers take for the user-generated content that appears on their sites is facing increasing scrutiny. Facebook, YouTube and Twitter in particular have received significant media coverage for hate speech and extreme and/or violent content. Some commentators have even gone as far as to accuse the companies of complicity in human rights abuses. In response, firms have established community standards that they expect their users to uphold. They exercise the right to remove content that does not meet their standards. This includes restricting users who repeatedly post inappropriate content.


However, recent events, including the tragic shooting in Christchurch, New Zealand, that was livestreamed on Facebook, have escalated calls for more meaningful action and regulation. In May 2019, New Zealand’s prime minister and the president of France hosted a global summit on the issue, launching the Christchurch Call. This sets out key principles for governments and companies to tackle extreme and violent content online. Seventeen countries, the European Commission and eight companies have signed up so far. They have committed to working together on effective interventions and law enforcement, including rigorous enforcement of community standards and greater transparency.


With content moderation set to increase, more resources will be required. Ensuring user compliance with companies’ standards of appropriate content relies on both technology and humans. Artificial intelligence and algorithms can help to identify images, words or phrases, but are limited in their ability to interpret context. As such, human intervention is still required to review posts and decide whether content is appropriate. With demand for content moderation growing quickly, costs are likely to increase for social media platforms. And the market opportunity for those providing content moderation services and technology is significant.

What is a content moderator?

Companies review content on their sites both reactively and proactively: other users can flag content for review, and the company’s own algorithms can identify questionable content. Content moderators are people specifically employed to review this content and decide whether to delete it from the site or allow it to remain. In practice, much of the flagged content may be removed automatically. However, in many instances, context plays a significant role in determining the appropriate action.
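For illustration only, the sketch below shows how such a triage flow might be structured in code. It is a minimal, hypothetical example: the field names, thresholds and function are invented for this note and do not describe any platform’s actual systems, which rely on far more sophisticated machine-learning models and review tooling.

```python
# Illustrative sketch only: a simplified content-triage flow combining a
# "proactive" model score with "reactive" user flags. All names and
# thresholds are hypothetical, not any platform's actual implementation.
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    text: str
    user_flags: int       # reactive signal: reports from other users
    model_score: float    # proactive signal: 0.0 (benign) to 1.0 (clearly violating)

def triage(post: Post) -> str:
    """Decide whether to auto-remove, queue for a human moderator, or allow."""
    if post.model_score >= 0.95:
        return "auto-remove"        # unambiguous violations handled automatically
    if post.model_score >= 0.5 or post.user_flags >= 3:
        return "human-review"       # context matters, so a person decides
    return "allow"

queue = [
    Post("a1", "example post", user_flags=0, model_score=0.10),
    Post("b2", "borderline post", user_flags=4, model_score=0.40),
    Post("c3", "clear violation", user_flags=1, model_score=0.99),
]
for post in queue:
    print(post.post_id, triage(post))
```

The middle tier is the point of the example: content that is neither clearly acceptable nor clearly violating ends up with a human reviewer, which is where the resourcing and welfare questions discussed in this note arise.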


The content moderation industry is expected to grow significantly as user-generated content continues to increase. In 2017, Facebook had 7,500 content moderators and committed that year to adding 3,000 more.[3] By the end of 2018, this had grown to 30,000 safety and security staff, a mix of full-time employees and contractors.[4] However, there is little disclosure on how content moderation processes work in practice.

What is ‘inappropriate’ content?

There are no clear-cut, industry-recognised standards for what constitutes inappropriate online content. The US Communications Decency Act specifically states that online platforms may remove “material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected” without facing legal liability.


Which content actually meets these standards is open to interpretation. Companies are taking their own views on what is and is not appropriate, without always being clear about how they do so. For example, until March 2018, Facebook refused to publish its community standards, claiming that doing so would allow users to find ways around them. The now-published community standards explain how Facebook defines banned content such as “credible violence”, “bullying”, “hate speech” and “dangerous individuals and organisations”, among others. However, in some instances, nuanced interpretation is still required.

Support for content moderators

Moderating online content can be extremely unpleasant – and potentially traumatic. However, most companies provide little detail on the psychological support given to staff to deal with what they are seeing and hearing. Former employees have brought lawsuits against Microsoft and Facebook, alleging that their work as content moderators caused post-traumatic stress disorder, and arguing that the companies should have done more to protect and support them.


No third-party monitoring of the content moderation industry is in place, and employees are typically barred from discussing their work by non-disclosure agreements. This makes guidance and best practice difficult to identify. However, advice on how to support content moderators may be drawn from other organisations that work with distressing content and trauma victims, such as the Internet Watch Foundation and the US National Center for Missing and Exploited Children. Support can include specific training and resilience programmes, with elements provided by an external professional trained in trauma intervention. Transparent and robust hiring practices that explain the impact and risks of the work are also essential.

The risks and opportunities for companies and investors

The market opportunity for content moderation is growing significantly. User-generated content is increasing exponentially and most social media companies have announced that they are significantly scaling up the number of content moderators used.


The increasing demand for content moderation brings substantial opportunities for companies that offer these services. For example, TaskUs, a private US company originally formed as a virtual assistant service for internet companies, is now one of the main providers of content moderation globally. The company has attracted significant investment from private market firms such as Blackstone Group. TaskUs is estimated to be worth $500 million, from a starting point of $20,000 invested 10 years ago.[5] According to CEO Bryce Maddock, TaskUs provides content moderation to “almost every major social network and it’s the fastest growing part of our business today”.[6]


However, media coverage of the disturbing nature of this work is increasing, raising questions about the support given to employees and the effectiveness of moderation strategies. Both of these criticisms are financially material for companies and investors. Businesses employing human content moderators should invest in appropriate recruitment, training and psychological support for these employees. The financial and reputational consequences of not doing so can be severe, including high staffing costs, low productivity and potential employee litigation.

Final thoughts…

Determining whether content is appropriate for public consumption is highly subjective and will continue to be debated in public arenas. However, companies facing the day-to-day realities of the exponential growth in user-generated content have been compelled to act now, creating their own standards and putting tens of thousands of content moderators to work globally.


Company disclosures on the scope and nature of content moderation practices are poor. Even so, the risks and opportunities for companies undertaking these activities are significant. With demand for content moderators increasing globally, developing well-qualified, resilient moderators has clear benefits, positioning companies to capitalise on a sizeable commercial opportunity.



[1] Source: https://www.smartinsights.com/internet-marketing-statistics/happens-online-60-seconds/

[2] Source: https://www.smartinsights.com/internet-marketing-statistics/happens-online-60-seconds/

[3] Source: https://www.wsj.com/articles/zuckerberg-says-facebook-will-add-3-000-people-to-review-content-after-violent-posts-1493822842

[4] Source: https://newsroom.fb.com/news/2018/07/hard-questions-content-reviewers/

[5] Source: https://techcrunch.com/2018/08/08/ai-training-and-social-network-content-moderation-services-bring-taskus-a-250-million-windfall/

[6] Source: https://techcrunch.com/2018/08/08/ai-training-and-social-network-content-moderation-services-bring-taskus-a-250-million-windfall/

RISK WARNING
The value of investments, and the income from them, can go down as well as up and you may get back less than the amount invested.

The views and conclusions expressed in this communication are for general interest only and should not be taken as investment advice or as an invitation to purchase or sell any specific security.

Any data contained herein which is attributed to a third party ("Third Party Data") is the property of (a) third party supplier(s) (the "Owner") and is licensed for use by Standard Life Aberdeen**. Third Party Data may not be copied or distributed. Third Party Data is provided "as is" and is not warranted to be accurate, complete or timely.

To the extent permitted by applicable law, none of the Owner, Standard Life Aberdeen** or any other third party (including any third party involved in providing and/or compiling Third Party Data) shall have any liability for Third Party Data or for any use made of Third Party Data. Past performance is no guarantee of future results. Neither the Owner nor any other third party sponsors, endorses or promotes the fund or product to which Third Party Data relates.

**Standard Life Aberdeen means the relevant member of Standard Life Aberdeen group, being Standard Life Aberdeen plc together with its subsidiaries, subsidiary undertakings and associated companies (whether direct or indirect) from time to time.
