Setting Up Content Moderation Teams

A content moderation team is a group of reviewers who are trained on the policies of a given platform or product. They are responsible for manually reviewing user-generated content against those policies, determining whether the content violates them, and taking action accordingly.

Although automated solutions for content moderation are evolving with the help of artificial intelligence (AI), some types of decisions will always need human review given the complex nature of content (see Automated Systems and Artificial Intelligence). For example, an AI model may be effective at automatically detecting nudity, but certain types of hate speech may be harder to action automatically. Having the right content moderation team is critical not only for ensuring high precision in enforcement, but also for ensuring that machine learning (ML) algorithms are trained on the latest data.

In this section, we discuss the commonly used capacity models for setting up a content moderation team and key considerations for choosing the appropriate capacity model.

Commonly Used Capacity and Workforce Models

External Support

Platforms may engage external vendors to manage long-term, scaled review processes with well-defined standard operating procedures. In this model, multiple vendor employees work off-site (i.e., not on a work site controlled by the platform), and the arrangement is easy to scale because industry experts manage day-to-day operations, facilities, and people management. It requires structured communication channels with outsourced partners to discuss performance (e.g., quality and other operational metrics) and process improvement insights, because the platform has no hand in identifying or hiring the vendors' employees who moderate content, and because their day-to-day work is overseen by the vendors themselves. However, in this model, platforms can still conduct frequent, need-based training for vendors to help them maintain or improve their performance.

Another model of external support is engaging in-house contractors or Managed Service Providers for temporary and evolving workflows, which is often used for the incubation of new processes before scaling up. This model provides medium control and visibility over the review process and requires some intervention in operations and facilities management. The contractor model also has a time limit (e.g., a 6-month or 12-month contract) and is usually meant for short-term workflows. Depending on the scale of the organization, the platform's employees may be part of the hiring process.

Accurately classifying worker types and establishing capacity models is critical when engaging external support, to avoid legal, financial, and/or reputational implications.

Article: Independent Contractor v. Employee: How Uber Shows the Risk of Misclassification

Article: Temp vs SOW: Why Contingent Workforce Classifications Are Essential

Employees

Regional or subject matter experts: Platforms often have a team of employees, either experts in a specific market or investigation specialists, to investigate high-risk and high-priority content, individuals, or escalations that may involve follow-ups with law enforcement. Some platforms have a more frequent need to address such content; however, the volume of content that needs to be investigated in each scenario is generally lower (i.e., in the tens or hundreds, vs. the thousands that get reported by users through in-app reporting flows or detected proactively by platforms). This model provides full control and visibility over the review process and often provides access to information and context for decision making, but has scaling limitations for companies that prefer to stay lean.

Risk management: Specialized review teams of generalists may be used for monitoring and enforcing high-risk and high-priority escalations. These teams may also support a thorough root cause analysis for escalations and influence process improvements for future risk prevention and mitigation.

Hybrid models

Platforms or other online services also use hybrid capacity models in which volunteers from the user community moderate content. In this model, high-level policies regarding what is and isn't allowed may be set by the platform or by the users themselves (e.g., Reddit, Wikipedia). Community volunteers have the option to further define granular guidance or localized policies (see Policy Models) and moderate content accordingly. While some platforms may rely heavily on their user community to moderate content, they may also engage employees to oversee community volunteers.

This model is likely one of the least expensive in terms of financial cost, and possibly the most inclusive, as users make, or are involved in, moderation decisions. However, it has its own challenges. For example, companies might still have the operational lift of managing a volunteer community, and it is unrealistic to expect high levels of consistency in content moderation decisions, since different groups or sections of users may take varying approaches and may not receive training. Additionally, this model relies on unpaid labor, often lacks support for moderators' psychological health and well-being, and risks volunteer burnout.

Article: Reddit: How do I become a moderator?

Article: How Content Moderation and Anti-Vandalism Works on Wikipedia

Key Considerations When Choosing A Capacity Model

Trust and safety teams consider various factors when choosing the appropriate capacity model, depending on the scale and type of content relevant to their platforms.

Scope

Trust and safety teams must often consider the scope of content moderation when choosing a capacity model. Scope includes factors such as purpose, internationalization, duration, investigation needs, and policy maturity and longevity. 

The first step in determining scope is identifying purpose (i.e., what is a given content moderation workflow intended to do?). Examples may include crisis response, measurement, classifier training, or scaled enforcement in response to user reports.

When considering scope, teams must also consider internationalization: whether a given workflow needs coverage in only one or two languages (or markets), or if it needs global coverage. Deciding on the degree of internationalization required will shape whether, where, and how different language and/or market experts will need to be engaged for content moderation.

Duration is also an important factor when determining scope. For example, will the workflow be short-term or long-term? If it’s a short-term workflow, one may engage external vendors on a contract lasting a few months, whereas a long-term workflow may need employees or vendors who can support the workflow for the required duration.

Teams must also consider investigation needs. For example, will a given workflow involve private content such as messages, content with specific privacy settings, or publicly available content? Can decisions be made based on a fixed set of information, or would there be a need for flexibility and autonomy in investigation? Depending on the answers to these questions, there may also be privacy and legal requirements as to which capacity model can or cannot manage the workflow. For example, external and off-site vendors may be able to review and enforce specific user-reported messages, but may not have access to all the information needed to thoroughly investigate the inbox and behavior of a potential child sex offender.

Finally, policy maturity and longevity also shape scope considerations. For example, is a given policy long-standing and mature, or does it involve complex emerging trends and constant evolution? As per the discussion above, if a new policy will need regular changes and frequent training for content moderators, enforcement during the incubation period may be best managed by in-house contractors, who provide more autonomy and agility for handling policy updates.

Supply and Demand

When choosing a capacity model, trust and safety teams must also consider supply and demand, including the following factors:

  • Turnaround time: Does the content need to be reviewed and closed within a given number of hours, or are turnaround times flexible? 
  • Scale: What is the volume of content this workforce is expected to handle (e.g., would the number of photos, videos, accounts, etc., to be reviewed be tens per week or thousands per week)? This can be estimated from historical volume in a similar space and planned strategic investments in a certain area of the business (e.g., proactively detecting violating content). Scale will determine which of the above capacity models best suits the requirement, as a small number of internal employees will most likely be unable to handle moderation of thousands of items of content per week (see the sketch after this list for a rough capacity estimate).
  • Average review time: Does the type of content require a long review time for investigation?
  • Coverage: Does the workflow need to be covered 24×7 or 24×5, or can it be ad-hoc depending on moderator availability?
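
These supply and demand factors combine into a rough headcount estimate. The sketch below is a hypothetical back-of-envelope calculation in Python, not a planning tool; the weekly volume, average review time, productive hours per reviewer, and shift count are illustrative assumptions rather than figures from any real platform.

```python
# Hypothetical back-of-envelope capacity estimate. All figures below are
# illustrative assumptions, not data from any real platform or vendor.

WEEKLY_VOLUME = 50_000          # items expected per week (assumed)
AVG_REVIEW_SECONDS = 60         # average handle time per item (assumed)
PRODUCTIVE_HOURS_PER_WEEK = 30  # per reviewer, after breaks, training, wellness time (assumed)
SHIFTS_FOR_24X7 = 3             # minimum parallel shifts for 24x7 coverage (assumed)

# Total review workload, in hours per week.
workload_hours = WEEKLY_VOLUME * AVG_REVIEW_SECONDS / 3600

# Reviewers needed to clear the queue, ignoring coverage constraints.
reviewers_for_volume = workload_hours / PRODUCTIVE_HOURS_PER_WEEK

# Coverage sets a floor: at least one reviewer per shift,
# even if volume alone would need fewer people.
reviewers_needed = max(reviewers_for_volume, SHIFTS_FOR_24X7)

print(f"Workload: {workload_hours:.0f} review-hours/week")
print(f"Estimated reviewers needed: {reviewers_needed:.0f}")
```

Under these assumed numbers, roughly 28 reviewers would be needed. At tens of items per week the estimate collapses to the coverage floor, which a small employee team can absorb, while at millions of items it grows into a range where outsourced or hybrid models become more practical.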

Cost and Efficiency

Apart from the above factors, trust and safety teams also consider cost and efficiency when choosing a capacity model. For example, engaging tens or hundreds of in-house contractors who have to be managed by employees might be appropriate for short-term workflows (e.g., experiments, policy incubation) that last one to two years, or for long-term workflows dealing with high-severity harm types such as child abuse, where investigations are in-depth, time-consuming, and sensitive. However, for sustained and scaled enforcement involving millions of pieces of content, it might be more efficient to outsource manual content moderation. In particular, when platforms take an Industrial-Centralized approach to policy enforcement (see Creating and Enforcing Policy), enforcement will require thousands of content moderators, in addition to automation where possible.