All trust and safety teams engage with a set of fundamental issues and areas of practice. These chapters describe those areas, discuss how companies generally approach them, and explore the considerations that many trust and safety professionals take into account.
In this chapter, we trace this evolution, from a nascent field focused on catching and removing spammers and trolls to a complex global infrastructure that moderates trillions of pieces of content, manages millions of users, and employs thousands of trust and safety practitioners. We explore the key elements that make up trust and safety, as well as different approaches and their tradeoffs. We also discuss the importance of where a trust and safety team fits within an organization.
Creating and Enforcing Policy
Developing policies to prevent, address, and remediate abusive behavior on digital platforms is a core practice within the field of trust and safety. This chapter covers how policies are developed and the different approaches companies take in developing them. It describes the types of abusive behavior that violate policies and the methods companies use to enforce their policies. The chapter concludes with a review of regional differences and regulatory issues that companies often take into consideration when developing or modifying policies.
Content Moderation and Operations
Content moderation is the process of reviewing online user-generated content for compliance with a digital platform’s policies regarding what may and may not be shared on the platform. Content is moderated and policy enforced either manually by people, through automation, or through a combination of both, depending on the scale and maturity of the abuse and of a platform’s operations. This chapter focuses on different approaches to setting up content moderation teams, how to ensure they’re successful, user appeals, and relevant metrics.
Transparency Reporting
A transparency report is a document released by an internet company that discloses key metrics and information about digital governance and enforcement measures on its platform(s). This chapter covers the history of transparency reports, the types of transparency reports companies may produce, and the challenges and opportunities for companies when developing transparency reports.
Automated Systems and Artificial Intelligence
This chapter unpacks how trust and safety teams build, test, and deploy technologies used for automation, describes common forms of automation, explores challenges associated with developing and deploying automation techniques, and discusses key considerations and limitations of the use of automation. Because many tools—particularly the more sophisticated models designed to spot policy-violating content—rely on AI, this chapter also discusses potential biases in AI models.
Trust & Safety and Law Enforcement
In this chapter, we discuss how law enforcement works, how to process and respond to incoming legal process and emergency requests while distinguishing between reasonable and potentially overreaching law enforcement requests, how an organization may choose to make proactive referrals to law enforcement, the teams and roles associated with law enforcement response, and the opportunities and challenges T&S professionals may face when working with law enforcement.