All trust and safety teams engage with a common set of fundamental issues and areas. These chapters describe those areas, discuss how companies generally approach them, and explore the considerations that many trust and safety professionals take into account.
In this chapter, we trace this evolution, from a nascent field focused on catching and removing spammers and trolls to a complex global infrastructure that moderates trillions of pieces of content, manages millions of users, and employs thousands of trust and safety practitioners. We explore the key elements that make up trust and safety, as well as different approaches and their tradeoffs. We also discuss the importance of where a trust and safety team fits within an organization.
Creating and Enforcing Policy
Developing policies to prevent, address, and remediate abusive behavior on digital platforms is a core practice within the field of trust and safety. This chapter covers how policies are developed and the different approaches companies take when developing them. It describes the types of abusive behavior that violate policies and the methods companies use to enforce them. The chapter concludes with a review of regional differences and regulatory issues that companies often take into consideration when developing or modifying policies.
Transparency Reports
A transparency report is a document released by an internet company that discloses key metrics and information about digital governance and enforcement measures on its platform(s). This chapter covers the history of transparency reports, the types of transparency reports companies may produce, and the challenges and opportunities for companies when developing transparency reports.
Automated Systems and Artificial Intelligence
This chapter unpacks how trust and safety teams build, test, and deploy technologies used for automation, describes common forms of automation, explores challenges associated with developing and deploying automation techniques, and discusses key considerations and limitations of the use of automation. Because many tools—particularly the more sophisticated models designed to spot policy-violating content—rely on AI, this chapter also discusses potential biases in AI models.