There are fundamental issues and areas in which all trust and safety teams engage. The chapters that follow describe these areas, discuss how companies generally approach them, and explore the considerations that many trust and safety professionals take into account.
Industry Overview
In this chapter, we trace this evolution, from a nascent field focused on catching and removing spammers and trolls to a complex global infrastructure that involves moderating trillions of bits of data, managing millions of users, and hiring thousands of trust and safety practitioners. We explore the key elements that make up trust and safety, as well as different approaches and their tradeoffs. We also discuss the importance of where a trust and safety team fits within an organization.
Creating and Enforcing Policy
Developing policies to prevent, address, and remediate abusive behavior on digital platforms is a core practice within the field of Trust and Safety. This chapter covers how policies are developed and the different approaches companies take in doing so. It describes the types of abusive behavior that violate policies and the methods companies use to enforce those policies. The chapter concludes with a review of regional differences and regulatory issues that companies often take into consideration when developing or modifying policies.
Content Moderation and Operations
Content moderation is the process of reviewing online user-generated content for compliance with a digital platform’s policies about what may and may not be shared on the platform. Content can be moderated and policies enforced manually by people, through automation, or through a combination of the two, depending on the scale and maturity of the abuse and of the platform’s operations. This chapter focuses on different approaches to setting up content moderation teams, how to ensure they’re successful, user appeals, and relevant metrics.
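To make the combination of people and automation concrete, here is a minimal sketch, in Python, of how a platform might route content between automated action and a human review queue. The thresholds, score values, and the `route_content` helper are illustrative assumptions for this example, not a standard or recommended implementation.

```python
from dataclasses import dataclass

# Illustrative thresholds; real systems tune these per policy area and abuse type.
AUTO_REMOVE_THRESHOLD = 0.95
HUMAN_REVIEW_THRESHOLD = 0.60

@dataclass
class ModerationDecision:
    action: str   # "remove", "human_review", or "allow"
    score: float  # model confidence that the content violates policy

def route_content(violation_score: float) -> ModerationDecision:
    """Route a piece of content based on an automated violation score.

    High-confidence violations are actioned automatically; uncertain cases
    are queued for human moderators; low scores are left up.
    """
    if violation_score >= AUTO_REMOVE_THRESHOLD:
        return ModerationDecision("remove", violation_score)
    if violation_score >= HUMAN_REVIEW_THRESHOLD:
        return ModerationDecision("human_review", violation_score)
    return ModerationDecision("allow", violation_score)

# Example: a borderline post is sent to a human review queue.
print(route_content(0.72))  # ModerationDecision(action='human_review', score=0.72)
```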
Transparency Reporting
A transparency report is a document released by an internet company that discloses key metrics and information about digital governance and enforcement measures on its platform(s). This chapter covers the history of transparency reports, the types of transparency reports companies may produce, and the challenges and opportunities for companies when developing transparency reports.
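As a rough illustration of the kind of aggregate figures such a report might contain, the sketch below tallies a hypothetical enforcement log by policy area. The log entries, policy categories, and the `summarize_for_report` helper are invented for this example and do not reflect any particular company's reporting format.

```python
from collections import Counter

# Hypothetical enforcement log: (policy_area, action_taken) pairs.
enforcement_actions = [
    ("spam", "content_removed"),
    ("spam", "content_removed"),
    ("harassment", "account_suspended"),
    ("hate_speech", "content_removed"),
]

def summarize_for_report(actions):
    """Count enforcement actions by policy area, the kind of aggregate
    figure a transparency report might disclose for a reporting period."""
    return Counter(policy for policy, _ in actions)

print(summarize_for_report(enforcement_actions))
# Counter({'spam': 2, 'harassment': 1, 'hate_speech': 1})
```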
Automated Systems and Artificial Intelligence
This chapter unpacks how trust and safety teams build, test, and deploy technologies used for automation, describes common forms of automation, explores challenges associated with developing and deploying automation techniques, and discusses key considerations and limitations of the use of automation. Because many tools—particularly the more sophisticated models designed to spot policy-violating content—rely on AI, this chapter also discusses potential biases in AI models.
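As a simple illustration of one way bias might surface, the sketch below compares a hypothetical classifier's false positive rate across two content groups. The evaluation records, group labels, and the `false_positive_rate_by_group` helper are assumptions made for this example rather than a standard evaluation method.

```python
from collections import defaultdict

# Hypothetical evaluation records: (group, true_label, predicted_label),
# where 1 means "violates policy". Groups might be languages or regions.
records = [
    ("en", 0, 0), ("en", 0, 1), ("en", 1, 1), ("en", 0, 0),
    ("es", 0, 1), ("es", 0, 1), ("es", 1, 1), ("es", 0, 0),
]

def false_positive_rate_by_group(rows):
    """False positive rate per group: non-violating items wrongly flagged,
    divided by all non-violating items in that group."""
    false_positives = defaultdict(int)
    negatives = defaultdict(int)
    for group, true_label, predicted in rows:
        if true_label == 0:
            negatives[group] += 1
            if predicted == 1:
                false_positives[group] += 1
    return {g: false_positives[g] / negatives[g] for g in negatives}

print(false_positive_rate_by_group(records))
# {'en': 0.33..., 'es': 0.66...} -- a gap like this would prompt further review.
```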
Trust & Safety and Law Enforcement
In this chapter, we discuss how law enforcement works, how to process and respond to incoming legal process and emergency requests while distinguishing between reasonable and potentially overreaching law enforcement requests, and how your organization may choose to make proactive law enforcement referrals. We also cover the teams and roles associated with law enforcement response, and the opportunities and challenges T&S professionals may face when working with law enforcement.
How Trust & Safety Teams Use Data
The aim of this chapter is to help the reader achieve a basic understanding of how data is gathered, used, and communicated by trust and safety practitioners.
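For example, one common kind of measurement is prevalence, roughly the share of sampled content views that involve violating material. The sketch below computes it over an invented sample; the sample data and the `prevalence` helper are illustrative only, not the chapter's methodology.

```python
# Hypothetical sample of reviewed content views: True means the viewed item
# was later judged to violate policy.
sampled_views = [False, False, True, False, False, False, True, False, False, False]

def prevalence(sample):
    """Estimated share of sampled content views involving violating content."""
    return sum(sample) / len(sample)

print(f"Estimated prevalence: {prevalence(sampled_views):.1%}")  # 20.0%
```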
Legal & Regulatory Considerations
The goal of this chapter is to provide a general overview of key legal and regulatory considerations. The intended audience includes newcomers to Trust & Safety (T&S), company founders unfamiliar with governance, risk, and compliance (GRC) challenges, and maturing teams addressing emerging legal and regulatory concerns. This is an immense topic. For the sake of simplicity, this chapter focuses primarily on the EU, the U.S., and India as key markets.
Investigations, Intelligence & Risk Mitigation
Given the focus of the T&S Curriculum, this chapter provides an overview of investigation and intelligence processes as scoped to the field of trust and safety, primarily in the internet and technology space, including policies, product development and usage, Community Guidelines, and Terms of Service.
External Engagement
External engagement is the process by which a platform seeks input, advice, and feedback from individuals or organizations outside the company, or conducts formal research with them. The goal of this chapter is to explain why external engagement is necessary, identify with whom T&S teams may engage, provide tips for facilitating external engagement, and note important considerations.
Safety by Design
Safety by Design (SbD), like similar design paradigms such as Privacy by Design or Security by Design, suggests that organizations consider risks of harm to users or stakeholder groups before they occur, rather than after the fact. The goals of this chapter are twofold: (1) to help readers gain an understanding of the SbD approach and standard frameworks for assessing risks of harm, and (2) to provide an overview of implementing an SbD approach within an organization.
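Many risk assessment frameworks score potential harms on likelihood and severity. The sketch below shows one generic way such scoring might look; the `Risk` class, the 1-to-5 scales, and the example harms are invented for illustration and are not drawn from any particular SbD framework.

```python
from dataclasses import dataclass

@dataclass
class Risk:
    harm: str
    likelihood: int  # 1 (rare) to 5 (almost certain)
    severity: int    # 1 (minor) to 5 (catastrophic)

    @property
    def score(self) -> int:
        # A simple likelihood-times-severity score used to rank risks.
        return self.likelihood * self.severity

# Hypothetical risks assessed before launching a new messaging feature.
risks = [
    Risk("unsolicited contact from strangers", likelihood=4, severity=3),
    Risk("sharing of illegal content", likelihood=2, severity=5),
]

# Rank the risks so mitigation work can be prioritized.
for risk in sorted(risks, key=lambda r: r.score, reverse=True):
    print(f"{risk.harm}: score {risk.score}")
```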