Fundamentally, trust and safety is about enabling users to have the best experiences possible with a product or service. It is about creating an environment or experience that proactively and affirmatively helps users achieve their objectives, whether connecting with friends, selling craft goods, getting from point to point, or booking a family vacation. A critical component of achieving that objective is preventing, detecting, and responding to abuse. Depending on the company, trust and safety may also include helping the company live up to its declared values, such as advocating for users’ freedom of expression.
Not all trust and safety teams are the same. They may look different from company to company, as each will have different goals, missions, resources, and activities. How a company defines its trust and safety needs is largely dependent on various factors (including but not limited to):
- The type of product or service (e.g., social media network, digital marketplace, search engine);
- The types of abuse, misuse, and disruptive conduct the company must address;
- The set of values that the company upholds;
- The demographics of its customer base;
- The countries in which it operates;
- The size and maturity level of the company.
For example, at Google, trust and safety largely emerged from an effort to detect and prevent actors seeking to “game the algorithm” for search ranking (usually for commercial purposes), as well as efforts to determine and enforce policies for acceptable advertisements and business partners. For Facebook and other social media companies, trust and safety focused on tackling harmful and unwanted UGC, including pornography that would populate a user’s profile or feed. For e-commerce marketplaces like Amazon, trust and safety teams were often built to respond to fraudulent or harmful listings. Finally, for review sites like Yelp and Tripadvisor, trust and safety teams were created to respond to fraudulent, fake, misleading, or off-topic user reviews and listings.
These examples don’t follow a standard pattern of trust and safety development or structure. In fact, some trust and safety teams are created at product launch while, in other instances, trust and safety teams are formed in reaction to user or customer complaints, media reports, or pressure from partners, regulators, or advertisers. Start-ups and small platforms with few services and tight resourcing must design their approaches to trust and safety with particular care, as they often have less organizational resilience and face greater relative opportunity costs than larger businesses.
Furthermore, organizations with multiple products and services might have a single, centralized trust and safety mission or several disparate sets of trust and safety teams (serving different products and services) with different missions. These differences translate to different organizational structures, remits, and policy/enforcement regimes—even within the same organization.
For example, Google’s approach to trust and safety differs dramatically between its search product and its ads products. One of Google Search’s core values is to “maximize access to information” and thus, to the extent possible, Google Search aims to keep information openly accessible and to “only remove content from [their] search results in limited circumstances, such as compliance with local laws or site owner requests.” While Google Search still requires a trust and safety team to assess and respond to government and site owner requests, as well as remove (or de-index) sites that violate a narrow set of content policies, the breadth of its policies is limited to honor its core value. By contrast, Google Ads, a product where advertisers bid to display ads, takes a different approach to trust and safety. Google Ads has a more comprehensive set of policies, ranging from prohibited content (e.g., hate speech) to editorial and technical requirements, and trust and safety plays a fundamental role in creating and enforcing them.
No matter the overall mission, trust and safety teams must always balance multiple and often competing objectives. Protecting users and ensuring trust in products may come into conflict with other company objectives, such as product growth and marketing, as well as company or societal values, such as free expression and user privacy. Policy and enforcement decisions that protect end users, for example, may negatively impact developers or advertisers by making the process of publishing an app or an ad more onerous. Balancing competing—and, at times, opposing—needs can pose a particular challenge for a technology company seeking to serve multiple audiences that include end users, who seek a safe experience when interacting with the service, and the constellation of platform partners, creators, developers, advertisers, and others that build on, advertise on, or interact with a service.