Balancing Safety & Privacy

Trust and safety teams often need to navigate tradeoffs with privacy, both when handling user-generated content (UGC) that abuses others' privacy and when shaping internal privacy policies and operations, in order to prioritize community or public safety and to meet relevant legal and privacy requirements. Safety teams weigh the balance between safety and privacy frequently and carefully, in varied contexts and at different scales.

At a “macro” level, privacy and safety are balanced when establishing the policies, processes, guidelines, and best practices that an entire platform safety team follows or that are built into platform safety infrastructure. At a “micro” level, an individual safety agent may need to decide specific user cases that call for thoughtful, sometimes subjective, judgment about how best to uphold privacy principles when no robust policy or past precedent exists to guide them.

While users may assume that their private conversations will not be monitored by any other person or entity, platforms often reserve the right in their terms of service and other policies to review, remove, or take action on any content that users upload, whether it is publicly visible or shared in a private context. Because trust and safety work often requires in-depth review of user content to investigate violations, trust and safety teams may review private conversations between users, either through manual review in occasional one-off investigations or through ongoing human monitoring or automated filtering of some or all platform content surfaces.

Many factors shape how trust and safety and product teams balance privacy safeguards with content review, including the platform's risk appetite, how users engage on the platform, and the types of content they share. For example, if a platform allows users to share images and videos in direct messages, there is a heightened risk that illegal material, such as child sexual abuse material (CSAM), will be shared.
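To make this tradeoff concrete, the sketch below illustrates one widely used compromise: automated hash matching, which screens attachments in private messages against hashes of known illegal images so that only matching content is surfaced for human review. The code is a minimal, hypothetical illustration (the hash list, function names, and use of SHA-256 are assumptions for readability); real deployments generally rely on perceptual hashing and vetted industry hash databases.

```python
# Purely illustrative sketch, not any platform's actual pipeline. An attachment
# shared in a direct message is compared against a list of hashes of known
# illegal images, so that a match can be escalated without any human reading
# the surrounding private conversation. The hash set and function names are
# hypothetical; production systems typically use perceptual hashing and vetted
# industry hash databases rather than exact SHA-256 matching.
import hashlib

KNOWN_ABUSIVE_HASHES: set[str] = set()  # hypothetical: populated from a vetted hash database


def sha256_of(image_bytes: bytes) -> str:
    """Compute a hex digest of the attachment without otherwise inspecting it."""
    return hashlib.sha256(image_bytes).hexdigest()


def screen_dm_attachment(image_bytes: bytes) -> str:
    """Return a routing decision: deliver the message, or block and escalate for review."""
    if sha256_of(image_bytes) in KNOWN_ABUSIVE_HASHES:
        return "block_and_escalate"  # queued for specialist review and any required reporting
    return "deliver"  # no match: the private message is delivered without human review
```

The design point is that the filter makes a narrow routing decision, so the vast majority of private conversations are never read by a person; only content matching the known-bad list reaches a reviewer.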

Platforms must also navigate privacy considerations while complying with legal requirements in various jurisdictions to retain and disclose user content or user data for the purposes of law enforcement and government investigations. 

The ongoing challenge of balancing users' privacy rights and interests against user safety is highlighted sharply in the current discourse around platforms intended to offer a fully private communications experience, including end-to-end encrypted platforms and ephemeral content platforms. Based on these platforms' terms, policies, and publicly available information sources such as knowledge bases and marketing materials, users may expect, or may misinterpret, that data they share on the platform will be permanently deleted, or that encryption prevents review by any party other than the conversation participants.

However, platforms may have conflicting legal obligations, and resulting operational activities, that in practice require them to retain, investigate, or disclose content to law enforcement. A platform's privacy-relevant policies and activities, such as its data retention practices for content and personal data, should be transparently disclosed in its terms and policies, including its privacy policy. The reality, though, is that platform terms and policies may not be visible or accessible to many users, or they may be written or presented in ways that are difficult for most users to review and understand.

In short, balancing safety and privacy is challenging. T&S professionals must be mindful of a host of factors: their platform's features, how users interact, the content they share, and how the platform approaches data privacy and encryption.