Policy Development

How do trust and safety professionals determine appropriate behavior when it comes to people’s privacy? Privacy considerations in policy development often fall into one of two buckets: (1) privacy violations in user-generated content (e.g., someone posts personal or private information of another individual), or (2) privacy issues in how the platform or organization handles user data (including proactive monitoring for abusive content via manual or automated review).

Content Policy

There are several factors that go into policy development (see Considerations When Creating Policies). Here are a few to consider for privacy policies: 

Define Privacy

What is considered to be “privacy” or “personally identifiable information”? Common considerations include: 

  • The identifiability of an individual
  • Their expectations of privacy 
  • Whether they belong to a vulnerable group 
  • Whether they’re a public or private figure

Privacy can be highly context-dependent. For example, a photo of an individual posted online that doesn’t include their face makes that person less identifiable. If the photo does clearly show their face, but they’re a well-known celebrity or public figure, it might be argued that they have placed themselves in the public eye and therefore have a reduced expectation of privacy. If the photo is of a famous person in their bedroom in their private home, the expectation of privacy is heightened. If the photo is of an individual who is clearly identifiable in a public space, but that person is a minor, additional consideration of their privacy is warranted to protect their safety.
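
To make these factors concrete, here is a minimal, hypothetical sketch of how a review rubric might encode them. The field names, tiers, and thresholds are illustrative assumptions, not any platform’s actual policy.

```python
from dataclasses import dataclass

@dataclass
class PrivacySignals:
    """Hypothetical per-report signals mirroring the factors above."""
    subject_identifiable: bool   # face, name, or other identifying details visible
    private_setting: bool        # e.g., inside a home vs. a public space
    public_figure: bool          # celebrity, politician, etc.
    vulnerable_group: bool       # e.g., a minor or member of an at-risk group

def privacy_sensitivity(s: PrivacySignals) -> str:
    """Return a coarse sensitivity tier used to prioritize review (illustrative only)."""
    if not s.subject_identifiable:
        return "low"
    if s.vulnerable_group or s.private_setting:
        return "high"    # heightened expectation of privacy
    if s.public_figure:
        return "low"     # reduced expectation in public contexts
    return "medium"

# Example: an identifiable minor photographed in a public space
print(privacy_sensitivity(PrivacySignals(True, False, False, True)))  # -> "high"
```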

Defining Privacy

In this example, YouTube clearly defines what they mean by privacy as a part of their Privacy Guidelines.

Define Abuse

What is considered “abusive,” and how will this influence an enforcement or punishment ladder? Some users may share private information with the intention to abuse and harass the PII holder (doxxing), while others may share private information, including their own, with no abusive intent. Deciding how to determine intent, and whether or how it influences the action taken on the posted PII, is an important consideration for any trust and safety team. 
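
As a rough illustration, an intent-aware enforcement ladder might be encoded as a simple lookup from intent and violation history to an action. The action names and thresholds below are hypothetical, not any platform’s actual ladder.

```python
# A minimal sketch of an intent-aware enforcement ladder.
ENFORCEMENT_LADDER = {
    # (abusive_intent, prior_violations) -> action
    (False, 0): "remove_content_and_educate",
    (False, 1): "remove_content_and_warn",
    (True, 0): "remove_content_and_temporary_suspension",
    (True, 1): "remove_content_and_permanent_suspension",
}

def enforcement_action(abusive_intent: bool, prior_violations: int) -> str:
    """Cap prior violations at 1 so repeat offenders map to the top rung."""
    return ENFORCEMENT_LADDER[(abusive_intent, min(prior_violations, 1))]

print(enforcement_action(abusive_intent=True, prior_violations=3))
# -> "remove_content_and_permanent_suspension"
```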

Balance Values

A platform’s core principles, product goals and user expectations can also impact the scope of content that is disallowed. 

  • Core principles
    • If a company has made privacy a key component of their product, it’s important that the content and product policies developed reflect these principles. 
  • Product goals
    • Privacy by design means that products and features should be built with privacy at the forefront. Policy teams should work to advise and partner with product and engineering teams to meet this end. Defining what constitutes “personally identifiable information” should also be a goal of policy. This will differ depending on the core principles of the platform and the core product.
  • User expectations
    • What promises have been made, and what expectations of privacy do users have? Expectations of privacy are often influenced by what’s already happening in the industry. A great example of this is in direct or private messaging. Users will have an expectation that their conversations will be more protected and less heavily scrutinized by platforms, so the policy must take this into account. 
    • This can lead to new tools being developed that empower the user to control their own experience while using a particular service or feature, such as blocking and muting.

Anti-Abuse Tools Example

Here is an example of a product feature introduced by Instagram that filters offensive words and terms from incoming DMs.
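Such filters are often built around a user-managed list of hidden words or phrases. The following is a minimal sketch of that general approach, not Instagram’s actual implementation; the function names and routing labels are assumptions for illustration.

```python
import re

def build_filter(blocked_terms: list[str]) -> re.Pattern:
    """Compile a case-insensitive, whole-word pattern from a user-managed term list."""
    escaped = (re.escape(term) for term in blocked_terms)
    return re.compile(r"\b(" + "|".join(escaped) + r")\b", re.IGNORECASE)

def route_dm(message: str, pattern: re.Pattern) -> str:
    """Send flagged DMs to a hidden requests folder instead of the main inbox."""
    return "hidden_requests" if pattern.search(message) else "inbox"

hidden_words = build_filter(["spammyoffer", "badword"])
print(route_dm("Hello there!", hidden_words))                  # -> "inbox"
print(route_dm("Check this SpammyOffer now", hidden_words))    # -> "hidden_requests"
```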

References to Common Frameworks & Practices:

Legal landscape 

Globally, regulators have become more proactive in instituting privacy laws and regulations, creating what some refer to as “global privacy regimes.” These range from the General Data Protection Regulation (GDPR) and the Digital Services Act (DSA) in the European Union, to the Digital Personal Data Protection Act in India, to U.S. laws such as the California Consumer Privacy Act (CCPA) and the Children’s Online Privacy Protection Act (COPPA), among many others. These regulations provide the “floor” from which content and product policies can be built. They may also act as a forcing incentive for product and engineering teams to build out privacy tooling and implement “privacy by design” practices. 

Reporting

Establishing clear guardrails around reporting is important for scalability, accuracy in enforcement, and user satisfaction. Here are a few important things to consider: 

  • Reporter: Who has the right to report a violation? Does it matter if the person reporting is the PII owner or not? How might this impact the enforcement or triaging of reports? For many platforms, users who come across content they feel is unsuitable are able to file a report. When considering private information, some companies may require the owner of the PII to make a first-person report, whereas others will allow anyone to file a report, sometimes referred to as “bystander reports.” Who is reporting may impact the amount of information in the report, making it easier or more difficult to accurately assess the situation. 
  • Representatives: First-party or first-person reports can be made by the owner of the personally identifiable information that’s been exposed or their legal representative (e.g., parent, attorney, etc.). When it comes to these types of reports, verifying the identity of the legal representative is important to prevent users from attempting to “game” the reporting system. How this is done varies across companies. Some will ask the representative to present a legal document stating that they are authorized to act on behalf of the individual. Others will request a copy of an official photo ID that connects them to the person they’re reporting on behalf of. 
  • Report requirements: Deciding what information must be included in a report for it to be actionable will support a clearer and speedier review process. Companies differ in how transparent they are about sharing this information externally, but there should be internal guidance on what information is required to accurately assess a report. For first-party private information reports, this may mean requiring proof that the reporter is the PII owner, or requiring context around whether the violator shared the PII with abusive intent. When establishing these criteria, teams should be mindful not to place the burden of proof solely on the reporter, and should look for internal signals or additional types of information that can be used in the review process as well (see the triage sketch after this list).
  • Appeals: Mistakes will happen, so it’s important to have an appeals process where violators are able to provide additional information or plead their case, in line with the Santa Clara Principles. There are plenty of reasons why someone may post another person’s PII (or their own) that are not related to abuse. Reviewers may be more likely to grant an appeal if the violator’s intention was not abusive. Policies exist to educate users, not to punish them, and an appeals system allows this education to take place by making sure users are aware of the rules and have the opportunity to learn from their mistakes.
  • Governance: Policy teams are responsible for writing the rules of engagement and operations teams enforce them, but there may be other teams or individuals involved in removal decisions, especially if a case is particularly sensitive or involves a high-profile user. These parties can range from boards (like the Meta Oversight Board) to community moderators (like those on Wikimedia or Reddit) to external subject matter experts from academia or non-governmental organizations (NGOs).
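
To illustrate how reporter type and evidence requirements might feed into triage, here is a hedged sketch of a report intake record and routing function. The field names, queues, and routing rules are hypothetical, not any platform’s actual workflow.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class PrivacyReport:
    """Hypothetical intake record for a reported PII exposure."""
    reported_content_id: str
    reporter_is_pii_owner: bool
    reporter_is_legal_representative: bool = False
    representative_proof: Optional[str] = None   # e.g., reference to an uploaded document
    context_notes: str = ""

def triage(report: PrivacyReport) -> str:
    """Route the report to a review queue; queue names and rules are illustrative."""
    if report.reporter_is_legal_representative and not report.representative_proof:
        return "request_more_info"          # representative must verify standing first
    if report.reporter_is_pii_owner or report.reporter_is_legal_representative:
        return "priority_privacy_queue"     # first-person reports reviewed first
    return "bystander_queue"                # still reviewed, alongside internal signals

print(triage(PrivacyReport("post_123", reporter_is_pii_owner=True)))
# -> "priority_privacy_queue"
```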

Platform Governance

In addition to Meta’s internal governance work, they receive independent recommendations from the Oversight Board, which considers appeals and reports related to content appearing on Facebook, Instagram, and Threads.

Privacy Edge Cases

When developing internal and user-facing privacy-related policies, platforms may encounter privacy or safety edge cases that require them to debut and “stress test” initial policies. Anticipating that further refinement will be needed after more data and cases have been collected and analyzed, platforms may also want to create special carve-outs for edge cases. Trickier examples and case studies where platforms may encounter challenges with policy development include:

  • Screenshots: Screenshotting is universal, accessible, and culturally (and often legally) accepted: users can capture content that is available to them online, save it to their devices, and share it in many contexts. Whether that content should be considered “public” or “private” raises various questions, gray areas, and edge cases when platforms attempt to develop realistic and enforceable privacy policies around perceived privacy violations in screenshots.
    • Profile screenshots: Users may share screenshots of other users’ profiles on and off-platform for a variety of reasons, some innocuous (for example, a user may share a screenshot of another user’s dating profile with a friend to get dating advice). In some contexts, social or dating profiles may be considered, to some extent, “public” content within the app: they may be searchable or discoverable by any other user, but not searchable on the web. They therefore fall into a gray area as to whether they should be treated as “private” content and, accordingly, whether users should be able to copy or scrape information from a profile to share on or off-platform.
    • Conversation screenshots: On-platform private conversations (e.g., “direct messages” between two users) may carry an expectation of being kept private. 
  • Use of an individual’s likeness without consent: There is often no legal basis prohibiting a user from sharing visual content that depicts another individual’s likeness online, absent some other defined illegal or otherwise harmful context within the content. The legality of photos being taken and subsequently shared, for example, varies widely according to region, the expectation of privacy the photo subject would have had, and who holds the intellectual property rights to the photo. Accordingly, platforms may face challenges taking action (such as limiting access or taking content down) on photos or videos that depict individuals, are shared without consent, and are then reported by the individual depicted; if the content otherwise lacks a defamatory, harassing, privacy-violative, or negative context, it may not fall within the scope of the privacy policy. Examples include:
    • Memes without consent: An individual whose likeness has been used in a meme without consent generally has limited legal or other avenues to prevent the dissemination and monetization of their image online, outside of pursuing lawsuits, which often hinge on whether the meme can be interpreted as “false” or “damaging” and on the intellectual property rights the individual holds in their own image. Platform teams may be limited in actioning meme takedown requests from depicted individuals within the scope of their policies, particularly when a meme does not appear to have a negative context, or when a meme has been so widely disseminated that its meaning has changed over time.
    • Journalistic and public interest content: When it comes to photography captured in public spaces, photographers generally have broad rights to take and disseminate photos without obtaining specific consent or other consideration from subjects captured in the public setting. As a result, conflicts may arise on platforms over public photography, such as news-related or public interest content depicting natural disasters, civil unrest, war, crimes, or other current events, where subjects may appear particularly distressed or vulnerable but may not have consented to being photographed in the first place or to having their image shared online.
    • Deepfakes: The production of deepfakes can have dire consequences for the safety and privacy of the individuals impacted by them. They can be as innocuous as images or videos created purely for entertainment, or as harmful as content used to spread political misinformation, depict inappropriate imagery and scenes, or even portray criminal activity.

Edge Case Policy Examples

Within many of Meta’s community policies related to privacy violations, reports must be submitted by the person whose privacy has been violated. In the case of graphic violence content depicting an individual’s death, a takedown request may be submitted by a family member on the individual’s behalf.

Example: Policies related to depictions of minors

The act of simply posting a photo of a minor online, where the depiction is not CSAM and otherwise lacks a harassing, threatening, or negative context, may be considered lawful in many regions. However, some platforms, including Instagram, have a policy of removing images of minors shared by others upon receipt of a legitimate request from a parent, particularly where the minor appears to be under 13, potentially in consideration of minor privacy laws or a lack of parental consent regarding the depiction.

Product Policy

Data Handling Considerations

Handling user data is an inevitability and a necessity for trust and safety teams. UGC is itself a form of user data: it is collected, stored, retained, and controlled by the platform and, where necessary, reviewed and actioned by safety teams. 

But within data storage systems, moderation review platforms, user databases, and the other tools that platform teams use in the course of business, UGC is generally linked inextricably to various other types of data: the user account that shared the content; that account’s history, activity, and actions on the platform; personal data the user has provided to the platform; and a great deal of other data and metadata associated with a user, an item of content, or the surfaces within the platform where users interact or content is shared.

Trust and safety teams’ handling of user data is often motivated by a valid business case or compliance requirement, and may be critical to protecting user or platform safety. For example, data handling activity may support day-to-day content review and network investigations, data analyses that can help improve product safety, or compliance obligations such as transparency reporting. 

Nonetheless, trust and safety teams have as much responsibility to protect user safety and privacy through data handling activities happening behind the scenes as they do in their efforts to protect user experience through on-platform moderation. This goal should accordingly be infused into internal practices related to user privacy.

In many cases, such as when a team is handling or accessing personal data or sensitive data, it may be legally required, or at minimum strongly recommended for privacy compliance, to develop robust privacy-centric operational policies. It’s critical to understand the underlying regulatory requirements that apply to a platform, which may vary depending on the region(s) where the platform operates, its size and regional audiences, and its processing activities, before beginning new data collection activities or changing how existing data systems work. Certain laws, such as the EU’s GDPR, mandate a “privacy by design” approach to user protections, with specific handling, retention, transparency, auditing, and user control requirements for user data. These encompass concepts such as data controller and processor roles and responsibilities; account access, deletion, and portability; automated decision-making and profiling; the age of privacy consent; and overarching user safety. 

A platform’s overarching community values, philosophy, and user experience goals can help inform the development of robust and user-forward privacy policies. Additionally, finding alignment between a community’s existing privacy design and culture and global privacy rights frameworks such as the UN’s Privacy Rights framework, industry best practices for transparency and control, and privacy by design practices can also help platform teams to develop and refine community-fit internal and user-facing privacy policies.

Building policies and operational processes for internal data handling is an ongoing cross-functional effort. Privacy and Legal teams may proactively lead the charge on advancing platform efforts around trust and safety data access and maintenance through self-auditing, and they are generally key partners in defining any legal requirements relevant to data handling. Engineering and Data teams are often also instrumental partners in answering questions about data structures and security.

Data Handling: Self-Auditing Checklist

Prior to handling any user data for a trust and safety use case (whether that means collecting, accessing, or retaining data, or taking actions in response to data), and particularly for new data collection, it is good practice, and valuable for internal records maintenance, to self-audit during the planning phase and to document the business case for using the data. This documentation should include relevant details about the data handling, including the scope and relevance of the data as well as the privacy and security considerations involved. 

The following example checklist can help guide self-auditing and documentation when developing and refining internal team privacy policies and data handling practices in the trust and safety context (a structured sketch of such an audit record follows the checklist). 

  1. What: Defining the data and its scope and relevance
  • What does the data consist of? What is the scope of the data (e.g., individual items of data) being collected/detailed? 
  • Has the data collection and processing activity been appropriately minimized in alignment with the business case and applicable privacy risks? (Does the scope of the data align with the business case for holding the data?)
  • Is the data considered “content”? Is it considered personal data? Is it considered sensitive data, biometric data, or another category of data which may have privacy implications?
  • Who is defined to “own” the data (according to site policies or relevant laws)—the user, the platform, and/or another party?
  2. Who: Who are the parties associated with or handling the data?
  • Who are the data subject(s) associated with this data? Are there any considerations for them as data subjects (for example, minors)? 
  • Who within the platform team should have access to the data from a business case perspective? Who will have access to the data (e.g., T&S teams, data teams, other teams)?
  • Are vendors or other third parties (for example, BPOs, vendor platforms, or data providers) involved in collecting, accessing, or managing the data?
  3. Where: Security & access considerations of the data
  • Given that there may be regional considerations for handling data, where (hosting country/region) is the data stored? Could any associated activity fall under “transferring” the data across borders?
  • Where (in terms of platforms) is the data stored? 
  • Who has access/permission to access the data? What are the security and access controls for this data?
  • What is it possible to do once this data is accessed? Are the actions “reversible”? Are there any destructive and irreversible actions associated with access and potential action on this data?
  4. When: Data retention
  • What is the data retention policy for the data? How does it align with retention policies for similar data on the platform?
  • Does the data retention policy and handling and usage of the data align with other platform policies and practices?
  • Does the data retention policy align with industry practices and incorporate applicable regulatory requirements (including any requirements specific to regions where processing activity is occurring)?
  5. Why: Justification for data processing
  • What is the business case for collecting, handling, accessing, and/or retaining the data?
    • Is this data already collected, maintained, retained, or accessed for an existing (separate) business purpose, or will it be collected solely for this purpose?
    • How will this data be used? 
    • How will this activity improve user experience or user outcomes (or other defined business purpose relevant to user safety)?
  • How does this business case align with industry practices, industry norms, and best practices? 
  • Are there any regulatory considerations or conflicts with this data handling?
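
One way to capture the answers to this checklist is as a structured audit record stored alongside the data processing activity. The following sketch is illustrative only; the field names and example values are assumptions, not a compliance template.

```python
from dataclasses import dataclass, field

@dataclass
class DataHandlingAudit:
    """Hypothetical self-audit record mirroring the What/Who/Where/When/Why checklist."""
    # What
    data_description: str
    data_categories: list[str]              # e.g., ["content", "personal data"]
    minimized_to_business_case: bool
    # Who
    data_subjects: str
    internal_teams_with_access: list[str]
    third_parties: list[str] = field(default_factory=list)
    # Where
    storage_region: str = ""
    cross_border_transfer: bool = False
    access_controls: str = ""
    # When
    retention_period_days: int = 0
    # Why
    business_case: str = ""
    regulatory_considerations: str = ""

audit = DataHandlingAudit(
    data_description="Reported DMs queued for privacy review",
    data_categories=["content", "personal data"],
    minimized_to_business_case=True,
    data_subjects="Reporters and reported users, including possible minors",
    internal_teams_with_access=["T&S operations"],
    storage_region="EU",
    retention_period_days=90,
    business_case="Enforcement of the private-information policy",
    regulatory_considerations="GDPR data minimization and retention limits",
)
print(audit.retention_period_days)  # -> 90
```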