Policies play an essential role in establishing guidelines and standards that users and third parties must follow when interacting with a service or product. They are also key to ensuring a company complies with applicable laws and regulations, to creating a safe environment, and to protecting users from harmful behavior and content. Privacy policies usually define how the company itself collects, processes, and stores data obtained from its users or clients. Depending on the service offered, some companies may also establish privacy policies that define the responsibilities users, customers, and third parties have when collecting data from, or sharing data with, a specific product or service.
However, policies alone do not protect users from harmful behaviors. As a result, a growing number of companies are investing in processes and tools that support proactive monitoring and enforcement of those policies in cases of non-compliance. This section explores detection, prevention, and approaches to enforcement, along with examples of privacy violations.
Approaches to Enforcement
Who should be able to action content that violates a platform’s privacy-related policies? The answer depends greatly on the platform, its policies, and its structure of governance, including its balance between platform intervention and community moderation tools. Policies and actions related to privacy also vary greatly depending on the real-world harm or other consequences of the privacy-related abuse and its resulting impact on user experience.
For many platforms, making judgments on potentially privacy-violative content, and even defining privacy as a distinct policy category, can be a gray area because assessing privacy abuses is subjective and privacy inherently overlaps with other policy areas. Policy areas that overlap or intersect with privacy include sexual exploitation (for example, sharing non-consensual intimate imagery, or NCII), doxxing (sharing others’ personal information without consent), harassment (for example, sharing another user’s unflattering image without consent to mock them), impersonation (for example, copying someone’s profile photo and representing them on the platform without consent), and intellectual property or image rights (for example, using a celebrity’s likeness without their consent).
In some cases, platforms may make user controls and other self-serve tools available for users to control their own privacy or likeness. One example common to many photo-sharing platforms is an untagging feature for users who want to remove tags or otherwise disassociate themselves from photos, videos, or other content that other users upload, tag, or link to. Another privacy tool, more relevant to sexual exploitation, is hash-sharing services for non-consensual intimate imagery such as StopNCII, which allow victims to proactively hash intimate images of themselves so that matching uploads can be detected and taken down automatically if they appear on the platform in the future.
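The hash-matching idea behind such services can be illustrated with a minimal sketch. This is not StopNCII’s actual implementation: real services rely on perceptual hashes (which tolerate resizing and re-encoding) and shared industry databases, whereas the example below uses a plain cryptographic digest and in-memory storage purely to show the flow. All names are illustrative.

```python
# Minimal sketch of hash-based proactive blocking, assuming a plain SHA-256
# digest as a stand-in for the perceptual hashes real services use.
import hashlib

# Hashes submitted proactively by victims; the platform never stores the images.
blocked_hashes: set[str] = set()

def register_victim_hash(image_bytes: bytes) -> None:
    """Store only the digest of an image a victim wants blocked."""
    blocked_hashes.add(hashlib.sha256(image_bytes).hexdigest())

def should_block_upload(image_bytes: bytes) -> bool:
    """Check a new upload against the registered digests before it goes live."""
    return hashlib.sha256(image_bytes).hexdigest() in blocked_hashes

# Usage: register once, then check every new upload.
register_victim_hash(b"...image bytes supplied by the victim...")
print(should_block_upload(b"...image bytes supplied by the victim..."))  # True
```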
Where users cannot take steps to protect their own privacy, platform trust and safety teams may carry the burden of enforcing policies and assessing user reports related to privacy. Platforms may also engage proactive and/or automated moderation solutions to detect privacy abuses, for example photo detection to surface potentially harassing images of individuals shared without their consent, or machine-learning-based assessment of patterns in text to detect phone numbers, addresses, or other personal information being doxxed.
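As a rough illustration of automated detection of personal information in text, the sketch below uses simple regular expressions as a stand-in for the machine-learning approaches described above. The patterns, labels, and example text are assumptions and would need considerable tuning, plus human review of context, in practice.

```python
# Minimal sketch of rule-based detection of personal information in text.
# The patterns are illustrative (US-style phone numbers and street addresses)
# and would produce false positives and negatives in production.
import re

PII_PATTERNS = {
    "phone_number": re.compile(r"\b(?:\+?1[\s.-]?)?\(?\d{3}\)?[\s.-]?\d{3}[\s.-]?\d{4}\b"),
    "street_address": re.compile(r"\b\d{1,5}\s+\w+(\s\w+)*\s+(Street|St|Avenue|Ave|Road|Rd)\b", re.IGNORECASE),
}

def detect_pii(text: str) -> dict[str, list[str]]:
    """Return matches so a reviewer (or downstream model) can assess context."""
    hits: dict[str, list[str]] = {}
    for label, pattern in PII_PATTERNS.items():
        matches = [m.group(0) for m in pattern.finditer(text)]
        if matches:
            hits[label] = matches
    return hits

print(detect_pii("Her number is 415-555-0132, she lives at 12 Oak Street"))
```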
Assessing privacy violations is often not a straightforward process for safety teams. Confirming context and the validity of evidence is critical for privacy violations, and can be difficult during an investigation unless a great deal of upfront context is provided. For example, if a phone number is shared on a public surface of the platform, more context is needed to confirm doxxing: the user may have voluntarily shared their own phone number, which may not violate platform policy.
Given these gray areas, transparency around trust and safety actions is recommended. For example, platforms can send an in-app or email notification to the uploader that clearly explains the privacy-related nature of the violation, names any specific privacy sub-policy violated, points where possible to the specific content or interaction that violated the policy, and links to user education about the platform’s privacy policies. Offering an appeals mechanism can also help clarify edge cases. Appeals can additionally inform teams about platform-specific trends in privacy abuse and provide learnings for developing better, more specific privacy policies for the platform’s community.
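One way to make this transparency concrete is to represent the notification as a structured record that always carries the policy, sub-policy, affected content, and appeal path. The sketch below is a hypothetical shape for such a record; the field names and URLs are illustrative, not any platform’s real schema.

```python
# Minimal sketch of a structured privacy-enforcement notification.
# All field names and example values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class PrivacyEnforcementNotice:
    policy: str               # top-level policy area, e.g. "Privacy"
    sub_policy: str           # the specific sub-policy violated
    content_reference: str    # the specific content or interaction, where possible
    action_taken: str         # e.g. "content removed", "account warned"
    education_url: str        # link to the platform's privacy policy education
    appeal_url: str           # where the uploader can contest the decision

notice = PrivacyEnforcementNotice(
    policy="Privacy",
    sub_policy="Sharing personal contact information without consent",
    content_reference="post/123456",
    action_taken="content removed",
    education_url="https://example.com/policies/privacy",
    appeal_url="https://example.com/appeals/123456",
)
```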
Detecting, Reacting to, or Preventing a Privacy Violation
Detection can be defined as the discovery of something that is meant to be concealed, whereas a violation is an act that breaches a rule. The concept of a policy violation implies that a decision-making process occurred: either a human or an automated system determined that an action breaches a policy. As technology products keep scaling and expanding their user base, detection plays a crucial role in ensuring policy violations are caught at an early stage, or flagged before someone notices and reports them. So how do companies address the challenge of catching policy violations at scale, as soon as they happen?
Proactive detection, covered in depth in Automated Systems and Artificial Intelligence, helps companies identify, isolate, and mitigate data misuse and privacy content violations before a violation is reported. Examples include leveraging machine learning and predictive models that can alert trust and safety teams to anomalies in data access or to violating content shared on a platform. Reviewers also play an important role in proactive detection, for example by performing sweeps of suspicious content before it goes live, or by labeling text or content to train machine learning models.
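As a simplified illustration of anomaly detection on data access, the sketch below flags an account whose daily access volume is far above its own historical baseline. A real system would use richer features and trained models; the threshold and helper name here are assumptions.

```python
# Minimal sketch of flagging anomalous data access for trust and safety review,
# assuming a simple per-account baseline and an illustrative z-score threshold.
from statistics import mean, stdev

def is_access_anomalous(daily_record_counts: list[int], todays_count: int,
                        z_threshold: float = 3.0) -> bool:
    """Flag today's access volume if it is far above the account's history."""
    if len(daily_record_counts) < 7:      # not enough history to judge
        return False
    baseline = mean(daily_record_counts)
    spread = stdev(daily_record_counts) or 1.0
    z_score = (todays_count - baseline) / spread
    return z_score > z_threshold

history = [120, 98, 130, 110, 105, 125, 115]
print(is_access_anomalous(history, 5_000))  # True: likely bulk access worth reviewing
```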
Reacting to a policy violation, on the other hand, is a more straightforward process. This approach focuses on taking quick action to mitigate the adverse consequences of a violation that has been brought to the company’s attention, and taking steps to prevent its recurrence. Reactive enforcement can be triggered by a variety of sources: some companies enable their users to report a potential content violation, which is then assessed by a reviewer or automatically. Media reports and privacy watchdog reports are other avenues through which companies are notified of potential privacy violations. Some companies also set up “bug bounties” that enable security researchers to report privacy vulnerabilities or other product policy violations. Lastly, internal reports from teams such as sales, product, or other trust and safety teams are a useful intake source for teams investigating policy violations. The challenges of these referrals vary by entry point: user reports can generate a large volume of content that needs to be prioritized and reviewed, and they may also produce signal noise and false positives. Media, privacy watchdog, and bug bounty reports usually yield a higher proportion of true positives and require urgent attention because of the potential risk to users and to the company’s reputation.
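A simple way to reason about these intake sources is to weight reports by source and severity when building a review queue. The sketch below is illustrative only; the weights and fields are assumptions, and production triage typically also considers reach, recency, and reporter reliability.

```python
# Minimal sketch of triaging reports from different intake sources.
# Source weights and severity values are illustrative assumptions.
REPORT_SOURCE_WEIGHT = {
    "media": 3.0,             # high true-positive rate, reputational urgency
    "privacy_watchdog": 3.0,
    "bug_bounty": 2.5,
    "internal_team": 2.0,
    "user_report": 1.0,       # high volume, noisier signal
}

def triage(reports: list[dict]) -> list[dict]:
    """Order reports so the riskiest ones are reviewed first."""
    return sorted(
        reports,
        key=lambda r: REPORT_SOURCE_WEIGHT.get(r["source"], 1.0) * r["severity"],
        reverse=True,
    )

queue = triage([
    {"id": 1, "source": "user_report", "severity": 2},
    {"id": 2, "source": "bug_bounty", "severity": 3},
    {"id": 3, "source": "media", "severity": 3},
])
print([r["id"] for r in queue])  # [3, 2, 1]
```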
Common Prevention Measures
In addition to detecting and reacting to privacy violations, companies can invest in prevention. Companies sharing their user data with third parties are encouraged to establish:
- Robust privacy programs and third-party assessments that can mitigate the risk of user data being misused.
- Processes for vetting third-party partners for compliance before sharing data with them, regularly reviewing and monitoring their data practices, and assessing their data security standards.
- A “know your customer” (KYC) program to vet users and customers before they start using a service. This will ensure the customer provides authentic information about their identity, and that they agree to the product terms and policies, which establish responsibilities and consequences in case of a policy violation.
- User education resources that support risk prevention, for example by reducing the likelihood of privacy risks such as phishing and content privacy misuse.
- Processes for building products with privacy in mind, with user experience designs that clearly highlight user consent and explain where the data is going, who is receiving it, and how users can exercise their data rights (see the sketch below). This can reduce the likelihood of users inadvertently oversharing their data, strengthen trust in the product, and help users better understand their rights to privacy.
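To make the consent-by-design point concrete, the sketch below records what data a user agreed to share, with whom, and for what purpose, and checks that record before any sharing occurs. The record shape and helper are hypothetical, not a prescribed implementation.

```python
# Minimal sketch of recording explicit user consent before data leaves the
# platform. Field names and the may_share helper are illustrative assumptions.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    user_id: str
    data_categories: tuple[str, ...]   # what is being shared
    recipient: str                     # who is receiving it
    purpose: str                       # why it is being shared
    granted_at: datetime
    revoked: bool = False

def may_share(record: ConsentRecord, category: str, recipient: str) -> bool:
    """Only share data the user explicitly consented to, with that recipient."""
    return (not record.revoked
            and recipient == record.recipient
            and category in record.data_categories)

consent = ConsentRecord("user-42", ("email",), "analytics-partner",
                        "aggregate usage analytics", datetime.now(timezone.utc))
print(may_share(consent, "email", "analytics-partner"))     # True
print(may_share(consent, "location", "analytics-partner"))  # False
```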
Enforcing on Privacy Violations
Building on the chapter Creating and Enforcing Policy, privacy-related violations may result in a number of enforcement actions, including removal of the content, temporary account or feature suspensions, and, for egregious violations, a permanent ban and removal of access to the platform. These actions are informed by policies that may be built around the following privacy principles:
- Proportionality: The enforcement action should be proportionate to the harm to the product and its users. Egregious violations may result in more severe actions, whereas less severe violations may lead to temporary suspensions or warnings (see the sketch after this list).
- Consistency: All the criteria used to make a policy violation determination should be objective and consistent across products and time; they should not vary depending on who makes the determination.
- Transparency: Requirements should be clear to users and customers, to ensure they can all comply with the policies.
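The proportionality and consistency principles can be approximated in code by mapping the same objective inputs to the same action every time. The severity tiers and actions in the sketch below are illustrative assumptions, not a recommended enforcement ladder.

```python
# Minimal sketch of a consistent, proportionate enforcement mapping.
# The tiers, counts, and action names are illustrative assumptions.
def enforcement_action(severity: str, prior_violations: int) -> str:
    if severity == "egregious":                       # e.g., NCII, doxxing with threats
        return "permanent_ban"
    if severity == "serious" or prior_violations >= 2:
        return "temporary_suspension"
    if severity == "moderate" or prior_violations == 1:
        return "content_removal_and_warning"
    return "warning_and_education"

print(enforcement_action("moderate", 0))   # content_removal_and_warning
print(enforcement_action("minor", 2))      # temporary_suspension
print(enforcement_action("egregious", 0))  # permanent_ban
```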
Oftentimes, trust and safety teams focus on bad actors, forgetting that policy violations can also result from negligence by benign actors who are unaware of, or unable to meet, specific policy and technical requirements set by the company. In those cases, education and remediation play an important role in ensuring benign actors can be re-onboarded onto the platform after remediating a policy violation.
Examples of Privacy Violations That May Lead to Enforcement
Policy violations related to privacy vary depending on the type of data, the product or service, and the actions or omissions identified. At a high level, two types of violations can be identified: privacy content violations and data misuse violations.
Privacy content violations relate to the sharing or soliciting of personal or confidential information about individuals. Content usually violates privacy policies if it shares, offers, or solicits personally identifiable information that could lead to user harm, including physical and financial harm, among others. Such content might not be removed if the information is publicly available through news coverage, court filings, etc. Some examples of violations include:
- Content that shares or solicits private information, including but not limited to personally identifiable information, personal identification numbers and identity documents, personal contact information, financial information, residential information, medical information;
- Content including private information obtained from illegal sources or through hacking;
- Content that violates the privacy rights of minors;
- Content that violates the privacy rights of adults, including adults who are incapacitated and unable to report the content on their own;
- Content that violates applicable laws and regulations.
Data misuse violations relate to non-compliant access, collection, and processing of user data in any way that violates a company’s policy or applicable laws. Some examples of violations include:
- Disallowed data use practices:
  - Selling, sharing, or collecting user data without users’ consent or against product policies;
  - Processing user data in ways that violate applicable laws, for example laws governing child-directed services;
  - Using user data to provide tools for surveillance of specific people or groups;
  - Using user data to discriminate against people, for example to deny access to benefits such as housing, education, credit, government benefits, and immigration;
  - Not deleting user data when all criteria for deletion are met (e.g., the retention period has expired, the user has requested deletion, or the company no longer has a legitimate purpose to retain the data);
  - Not providing an easily accessible privacy policy that includes information on how user data is processed, how users can request deletion of their data, how to report incidents, etc.
- Insufficient security standards:
  - Insufficient or negligent data security practices that may put user data at risk of unauthorized access (e.g., not encrypting sensitive user data or leaking sensitive data);
  - Data breaches that impact user data and are not promptly reported and mitigated.
- Unauthorized collection of user data:
  - Phishing or misleading users into providing their credentials or other sensitive or personal data;
  - Hosting or facilitating the distribution of malicious software aimed at stealing or collecting user data without users’ consent;
  - Scraping public or private user data without authorization.
The following example policies serve as resources for those considering forming their own privacy policies. Platforms vary widely in their policies because of differences in how user data is collected and stored, and they may categorize their policies in whatever way best addresses the concerns specific to their platform:
- Facebook Community Standards – Privacy Violations
- Meta for Developers Platform Terms
- Meta Quest Developer Data Use Policies
- Google Play Policies: Privacy, deception and device abuse
- Apple Developer Policies
- Pinterest Community Guidelines
- TikTok Community Guidelines – Privacy and Security
- X (Twitter) Rules and Policies