Adversarial Planning Lead, Trust & Safety

  • Individual Contributor
  • Sunnyvale, CA; San Francisco, CA; or New York, NY (USA)
  • Experience level: 5+ years

Website Niantic

This content was reproduced from the employer’s website on February 20, 2022. Please visit their website below for the most up-to-date information about this position.

The Trust & Safety team at Niantic and the broader Operations department serve a critically important role in defining, implementing, and scaling outstanding policies and processes in service of our player, advertiser, and emerging development needs. Are you excited about the opportunity to drive the core values of the rapidly approaching next generation of computing? Do you have a passion for user privacy, responsible use, and inclusive development processes? Do you want to help proactively identify and mitigate the potential negative impacts of AR technology on societies? If so, this role may be a great fit.

As Adversarial Planning Lead, you will design and conduct threat assessments with our game and product teams, and lead red team exercises and other threat ideation work to ensure we address potential harms to our users. The right leader will bring creative thinking to identify and proactively mitigate socio-technical harms. You will enable Niantic to lead in the new field of XR safety and reduce the likelihood that our platforms will be used to cause harm. You will help the team develop and deploy a rigorous, innovative, foresighted approach to Trust & Safety and platform integrity.

Responsibilities

  • Work with Niantic’s product, games, legal, and security teams to assess risks to Niantic and our users across games, products and features, and design and propose appropriate mitigations.
  • Lead the development of a flexible process for guiding our games through socio-technical risk assessments.
  • Research, design and apply new risk discovery processes and methods.
  • Collaborate with cross-functional teams and product teams to align partners on risk mitigation proposals.
  • Document and catalog risks discovered during assessments, and centralize lessons and successful mitigations for feature-specific risks at the company level.
  • Identify external partners and vendors to help scale processes and to ensure appropriate subject matter expertise is leveraged when needed.

Qualifications

  • Master’s Degree (or higher) from an accredited institution in Public Policy, Design, Security Studies, Computer Science or other relevant fields.
  • Experience in content moderation and Trust & Safety, both in crafting and carrying out policies.
  • Expertise in a sub-area of content policy, such as hate speech, CVE, or child safety.
  • 5+ years of experience in Trust & Safety, information security, or cybersecurity.
  • Strong critical thinking and problem-solving ability to effectively analyze data, regulatory requirements, and product limitations.
  • Excellent collaborator with ability to maintain strong internal and external relationships. Experience working with technical teams and translating concepts across fields.
  • Familiarity with the product development cycle, and with adversarial planning processes.
  • Demonstrated passion for gaming and/or digital safety.
  • Excellent written and verbal communication skills, strong ability to engage different audiences and to communicate risks across fields and functions.
  • Experience with design processes and methods that help surface and address risks to vulnerable populations and marginalized communities.
  • Familiarity with relevant external research fields, subject matter expertise, and applicable bodies of work.

Plus If…

  • Familiarity with mapping, augmented reality, and gaming.
  • Motivated by Niantic’s goal of using technology to get people outside into the physical world to experience real world social interaction.

To apply for this job please visit careers.nianticlabs.com.