Stakeholder Engagement Manager, Product Policy

  • Individual Contributor
  • People Management
  • San Francisco, CA, US
  • This position has been filled


This content was reproduced from the employer’s website on May 1, 2023. Please visit their website below for the most up-to-date information about this position.

About the Role

We are looking for a Stakeholder Engagement Manager on the Product Policy team to build partnerships and relationships with external stakeholders on issues related to policy and safety at OpenAI and in the generative AI space. This role will be primarily responsible for helping Trust & Safety improve our policies, our policy approach, and the safety of our AI models based on feedback from external stakeholders. This role will also be the primary communicator of our policies, our policy approach, and our work to launch safe AI tools to civil society, advocacy, and other external stakeholder communities, and will contribute significantly to societal discourse on the risks and opportunities posed by generative AI.

The ideal candidate is an exceptionally strong and poised communicator with experience engaging a range of stakeholders on technology-related topics. They have worked in or closely with product policy or public policy, preferably in the tech policy space. We’re also looking for a candidate with deep relationships among civil society stakeholders and significant experience understanding and prioritizing the unique needs of, and challenges faced by, marginalized communities.

This role is based in our San Francisco HQ. We offer relocation assistance to new employees.

In this role, you will:

  • Drive improvements to model safety, policies, and processes based on insights and information from external stakeholders
    • Track and solicit external perspectives on OpenAI’s policies and approach to safety, and work with colleagues to improve our approach based on this feedback
    • Lead external engagement and support research collaborations to inform product policy development, mitigations, and model safety, working closely with our Policy Research and Public Policy teams
    • Maintain a pulse on public dialogue related to generative AI, OpenAI, and AI safety
    • Engage with civil society organizations, trade organizations, industry partners, researchers, academics, and the broader public on trends, risks, and research related to the use of generative AI tools, with a focus on impacts to marginalized communities
    • Develop methods to obtain broad public input on OpenAI’s safety policies and processes
  • Be the primary external representative of OpenAI’s Trust & Safety efforts
    • Strategize the Trust & Safety team’s external presence at conferences, workshops, roundtables, and other external fora
    • Develop and execute a strategy for providing regular public transparency on our Trust & Safety efforts
    • Develop the Trust & Safety team’s external communications, in close partnership with our Communications team
  • Identify opportunities for our tools to have a positive impact, particularly in marginalized communities, and drive internal collaboration to realize those opportunities

You might thrive in this role if you:

  • Have experience articulating and communicating policies and their reasoning to varied audiences, including customers, researchers, and civil society organizations
  • Can understand and clearly articulate how AI models are developed, trained, and refined
  • Are familiar with policy and safety/responsibility questions related to AI specifically
  • Have experience engaging with a wide range of stakeholders on tech policy matters
  • Have experience driving product and/or policy changes based on stakeholder input, especially at technology-focused companies
  • Are comfortable engaging with stakeholders who strongly disagree with you
  • Can analyze the benefits and risks of open-ended problem spaces, working both from first principles and from industry best practices

Note that this role involves grappling with questions about sensitive uses of OpenAI’s technology, including material that may at times be erotic, violent, or otherwise disturbing. At times, this role will involve engaging with such content as necessary to inform our policy approaches.