This content was reproduced from the employer’s website on September 1, 2022. Please visit their website below for the most up-to-date information about this position.
About the Role
We are looking for Product Policy managers to develop and manage the content and usage policies for our platforms, including the OpenAI API and OpenAI Labs. Providing access to powerful AI models introduces a host of challenging questions: What use cases should be permitted? How do we balance goals of access and broad benefits with possible safety and security threats? How do we effectively communicate the policies with our customers? As an early member of the team, you’ll help shape policy creation and development at OpenAI and make an impact by helping ensure that our groundbreaking technologies are truly used to benefit all people.
The ideal candidate can identify and clarify the most important policies to enact and develop deep subject matter expertise in their policy areas. They can balance internal and external input in making complex decisions, carefully think through trade-offs, and write principled, enforceable policies based on our values. Ultimately, our product policies will be fundamental to achieving OpenAI’s mission of broad benefits from AGI.
In this role, you will:
- Develop product policies from first principles to address new use cases, new issue areas, and new products, grounded in OpenAI’s values and principles
- Prioritize which product policy topics to tackle, in what order and at what depth, in collaboration with our product, security, and policy research teams
- Develop deep subject matter expertise in your particular policy areas
- Research and weigh the benefits of particular use cases against their risks
- Iterate on policies based on feedback about enforceability and edge cases
- Define safety requirements to unlock further use cases and continue improving the safety of our technology
- Create internal alignment and work with the broader AI and tech policy ecosystem to adopt and contribute to best practices
- Align internal stakeholders around frameworks and OpenAI’s overall approach to product policy; educate external stakeholders about these matters
- Work with significant partners on product policy questions to promote healthy, effective and consistent ecosystem norms
- Partner with internal and external researchers to adapt our policies based on latest research and best practices, and to define safety requirements for unlocking further uses of our technology
You might thrive in this role if you:
- Have experience developing, refining, and enforcing product policies, especially at leading technology- or developer-focused companies
- Have experience aligning product policy decisions between diverse sets of stakeholders, such as Product, Engineering, Security, and Legal
- Have strong customer empathy and a passion for finding the right balance between enabling developers to experiment and deploying AI safely
- Have experience articulating and communicating policies, and the reasoning behind them, to varied audiences, including customers, internal stakeholders, and executive leadership
- Can understand the operational challenges of enforcing product policies, including in the content moderation space, and can incorporate these constraints into policy design
- Can analyze the benefits and risks of open-ended problem spaces, working both from first principles and from industry best practices
- Are familiar with policy and safety/responsibility questions related to AI specifically
Note that this role involves grappling with questions about sensitive uses of OpenAI’s technology, including material that is at times erotic, violent, or otherwise disturbing. This role will at times require engaging with such content where necessary to inform our policy approaches.
To apply for this job please visit boards.greenhouse.io.