Senior Data Analyst, Scaled Enforcement

  • Individual Contributor
  • San Francisco, CA US
  • This position has been filled

This content was reproduced from the employer's website on August 29, 2022. Please visit their website for the most up-to-date information about this position.

About the Team

Trust and Safety is at the foundation of OpenAI's mission. The team is part of OpenAI's broader Applied AI group, which is charged with turning OpenAI's advanced AI models into useful products. We see this as a path toward safe and broadly beneficial AGI: deploying these technologies gives us practical experience in making them safe and easy for developers and customers to use.

Within the Applied AI group, the Trust and Safety team protects OpenAI’s technologies from abuse. We develop tools and processes to detect, understand, and mitigate large-scale misuse. We’re a small, focused team that cares deeply about safely enabling users to build useful things with our products.

In 2020, we introduced GPT-3 as the first technology on the OpenAI API, allowing developers to integrate its ability to understand and generate natural language into their products. In 2021, we partnered with GitHub to launch Copilot, a new product powered by Codex that translates natural language into code. In April 2022, we introduced DALL-E 2, an AI system that creates images from text.

About the Role

As a data analyst on the Trust and Safety team, you will apply your analytical skills and expertise in data visualization to help us better understand and communicate trust and safety concerns related to AI. Using your knowledge of data analysis and visual storytelling, you will help us extract insights from massive datasets and present them in a clear and engaging manner to various internal and external stakeholders. You will be a key player in helping us make complex issues digestible and actionable. If you enjoy solving problems visually and are passionate about data-driven solutions to safety concerns, we would love to hear from you.

As part of the Scaled Enforcement subteam, you will help build automated moderation solutions to mitigate abuse of OpenAI's technologies. This is an operations role based in our San Francisco office and involves working with sensitive content, including sexual, violent, or otherwise disturbing material.
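
For context, a minimal sketch of what one automated moderation check might look like, using the moderation endpoint in the 2022-era openai Python SDK; the helper name and flag handling are illustrative assumptions, not the team's actual pipeline:

```python
import openai  # 2022-era SDK: pip install openai

def review_outputs(texts):
    """Return the subset of `texts` flagged by the moderation endpoint."""
    flagged = []
    for text in texts:
        response = openai.Moderation.create(input=text)
        result = response["results"][0]
        # `flagged` is True when any moderation category (e.g. "hate",
        # "violence") trips; per-category scores are in "category_scores".
        if result["flagged"]:
            flagged.append(text)
    return flagged
```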

In this role, you will:

  • Use data visualization tools to produce interactive dashboards and reports that clearly communicate insights about AI-related Trust and Safety concerns
  • Extract insights from large, complex datasets and present them in an easy-to-understand manner
  • Proactively suggest data insights that help the Trust & Safety team understand and execute against safety goals
  • Collaborate with team members to understand data requirements and create visualizations that effectively communicate key findings
  • Collaborate with engineering, policy, and research teams, using data-driven insights to improve our tooling and our understanding of abusive content
  • Develop new ways to scale our detection coverage, especially by using state-of-the-art large language models and embeddings to improve and automate it (see the sketch after this list)
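
As a rough illustration of the embeddings-based detection mentioned in the last item, the sketch below embeds candidate texts and flags any that sit close to known-abusive exemplars in embedding space; the model name, threshold, and helper names are assumptions for illustration, not OpenAI's production approach:

```python
import numpy as np
import openai  # same 2022-era SDK as above

EMBEDDING_MODEL = "text-embedding-ada-002"  # illustrative model choice

def embed(texts):
    """Embed a batch of strings into row vectors."""
    response = openai.Embedding.create(model=EMBEDDING_MODEL, input=texts)
    return np.array([item["embedding"] for item in response["data"]])

def flag_similar_to_abuse(candidates, known_abusive, threshold=0.85):
    """Flag candidates whose embedding is near any known-abusive exemplar."""
    cand = embed(candidates)
    abusive = embed(known_abusive)
    # Normalize rows so the dot product below is cosine similarity.
    cand /= np.linalg.norm(cand, axis=1, keepdims=True)
    abusive /= np.linalg.norm(abusive, axis=1, keepdims=True)
    sims = cand @ abusive.T  # shape: (n_candidates, n_exemplars)
    return [c for c, s in zip(candidates, sims.max(axis=1)) if s >= threshold]
```

Nearest-exemplar cosine similarity is a common first pass for scaling coverage beyond keyword lists, since paraphrases of known abuse tend to land near the exemplars in embedding space; the 0.85 threshold here is arbitrary and would be tuned against labeled data.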

You might thrive in this role if you:

  • Have significant experience in a technical analysis role or with log analysis tools such as Splunk or Humio
  • Have experience on a trust and safety team and/or have worked closely with policy, content moderation, or security teams
  • Have a pragmatic approach to being on an operations team and can get in the weeds to get stuff done
  • Bonus: have experience with large language models, especially technical knowledge of machine learning or model deployment