Advisors

TSPA’s advisors are people who work within and adjacent to the trust and safety field. They provide us with deeper insight into current and emerging trends and concerns within the field, and ensure that we incorporate broad and diverse perspectives into our work. They provide expert information on particular subjects and on special initiatives. Meet our current advisors.

Cathryn Weems

Cathryn Weems has been working in Trust & Safety for over 20 years (since before it was called Trust & Safety!), and has experience in legal operations, people management, content moderation, policy development, leadership, child safety, and global content regulation implementation. She has worked at Yahoo!, YouTube/Google, and Dropbox, and most recently was Senior Director, Legal Policy at Twitter, where she led a global team of more than 125 people (plus vendors) handling a variety of legal requests and regulatory compliance matters. She is passionate about promoting TSPA (and the T&S industry in general), as well as networking and connecting people across the industry.

Christine Chen

Christine Chen has worked at the intersection of technology, policy, communications, and international affairs for more than two decades. She formerly led international and content policy communications at Facebook. Previously, she worked at Google, where her roles included leading public policy at YouTube, managing public policy strategy around free expression and international relations at Google, and leading privacy communications. Christine previously served on the board of directors at the Global Network Initiative. Prior to her career in Silicon Valley, she was a magazine journalist at Foreign Policy, Fortune, and Newsweek.

Dali Szostak

Dali Szostak is Head of UX for Google Trust and Safety, leading a team of designers, researchers, and content strategists who shape the user experience of the products, systems, and tools that counter abuse on Google platforms. She built her knowledge of digital security by speaking to (and being inspired by) countless activists, human rights defenders, journalists, and experts during pivotal times in places like Ukraine, Zimbabwe, Kenya, Venezuela, and India, helping Google understand the people at high risk who use its products. One such group is content moderators, individuals who are on the front lines of keeping our digital lives safe yet are not always considered key stakeholders in how technology is designed. Besides bringing a product and community design lens to Trust and Safety, she is also an educator, in particular supporting the growth of UX and digital safety in Latin America.

Daphne Keller

Daphne Keller is the Director of Platform Regulation at Stanford University’s Cyber Policy Center, where her work focuses on platform regulation and Internet users’ rights. She has published both academically and in mainstream media; testified and participated in legislative processes; and taught and lectured extensively. Her recent work focuses on legal protections for users’ free expression rights when state and private power intersect. Until 2015, Daphne was Associate General Counsel for Google, where she had primary responsibility for the company’s search products. She worked on groundbreaking intermediary liability litigation and legislation around the world and counseled on both overall product development and individual content takedown decisions.

Dina Hussein

Dina Hussein is a seasoned security and trust & safety expert focused on protecting platforms from abuse. She is a passionate advocate for ethical, human rights-focused policies in the tech industry. For over a decade, she has focused on the intersection of tech, countering violent extremism, and elevating cross-industry collaboration. Alongside her tech expertise, Ms. Hussein’s subject matter expertise includes research into processes of radicalisation, counterterrorism policy, and counterinsurgency operations. She currently serves as Meta’s Global Head of Policy Development and Expert Partnerships for Counter Terrorism and Dangerous Organisations.

Dona Bellow

Dona Bellow currently leads Policy & Responsible Innovation programs at IRL, a group messaging startup. She began her tech career at Google, where she led the policy evaluation and operationalization of legal content removal requests in the French market and managed the Child Safety program for Legal Online Operations, becoming deeply involved in reviewer wellness and mental health. She went on to co-develop a program supporting product teams in mitigating abuse-related risks. She then joined the Community Policy team at Airbnb, collaborating closely with public policy teams on regulatory risks. Later, she worked on creating processes for trust and safety policy impact at Twitter; most recently, she was a Responsible Innovation Manager at Meta. Dona is a proud native of Togo, and grew up in France.

Eric Davis

Eric Davis has safeguarded brands, platforms, and billions of users from bad ads, bad apps, and other machinations of bad actors across global policy, product, and engineering functions throughout his career. He was Senior Director for Product Management at NortonLifeLock (previously Symantec), where his work included leading product requirements and implementation addressing GDPR and other privacy regulations globally. Previously, he worked for 13 years at Google, where he held product policy and public policy leadership roles. Eric also founded and led Google’s Android Security Investigations and Operations, Anti-Malvertising, and Trust and Safety teams. Earlier in his career, he was the charter International Product Manager for Trust & Safety at eBay.

Fadzai Madzingira

Fadzai Madzingira is a Director in the Office of Ethical and Humane Use at Salesforce and is the EMEA Policy Lead for the team. Her team is responsible for writing policies to ensure Salesforce technologies are not being used to facilitate harm. Prior to this, she was the Global Hate Speech Content Policy Lead at Meta, in the team responsible for writing the Community Standards. As a Rhodes Scholar from Zimbabwe, she read for a Bachelor of Civil Law and a Master of Public Policy at Oxford University.

Fatima Alam

Fatima Alam is a Global Compliance Manager at Netflix, where she works on content classification across a Global Content Regulation and Standards Board portfolio. Fatima previously worked in various roles on the Trust and Safety and Public Policy teams at Google in the US and India, where she supported and led the research, development, and enforcement of product policies related to controversial and harmful online content. She is also interested in the ways policy decisions about science and technology are made by governments around the world and the various roles played therein by technology platforms, elected representatives, bureaucrats, subject matter experts, and civil society. Previously, she was also a Human Rights and Technology Fellow at the Harvard Carr Center.

James Gresham

James Gresham has spent a decade in data science and analytics focused on online services, social media, and the prevention of risk and harm. His experience covers the full data stack, from day-to-day operations to emergency alerts, and from core data architecture to complex machine learning models. Currently, James is a Senior Data Scientist at Shopify, having previously held positions at Meta and Alphabet. He also contributes educational material for the TSPA curriculum and is an advocate for increased focus on the prediction and prevention of criminal violence by Trust & Safety teams.

Jess Miers

Jess Miers is Legal Advocacy Counsel at Chamber of Progress. As a lawyer and technologist, Jess primarily focuses on the intersection of law and the Internet. She is widely considered an expert on U.S. intermediary liability law and has written, spoken, and taught extensively about topics such as speech and Section 230, content moderation, intellectual property, and cyber crime. Before joining Chamber of Progress, Jess was at Google where she was a Senior Government Affairs & Public Policy Analyst. At Google, Jess oversaw the state and federal content policy portfolios. In addition to monitoring emerging U.S. content policy, Jess also worked closely with Google’s Litigation teams to influence the courts on key online speech issues, many of which are currently pending U.S. Supreme Court review.

Jerrel Peterson

Jerrel Peterson has spent his career leveraging research and strategic partnerships to address complex social and political issues. He is currently a Director on the Trust & Safety team at Spotify and previously served as the Head of Safety Policy for Twitter. He worked cross-functionally to resolve dozens of high-profile cases weekly, drive the development of policies, processes, and tools, and communicate Twitter’s content policies clearly and objectively to internal and external stakeholders. Prior to Twitter, Jerrel worked in public policy at the state and federal levels and also spent a few years providing direct services to various marginalized communities within the U.S.

Justin Paine

Justin Paine is the VP of Trust & Safety at Cloudflare. He leads a global team responsible for all Trust & Safety matters such as inbound abuse reporting, payment fraud, law enforcement requests for customer information, GDPR/CCPA privacy requests, and global policy creation in close cross-functional collaboration with the Public Policy, Product Policy, and Legal teams. Justin represents Cloudflare with multiple public and private sector organizations such as CISA, JCDC, Comm-ISAC, NCFTA, and NCCIC. Prior to this role, he served as Cloudflare’s first head of customer support in the early days of the company.

Kaitlin Sullivan

Kaitlin Sullivan is a Director of Content Policy at Meta, where she leads an international team that develops and scales policies to address account-level and behavioral abuse. She partners with Product and Operations teams to set company strategy for effective, transparent, and principled content moderation. Kaitlin regularly speaks on these issues at universities and conferences, including the Conference on Crimes Against Women, Content Moderation at Scale, and the Internet Governance Forum. Prior to joining Facebook, Kaitlin worked as a rape crisis advocate and educator at the YWCA of Silicon Valley.

Laura Dunne

Laura Dunne has spent the past 12 years working in T&S at a range of organisations, from large social media multinationals to UGC streaming startups, all within Europe. Laura has expertise across a wide variety of safety issues, most notably in harm, privacy & freedom of expression. While at Twitter, Laura was the global policy lead for Privacy, developed the company-wide Harm Framework and worked extensively on communicating complex policy issues, including within Twitter’s industry-leading Transparency Report. Prior to this, Laura helped found T&S at SoundCloud, launching their content & advertising policies and public policy strategy. Laura continues her work at the intersection of technology, policy and expression in her current role at Spotify, where she leads global policy development for music and audiobooks.

Neha Nair

Neha Nair is the Director of Global Trust & Safety at Radix Web Solutions. Her core work is to identify and mitigate domain name abuse. Over the last ten years of her career, she has worked in diverse security segments, including countering terrorism and violent extremism, preventing online radicalization, intelligence, investigations, and crisis management. Before Radix, Neha worked with Indian law enforcement, the Wikimedia Foundation, and Netflix, donning several hats within security, T&S, and intel-focused roles. For the last few years, she has worked closely with law enforcement, governments, and think tanks to bridge gaps between public and private entities working towards establishing peace. When not working, she loves to read, travel, and laugh with family and friends.

Nicole Wong

Nicole Wong specializes in assisting high-growth technology companies to develop international privacy, content, and regulatory strategies. She previously served as Deputy U.S. Chief Technology Officer in the Obama Administration, where she focused on internet, privacy, and innovation policy. Prior to her time in government, Nicole was Google’s Vice President and Deputy General Counsel, and Twitter’s Legal Director for Products. She frequently speaks on issues related to law and technology, including five appearances before the U.S. Congress. Nicole chairs the board of Friends of Global Voices, and also sits on the boards of WITNESS, the Mozilla Foundation, and The Markup. Nicole currently serves as co-chair of the Digital Freedom Forum, and as an advisor to the AI Now Institute, the Alliance for Securing Democracy, Luminate, Refactor Capital, and the Albright Stonebridge Group.

Pia Shah

Pia Shah’s work in tech policy centers on two main themes: the impacts of technology on individuals and society, and racial and social justice. Pia is currently the Global Head of Policy for Trust & Safety at Amazon Web Services, leading the strategy for company policy on abusive online content. Previously, she was a Sr. Behavioral Engineer at Robinhood, where her work focused on countering misinformation and disinformation, content moderation, and “Safety by Design,” ensuring transparency and safety were built into products and features to avoid unintended consequences. Prior to Robinhood, Pia helped build the Privacy team at Lyft, crafting data ethics practices, safeguarding users’ private data, and ensuring compliance with CCPA and GDPR. Under the Obama Administration, she was appointed to advise at the Department of Homeland Security’s Science & Technology Directorate, where she worked on policies and programs to help safeguard the nation: developing partnerships with foreign nations on shared national security challenges such as cybersecurity, crafting policy on protecting the National Capital Region from unmanned aerial vehicles, and working with partners like the NFL to obtain SAFETY Act protections. She is a proud alum of Howard University School of Law and UC Berkeley.

Siva Raghava

Siva is a Senior Director responsible for TaskUs’ Trust & Safety Practice. Having worked in Trust & Safety policy, operations, product, technology, and outsourcing divisions for content moderation and marketplace risk, he brings a unique and diversified background. As an industry practitioner, Siva is passionate about creating safer online communities by building and scaling trust & safety processes from scratch. Before joining TaskUs, Siva worked at MindGeek (owner of the Pornhub website), Amazon, Dell, Accenture, and Genpact, and was a banker by profession prior to his career in Trust & Safety.

Treesa Ann Jose

Treesa Ann Jose has worked at the intersection of technology and policy for over a decade. She currently works as a Senior Policy Manager at Pinterest. Previously, she worked as a Product and Content Policy Manager at Facebook (now Meta). Prior to that, she worked at Google, where she set up and scaled Operations teams and Content Policy strategy for the Next Billion User offerings. Her policy and product expertise spans user-generated content, elections integrity, and business content, including integrity strategy for augmented and virtual reality applications. Beyond her core tech policy focus, Treesa has worked with international and regional NGOs like the World Economic Forum on community development and women’s empowerment efforts throughout her career in Singapore, India, and the United States.

Former Advisors

Alex Feerst | Former GC and Head of T&S, Medium.com

Tim Lordan | Executive Director of Internet Education Foundation (IEF)

Micah Shaffer | Former Director of Public Policy, Snap

Role and Expectations of Advisors

TSPA’s advisors inform and guide our decision-making on various programming, activities, and relevant current issues. Advisors’ responsibilities are to: (1) be a trusted source of insight and information to the trust and safety field, (2) identify and connect TSPA to experts who can offer advice on organizational matters, (3) expand TSPA board and staff’s network with other communities, organizations, and individuals, and (4) boost the work of TSPA in the industry and beyond.

Advisors are expected to be active participants. At minimum, advisors are requested to:

  • Commit to at least a 1-year term
  • Participate in advisors meetings (1-2 times a year)
  • Provide advice or support to TSPA throughout the year
  • Support and participate in TSPA programming and activities 
  • Keep confidential non-public information shared by TSPA, TSF, or fellow advisors 

Advisors will not be compensated for their time or advice. Advisors do not have the authority to vote on organizational matters unless specifically requested by TSPA staff or board. Advisors are not representatives of, nor do they speak on behalf of, their current employers. Advisors need to be (or become) members of TSPA.

How to Become An Advisor

TSPA issues a call for advisors when new advisors are needed. During this process, we will (1) select a certain number of current advisors to continue to serve, (2) identify specific gaps within the advisors group and issue a targeted call for advisors, and (3) open the application process to the broader public.

Process for New Advisors

  1. Individuals interested in becoming an advisor should fill out an application.
  2. TSPA staff may get in touch with applicants if more information is required.
  3. TSPA staff will review the applications and determine which applicants to select. The number of applicants accepted will depend on the number of open spots. 

Process for Nominated Advisors

Throughout the year, TSPA staff, board, and current advisors may nominate a candidate to join as an advisor to meet a gap in expertise or perspective. TSPA staff will review the nomination and determine whether to accept the candidate as a new advisor.

Process for Advisors Invited to Rejoin

TSPA staff will invite advisors to continue their service when we issue a call for new advisors. This invitation may be based on an advisor’s level of participation, their involvement in current or planned programming or activities, or because their perspective and knowledge are deemed essential. TSPA staff will decide which current advisors are invited to rejoin. 

Selection Criteria

TSPA advisors must have (1) deep knowledge of and experience with trust and safety issues, and/or (2) expertise in one or more of the following areas as they relate to trust and safety: content/behavior policy, operations (including content review), internet regulation (US and international), human rights, product development, public policy and communications, labor relations, or people management. 

Additional prioritization criteria include: 

  • Candidates who have specific expertise or skills that will benefit current or planned programming or activity and organizational development. 
  • Candidates who reflect the diversity of our membership in regards to:
    • The various types of roles within trust and safety work 
    • Experience at employers or organizations of different sizes and sectors
    • Geographical location
    • Representing different perspectives and backgrounds

Term of Service

Advisors’ terms begin when they are selected, either through the call for advisors selection process or through the nomination process.

Advisors may be asked to withdraw as an advisor if they develop a conflict of interest, as determined by TSPA staff and board. An advisor may also resign at any time.

Apply to Become an Advisor

Applications for the latest round of Advisors are now closed, and decision notifications will be sent out by March 31, 2023.