Trust & Safety Coffee Hours are one-on-one conversations designed to provide mentorship to current and aspiring trust and safety professionals. Hosts are volunteering their time and expertise to offer advice to job seekers and T&S professionals. Hosts are not conducting formal interviews, offering referrals, providing product feedback, participating in user research, or acting on behalf of their employers or TSPA. If you schedule time and you’re not a current or aspiring T&S professional, the host reserves the right to cancel the appointment without notice.
Here’s how it works:
- Search for a host based on what you want to talk about. Sign-up using the host’s “Schedule a chat” link.
- Be respectful of our hosts and their time. Remember the TSPA Code of Conduct applies to all resources, spaces, and programming–even if you’re not a TSPA member.
- Because these one-on-one conversations are brief, come prepared with questions and have a specific goal in mind. If you’re just getting started in T&S and want to learn more, we recommend reviewing Careers in T&S and the T&S Curriculum before scheduling.
If you’re a TSPA member looking for a way to help other folks on their journey in T&S, sign up to be a T&S Coffee Hours host!

Rumana Ahmed
Rumana is currently a Content Policy Associate Manager at Meta, based in the Bay Area, California. She has 7+ years of experience in Trust & Safety, across Operations, Abuse Specialization, and Policy Development & Enforcement. She has had the opportunity to be both an IC and a manager, building teams from the ground up. Having worked in APAC, EMEA, and now the NA region, she has developed a deep understanding of both the global landscape and the region-specific nuances of T&S issues. In her current role, she focuses on building and managing policy development and launch processes, with the goal of continually improving efficiency, legitimacy, and transparency.

Ryan Cerda
Residing in Austin, Texas, Ryan has worked (only ever fully remote) with a variety of notable tech companies for the past decade. He started his T&S career at Indeed.com as a Search Quality Moderator. He then moved on to an organization called Sucuri (eventually acquired by GoDaddy), where he spent five years gaining experience in the GoDaddy Product Security Group. Afterwards, Ryan joined Open Technologies, an anonymous messaging mobile app, as their Community & Moderation Coordinator. It was at Open that he gained experience not only as a content moderation manager, but also as an interim product manager for Open’s internal moderation tools. Most recently, Ryan has been a wearer-of-multiple-hats at Yik Yak, an anonymous messaging mobile app mainly used by college students. He’s the acting Content Moderation Manager, but their small, lean startup has him doing much more. From internal product development to policy making and everything in between, Ryan is focused on bringing Yik Yak’s vision and mission to the masses while ensuring their users can express themselves within a healthy online community. If Ryan isn’t in Austin, Texas (CST), he can usually be found on the east coast (EST). He’s always happy to connect with anyone interested in talking trust & safety!

Sarah Nasr
Sarah Nasr is a Trust and Safety Hate Speech Project Manager working at Meta (Facebook), and based in Dublin, Ireland. She’s been at Meta for more than three years now. Sarah initially joined as a Market Specialist on the MENA market, where she focused on risk mitigation, and later progressed to the Project Manager role. Prior to Meta, Sarah worked in the humanitarian sector with NGOs focused on health, development, and education. As a Project Coordinator, she coordinated various programs on access to education for refugees based in Lebanon. Her background is in international relations; she has a Master’s degree in International Studies and Diplomacy from the School of Oriental and African Studies. Sarah is fluent in English, French, and Arabic. While some might perceive the move from humanitarian work to tech as a big change, Sarah thinks the two sectors are very similar at their core when it comes to T&S. She notes, “Our mission is to protect users and enable expression, while protecting human rights. We draft policies, ensure operational feasibility, design implementation processes and manage large projects with a large network of cross-functional stakeholders such as Policy, engineers and product.” Sarah’s areas of expertise are policy development and implementation, communication, and project management.

Scott Stern
Scott is currently a manager on the Risk Intelligence team at Meta, which he joined in January 2020. RI has a wide remit in the T&S space, ranging from understanding the overall risk landscape facing Meta’s users on their platforms to understanding the motivations and root causes of bad actors and behaviors. His team has experience with virtually every abuse type, including child safety, terrorism, misinformation, election delegitimization, fraud, and harassment. Prior to Meta, Scott worked at the American Marketing Association leading innovation strategy, product management, research, and analytics. Before that, he spent a number of years at the Central Intelligence Agency working on counterterrorism. Scott has worked with many former government people transitioning to the private sector, so he’s happy to talk to anyone exploring T&S as a career, navigating career changes, managing people, or anything else on your mind. He’s based in the Bay Area.

Siva Raghavva
Siva has experience in the following areas: Content Policy Formulation & Enforcement, Risk and Fraud Detection, Strategy & Consulting for Trust & Safety, Professional Coaching, and Career Counseling.

Sugata Basu
Sugata Basu led Ops teams within T&S at Meta and then Change Delivery and Transformation teams at Twitter. She lives in the Bay Area and has expertise in strategy and execution, people management, program management, and operations.

Tolga Bag
Tolga is currently based in Dublin and works in T&S at Meta as a people manager in the Risk Management & Intelligence team. He has more than four years of T&S experience gained through complex investigations, global project management and leadership of diverse & high-performing teams to keep the platforms & community safe. Prior to Meta he worked for a global public affairs agency for more than four years, conducting external crisis management, media relations and strategic communications campaigns for clients. Earlier in his career he worked in think tank and political risk sectors.

Vaishnavi J
Vaishnavi is Head of Youth Wellbeing at Meta, working with internal policy and product teams as well as external experts to ensure Meta continues to build safe and healthy experiences for young people. At Meta, Vaishnavi’s team focuses on issues impacting young people, including bullying & harassment, body image, teen mental health, and teen wellbeing. She was previously Instagram’s Head of Safety & Wellbeing, where she focused on keeping the Instagram community safe online. Vaishnavi previously led Twitter’s video safety policy efforts, was Twitter’s first head of safety policy in the Asia-Pacific region, and began her career in tech policy working on online child safety and privacy policy issues at Google.
TSPA Members Only

Vejeps Ephi Kingsly
Ephi works at the intersection of business strategy, organizational capability development, and practice development of Trust and Safety for emerging technologies. He has worked in various roles leading client engagement, strategy, operations, quality, and training, spanning 10 years across small, medium, and large organizations. He currently leads the strategy and innovation of trust and safety practices at Genpact. He is a Trust and Safety expert and is passionate about a safe internet. He built and manages a Trust & Safety learning channel at Genpact, which has been adopted by over 10,000 learners and has enabled over 5,000 of them to transition into trust and safety roles across functions. Ephi brings a unique view of business and tech from his life. He speaks five languages and has lived across various parts of South Asia and Southeast Asia. His domains of expertise include: social media, e-commerce, gaming, media, metaverse, user content platforms, AI operations, product operations, advertising tech, self-serve and programmatic ads, SaaS, and e-healthcare.

Vijai Radhakrishna
Vijai is a people leader on the Intelligence team for Trust and Safety at Google. He has spent over half a decade working across different teams within the Abuse and Trust and Safety horizontals, moving from analyst to product manager to people leader. With a background in Data Science and Product Management from UC Berkeley, his focus is on using technological interventions to structure and attempt to solve abuse at scale. He is currently leading Intelligence Collection efforts to keep Google products informed of upcoming risks and to identify creative solutions that maximize impact.
When he’s not working, you can find him hiking, exploring food or looking up the new IoT or AI trend on the rise. He’s looking forward to getting to know all of you and supporting you on your journey!

Vincent Courson
Vincent started in the T&S world as an anti-abuse analyst for Google’s Organic Search product, learning the ropes of policies, tools, trainings, enforcement and other elements of traditional T&S workflows. He then spent some years doing external communication of the abuse guidelines and best practices for website owners, which allowed him to hone public-speaking, program management, and user feedback skills. Since 2020, Vincent has been focusing on T&S Partnerships, working with ecosystem representatives (private companies, NGOs, LEAs, and more) to protect users across the whole tech landscape.

Yu-Lan Scholliers
Yu-Lan is currently Head of Product at Checkstep – a B2B SaaS startup building AI content moderation tooling to help platforms with harmful content. Previously, she spent five years in Product Data Science at Meta, working on measurement across different harm types and focusing specifically on Dangerous Organisations, Suicide & Self-Injury, and Non-consensual Intimate Imagery. There she worked with some amazing people across tech, policy, and operations on harm-specific insights as well as transparency reporting, data privacy, and compliance. She’s so grateful to be part of T&S, making the world a bit safer every day, and she’s keen to meet this community, learn from you, and share what knowledge she has!