Hi, I’m Rhett

i help tech companies shape safer & happier communities and reduce their risk.

a gold puzzle piece layered on top of a blue textured scrap

platform risk strategy is like a puzzle

platforms that allow users to share content or have 1:1 or group interactions need to maintain the following critical operations infrastructure—both to follow laws that apply to them, and to keep users happy:

  • a defined trust & safety strategy, including foundational user policies, internal and user tools, and managing moderators and content moderation processes

  • ongoing user support, like managing customer service platforms or teams, or user education / faqs

  • fulfilling any business compliance requirements in regions worldwide, including keeping policies up-to-date, fulfilling privacy and transparency requirements, and managing intellectual property

platforms often face challenges when trying to prioritize these operational requirements while simultaneously racing toward product and growth goals, and other competing priorities.

a flashing warning sign gif layered on top of a half sheet of graph paper

assembling the pieces

every company has its own specific risks and business goals, from ideal feature ship speed to how much the team wants to invest in user experience. these goals will change over time.

every platform community needs its own individualized, specifically built layers of support to be safe and happy, which also evolve over time as the community grows.

i devise policies, processes, and tools tailor-fit to a platform—based on its size, its risk tolerance, its business goals, and its community needs—and put the puzzle all together.

an image of an eye surrounded by stylized star images, a thumbs up, and an "x" representing a content moderation decision

how i got started in trust & safety

like many people who entered the trust & safety field in the 2010s—when “moderator” was still a somewhat mysterious term outside of it—my path happened by chance. i was a voracious reader and writer, and i’d studied english in college assuming i’d be a teacher and hopefully, eventually, an author. one brief stint of tutoring and many bad poems later, i was ready for anything else, and landed my very first “real” job as a research assistant to a t&s executive for a kids’ gaming company.

i worked my way up in the cx department—first stumbling through phone support for subscriptions, then moving into moderating posts and conversations, and onward to t&s management as a shift lead, along with as many content and marketing side projects as i wanted. i loved my work. first, i was getting to do all kinds of different things on any given day.

more importantly, i was getting to help other people to read and write and make friends with one another online (the things i liked to do best)—cushioned by the extra layers of safety that our team was providing.

an image of a lightbulb inside a circle in front of a square pattern

trust & safety flipped on a light bulb in my brain—a drive to shape every community to be safer and happier.

between the shifting tactics of bad actors, the ways communities grow and change over time, and ever-evolving legal requirements, no community will ever reach a perfect state of safety and happiness. constant adaptation is necessary.

so that drive is unquenchable. and that light bulb never turns off. and that’s half the fun.

an image of a verification checkmark

what i’ve learned from building trust & safety programs

  • how to scale moderation and support for a growing community. i’ve supported operations for communities that rapidly scaled to millions of users, and ops teams that grew from one moderator or agent (me) to a large team. i’m confident in my “formula” for building out human operations at this point: modeling, adding automated tools into the mix with support from product partners, and always maintaining as much flexibility as possible to account for change.

  • build vs buy? it really depends. there are no easy answers when trying to moderate content (sorry to the die-hard automation believers, but you’ll pretty much always still need a human touch—at least for the present moment). i aim to balance the best-fit tools with the lowest cost & effort to implement. i constantly think about what the next iteration of a platform’s tools should look like to prepare for scale, and keep a close eye on vendor platforms to understand the latest trends in t&s (hello, llm-based moderation!).

  • conquering the technological “wild wild west.” after crowdsourcing knowledge for my very first csam handling guide, strategizing around the best ways to try and moderate real-time audio, trying to figure out (and then explain to other people) what exactly a person owns when they buy an nft—and countless other fun anecdotes of operating with no previous playbook—i’ve conquered any fear of being the “pioneer” on an emerging challenge. there’s nothing that can’t be navigated with the support of internal and external partners and legal counsel, and the trust & safety community is wonderfully willing to knowledge-share.

a process arrow on top of a circular collage magazine snippet

how i learned to scale myself — beyond t&s

mid-way through my career, i began broadening my scope beyond trust & safety and support operations, taking on work as a compliance program manager and legal operations director. this initially happened by chance because of the “wear-many-hats” nature of startup jobs, but once i began to understand the legal foundations of t&s and got a taste of other realms of compliance, i wanted (and asked for) a lot more.

as a company simultaneously navigates protecting its platform and protecting its users, i work with legal counsel and internal stakeholder teams to be the “bridge” for it all.

i feel incredibly lucky that my current job lets me explore each of my interests within compliance, combining legal operations, trust & safety, product compliance, and user support.

i continue to love the day-to-day challenges of trust & safety. at the same time, i’ve started branching further into compliance, going as far as i can without a jd.

my current projects span:

  • trust & safety strategy

  • legal operations

  • privacy, ip, and region-specific compliance projects, including gdpr compliance and digital services act (dsa) implementation

  • customer experience

an image of a lock on top of a layered set of circles, the innermost circle being a snippet of a legal text

some of the compliance projects i’ve worked on so far include

  • building global and regional legal operations playbooks

  • managing law enforcement and government agency requests and engagement

  • managing ngo partnerships and associated protocols, including trusted flagger programs and crisis protocols

  • working with data teams to maintain transparency data, and producing transparency reports according to applicable requirements and industry best practices

  • ensuring continuity with privacy requirements, deletion practices, and user data requests across rapidly changing products

  • executing on multi-year compliance projects and implementation plans for emerging regulation (including the eu’s digital services act and india’s it rules)—including legal interpretation with counsel, initial roadmap planning, project management across stakeholder groups throughout implementation, and managing ongoing (daily, monthly, or annual) operational and transparency requirements

  • drafting site policies such as terms of service, privacy policy, and community guidelines

an image of a courtroom gavel on top of a black many sided polygon

what’s next for me?

i love what i’m doing right now. in the future, i expect i’ll be

1) continuing to grow as a trust & safety and compliance leader, and exploring my privacy and ip interests more (for example, i’m pursuing a cipp cert in 2024)

2) maybe eventually going to law school, then back to tech compliance—but not sure yet. i’m honestly having too much fun to hit pause on my current work.

(p.s. if you made this same career choice and have thoughts on the pros / cons and challenges, i’d love to hear from you)

what’s next for trust & safety?

a gif of a vintage eye cutout on top of a scrap of brown paper, with check mark and x mark gifs representing content moderation

our need for trust & safety and compliance strategies won’t be going anywhere anytime soon.

new risks for platforms from both ai-generated content and ai-related legal requirements are already emerging, and our industry is fast adapting to a whole new world of policies, moderation processes, and tooling.

we’ll need all the help we can get—meaning more strategic minds ready to tackle operational problems, and to constantly adapt systems and processes to be more efficient, increasingly user-friendly, and all-around safer.

buckle up, and let’s de-risk together.