Meet the Women Shaping Responsible AI - From Digital Confidant to Data-Driven Change: Floretta Mayerson's Journey in AI for Social Impact
Floretta Mayerson, co-founder of Violetta, discusses her award-winning work using AI to combat gender-based violence and her evolution as a leader in responsible AI development.
Violetta is an AI-powered platform that has quietly transformed how people seek help for gender-based violence in Mexico. With just one WhatsApp message, users can access support that has reached thousands of anonymous individuals over the past year, a testament to both the urgent need and the power of thoughtfully designed AI solutions.
Behind this innovation is Floretta Mayerson, co-founder of Violetta alongside Sara Kalach, Sasha Glatt, and Carla Pilgram. Launched during the COVID-19 lockdown, Violetta emerged from a critical understanding: shame, fear of judgment, and lack of supportive family environments prevent many women from taking the first step to escape cycles of violence.
A 2024 She Shapes AI Awards Finalist
As a finalist in the AI & Learning category and recipient of the Special Recognition by the Jury at the 2024 She Shapes AI Awards, Floretta's work represents a new frontier in applying artificial intelligence to structural social problems. Violetta is an AI-powered platform that helps people build healthy relationships and identify gender-based violence risk in its earliest stages, specifically trained to understand the contextual nuances of relationships in Mexico and Latin America.
"Everyone can talk to Violetta," Floretta explains. The platform acts as a digital confidant where users can discuss everything they're feeling and experiencing in their relationships. The AI is trained to detect risk and guide users toward the most appropriate help, a crucial intervention in a world where one in three women globally experiences gender-based violence.
The Impact of Recognition
For Floretta, being named a She Shapes AI Awards finalist brought both professional and deeply personal rewards. "Professionally, it expanded my network significantly," she reflects. Though unable to attend the ceremony in person, virtual connections led to meaningful relationships with researchers working on mental health applications and AI-powered risk prevention.
The recognition also resulted in publication opportunities and increased visibility in professional networks where responsible AI conversations are taking place. But perhaps more importantly, it provided something invaluable: community.
"It has been really comforting to find a network and community that's concerned and acting upon things that don't let me sleep at night," Floretta shares. "It gives me this feeling of 'I'm not the only one, and I'm not crazy.' There are so many people doing amazing work, and it gives me a sense that not everything is lost."
Navigating Growth and Challenges
The year following her award recognition brought significant challenges, particularly around funding and long-term planning. Rather than being deterred, Floretta and her team made a strategic decision to look inward, focusing on strengthening their foundational processes.
"During the first years, we grew very rapidly, and to sustain that growth, we must ensure that our processes are stable and correct," she explains. This introspective period involved revisiting and updating AI models that were initially developed five years ago, replacing obsolete approaches with more efficient, cost-effective solutions.
A key breakthrough has been developing better understanding of how their collected data creates value – not just for users and content creation, but for other stakeholders who can drive systemic change. By analyzing users' risk evolution over time, Violetta can now facilitate early interventions from other organizations and demonstrate concrete impacts on workplace productivity, absenteeism, and employee turnover related to gender-based violence.
Embracing AI Leadership
One of the most significant transformations in Floretta's journey has been embracing her role as an AI leader, an identity she didn't initially claim for herself.
"I always considered myself more in the content, communication, and impact world," she admits. "Putting on the hat of an AI leader has been huge for me." Despite not having a traditional technical background, Floretta recognized that her perspective brings essential insights that highly technical profiles might overlook.
This realization has opened doors to forums and events focused on responsible AI, where she advocates for gender-sensitive approaches to technology development. "It breaks this glass ceiling," she notes. "I never thought I would be in these spaces with such high technical profiles, and suddenly I'm there putting gender-based violence on the table."
Her approach to leadership embraces continuous learning while making peace with not having all the answers in a rapidly evolving field. "I don't think we've ever seen an industry moving this fast," she observes. "It's okay not to keep up with everything – slowly and steadily taking the pieces that are relevant for the specific work I'm doing."
The Stakes of Responsible AI
Floretta's perspective on responsible AI is uncompromising: "The stakes are too high for not putting responsible AI at the center. If we don't do it, we're doomed forever."
Her experience working with major tech companies in Silicon Valley reinforced this conviction. "After I talked about Violetta, at least one person from each company – Google, Microsoft, Bloomberg – approached us and said their story would have been very different if they had access to something like this."
This pattern, repeated globally, demonstrates that gender-based violence transcends cultural and geographic boundaries, making responsible AI development not just a nice-to-have, but a critical necessity.
Rethinking AI Accessibility
One of the most thought-provoking insights from Floretta's recent work involves questioning conventional wisdom about AI accessibility. While the tech industry typically focuses on making AI available to everyone, she argues for more nuanced considerations.
"Why would everyone have AI available?" she asks, pointing to concerns about children and teenagers interacting with AI, or people in positions of power exploiting AI tools and violating privacy. "We should be asking ourselves about accessibility: for what purpose, with what intention?"
This perspective extends to her own work. When asked to create versions of Violetta for children and teenagers, ethical questions arise: Should young people be encouraged to discuss feelings and relationships with AI rather than connecting with physical-world support networks?
"We shouldn't only analyze these questions in one direction," Floretta explains, "but rather see them as a prism with different layers and faces, asking the right questions before designing or developing."
Advice for AI Leaders
For other leaders interested in using AI to address real-world problems, Floretta's advice is clear: start with the problem, not the technology.
"AI or any technology is not the end goal, but rather the vehicle," she emphasizes. "You should immerse yourself in the problem and understand its complexities, intersections, who it affects, how, why, and when. Only after you have that full view should you find the right tools to address the problem."
Equally important is understanding limitations alongside capabilities. "Having limitations clearly defined allows us to set the boundaries of responsibility when implementing AI to these problems," she notes. "Understanding limitations gives you much more leverage on what you can do."
A Vision for the Future
Looking ahead, Floretta's excitement centers on AI's potential to transform how society addresses structural inequalities. Her dream is "a world where no one is left out of the access or possibility to have healthy, happy, successful, violence-free relationships."
She envisions AI and technology reaching places that are traditionally hard to access, helping people create new structures and dynamics. Most compelling is her vision of predictive AI that can detect risks before they escalate, fundamentally preventing harmful situations and decreasing the consequences of gender-based violence in society.
"I see gender-based violence as this hidden structure that sustains many dynamics and the world as we know it," Floretta reflects. "AI can really sneak into those places and grab by the hand everyone who needs it, helping them create new realities."
The Broader Impact
Floretta's work with Violetta represents more than technological innovation; it demonstrates how AI can be thoughtfully designed to address some of society's most persistent challenges. Her journey from content creator to AI leader illustrates the importance of diverse perspectives in shaping responsible AI development.
As the AI industry continues its rapid evolution, voices like Floretta's become increasingly critical. Her insistence on putting responsibility at the center, questioning assumptions about accessibility, and maintaining focus on real-world problem-solving offers a blueprint for others seeking to use AI for social impact.
Through Violetta, Floretta and her team have created more than a chatbot—they've built a bridge between technology and human dignity, proving that when AI is developed with empathy, cultural understanding, and ethical consideration, it can become a powerful force for social change.
The thousands of users who have found support through Violetta represent not just statistics, but individual stories of hope, healing, and the possibility of violence-free relationships. In a world where technology often feels impersonal and extractive, Floretta's work stands as a reminder of AI's potential to serve humanity's deepest needs with compassion and respect.
Floretta Mayerson was a finalist in the AI & Learning category of the 2024 She Shapes AI Awards. Learn more about Violetta and their work at Hola Soy Violetta.