The Power of AI Done Well: Making possible the impossible - the impetus of personal values

In our last piece, we explored what it truly means for AI to be responsible, a concept our Top 33 define not just through technical safeguards, but through values: embedding care, empathy, and accountability into every stage of design. But where do those values come from?

This third installment of our insights series takes us a step deeper into the personal. Because long before a model is trained or a product is launched, there’s a human behind it making choices. And those choices are deeply informed by lived experiences, moral convictions, and the courage to do things differently.

What Do a Fair AI and a Courageous Blender Have in Common?

Have you ever heard of an inclusive GPS? An empathetic washing machine? Or a courageous blender? A fair AI, however, or one that listens to people’s needs, did not seem too far-fetched to our She Shapes AI community members when we asked them which personal values guided their approach to developing or using AI responsibly.

Their anthropomorphisation of AI is neither new nor unique: language around AI technologies often uses terms one would intuitively associate with sentient beings, not machines. Even the UN’s AI Advisory Body, in its Governing AI for Humanity interim report from 2023, regularly slips into vocabulary that seems to treat AI as an actor endowed with agency, capable of creating harm and bearing responsibility. This tendency appears quite natural considering that the original goal of AI research was to build an artificial version of human intelligence: some systems, such as generative AI models, even mimic human learning patterns in order to produce new content. Following this line, doesn’t it sound reasonable to be wary of a machine capable of making decisions, holding conversations, reasoning logically, and even making art? The almighty umbrella term “AI” evokes nightmarish figures such as the supercomputer V.I.K.I., which tries, and fails, to take over the world in the 2004 blockbuster I, Robot.

However, it is also reminiscent of the way the term “the Internet” is used by generations who grew up without it, invariably referring to more or less anything that takes place on a computer screen. In fact, this latter comparison is more useful for understanding AI than the former: AI is neither more nor less than a range of tools, from early Large Language Models to recently developed “agentic” AI, some of which are capable of producing outputs eerily similar to those a human would produce.

AI Reflects Us for Better or Worse

Simply put, the anthropomorphic features of AI do not tell us that AI is capable of being human, or even superhuman, and will eventually eradicate us. What they do tell us, however, is that: (1) AI technologies are capable of carrying, replicating, and therefore transmitting human values, and consequently (2) the role our personal values play in the development of AI technologies is crucial. The bottom line is: AI is not a self-defining, cognizant, emotional being capable of choosing its own values; rather, it is always a product of humans, and we have the power to shape it.

This power comes in many forms, including the personal values we implant, via our data, into the AI tools we develop. Predictive algorithms, for example, base their outputs on previously encountered cases drawn from our data, and are therefore fully informed by the outcomes we, as humans, have previously accepted as valid. If not carefully considered, the set of historically accepted outcomes we feed to algorithms can produce the alarming and sometimes terrifying effects we have already been hearing about in the news.

One of those effects is exclusion. To name one example, AI-based decision-making algorithms have been seen to reject job applications from qualified individuals on the basis of their gender or ethnicity, reflecting and amplifying biases that are already all too present in the professional world.

We’ve also seen the exacerbation of harmful stereotypes. For instance, AI chatbots intended to help physicians provide care for their patients appear to have absorbed the racist hypothesis that Black people suffer less pain, and have consequently recommended lower doses of pain relief.

Another dangerous effect of poorly designed AI systems is their inability to understand context, especially in matters that affect human lives. One tragic example is Spain’s VioGén algorithm, which was used to assess the risk of domestic violence. Due to its flawed design, the system wrongly classified the risk levels of several women, contributing to decisions that failed to protect them; 71 women assessed by the system were subsequently killed. These devastating losses serve as a stark reminder that when algorithms are built without nuance, empathy, or proper oversight, the consequences can be fatal.

From Harm to Hope: Scaling Values, Shaping Futures

Beyond these harmful examples, the vast and speedy diffusion of AI tools and AI-generated outputs to users across the world carries in its wake a variety of negative effects, from amplifying discrimination to spreading outright violent and illegal content. These, too, are products of our negligence, and sometimes even malevolence, towards the values we choose, or fail to choose, as guidelines and guardrails. One of our community members, Pamposh Raina, has made it her mission to combat the nefarious spread of AI-generated misinformation and currently leads the Deepfakes Analysis Unit of the Delhi-based Misinformation Combat Alliance. Her important work analysing, identifying, and denouncing fake content online exposes the extent and impact of such content, as well as the harm it may cause.

Echoing these efforts, and with a rare near-unanimous vote, the US Congress passed a bill on April 28th, 2025 aimed at eradicating AI-generated non-consensual porn, one of the few regulatory bills targeting online content to have met virtually no backlash. It is encouraging to see that serious action is being taken to counter these negative effects. But once again, we have shaped the tools that made these effects possible. And though we will always have to fight against malicious users, we also have the power to turn this negative influence into a positive one.

Courage as a Guiding Force

In fact, our community members and She Shapes AI Award winners offer a wonderful variety of examples showing just that: their uses of AI not only serve the greater good, they also propagate positive influence. Take, for example, Floretta Mayerson’s app Violeta. Violeta offers a platform for women who are victims of domestic violence in Mexico, allowing them to share their experiences anonymously and safely, and providing them with the right resources and contacts to address their suffering. This platform not only serves women directly by helping them through extremely difficult situations, it also carries positive influence more generally, spreading the values of solidarity, empathy, and care. Similarly, NightOwlGPT, developed by our AI & Learning award winner, Anna Mae Yu Lamentillo, does much more than preserve endangered languages and include marginalised communities in the digital world (though that’s already a lot!). By shaping her technology with data representative of the communities it serves, and by adapting the models to their marginalised circumstances, NightOwlGPT also carries the values of fairness and inclusivity, normalising the idea that all populations, big or small, can have equal access and representation in the digital sphere.

Therein lies the Power of AI Done Well. By scaling these endeavours globally, we not only expand the positive impact of these many bold and necessary initiatives, we also allow the positive personal values that have shaped them to, in turn, shape us. A world in which a majority of women grow up knowing that easily accessible, safe, and tailored support against domestic violence exists is a world in which women can feel more heard, and know they are not alone. A world in which all minority-language communities have adapted tools for e-learning and digital inclusion fosters empowerment and visibility for those communities, as much as inclusiveness on both sides of the bridges being built. More broadly, awareness of how our personal values shape the machines we build is crucial, and choosing to act on it requires courage: the courage to choose careful construction over speed, to face ourselves and the long trail of biases we carry, and to ask ourselves the right questions. As one of our community members, Dr. Luise Frohberg, Founder and CEO of Taara Quest, put it: “We must be accountable for the impact, even when it’s uncomfortable.”

When heading into the AI space as a developer or an entrepreneur, let us therefore follow the lead of our community members and She Shapes AI Award winners, who shared with us their values of tenacity, adaptability, humility, and empathy. Let us take the time to continually reflect on our own values and how they can shape the technologies we build. And let us have the courage to be driven, as our AI & Nature winner, Diana Gutierrez, beautifully expressed, “by a solid faith in humanity and our capacity to make possible the impossible”.

What’s Next

Personal values give AI its moral compass, but they’re only the beginning. In our next installment, we’ll explore how those values come to life in leadership: how our Top 33 are forging new paths, challenging the status quo, and redefining what it means to lead in the AI space.

Join us for Part 4: Leadership in AI and Forging New Paths, because values without action are just ideas, and leadership is where those ideas take shape.
