The Power of AI Done Well: Balancing innovation with ethics

In our last post, we explored the emerging trends our Top 33 award nominees are most excited about, from advances in AI for climate action and personalised healthcare to generative creativity and agentic flows. But one truth remained clear: no matter how fast or fascinating these developments may be, they mean little without a solid ethical foundation.

In this sixth instalment, we take a closer look at the balance between progress and principle. Because real innovation isn’t just about what’s possible; it’s about what’s responsible. And the good news? When ethics are treated not as a constraint but as a compass, AI can become a safer, smarter, and more sustainable tool for everyone.

Behind the Buzz: Why Speed Alone Isn’t a Strategy

A common phrase heard from developers when speaking about new advances in AI is “we don’t understand how it does that”. This phrase helps build a sense of mystery around AI, and fuels both hopeful announcements that AI will be able to do everything a human can do and, in the more pessimistic take, predictions that it will replace us all. Whether optimistic or pessimistic, innovations in AI seem to have something of a buzz about them, something that makes people and businesses around the world say: “I want in”.

IBM’s recent survey of over 2,000 CEOs globally revealed insights that the media was quick to pick up on, and with reason. The survey, which aimed to understand businesses’ approach to AI adoption, pointed to a telling contradiction in leaders’ attitudes towards AI. While a striking 61% of respondents declared they are “actively adopting AI”, only 16% of surveyed CEOs reported that their initiatives have delivered the expected return on investment. The numbers are clear evidence of a stark disconnect between the apparent enthusiasm (or at least willingness) to embrace these new technologies and their actual benefits to businesses. In fact, this tension is captured by a third result in the survey: 64% of respondents consider themselves under pressure to adapt to the evolving technologies quickly, despite not fully grasping how those technologies could, in fact, drive value. Klarna, the highly successful Swedish fintech company, provided a telling example of how rushing to adopt AI can backfire. The financial services group had initially reduced its workforce by more than 1,000 employees (all to be replaced by AI chatbots) and is now, a few months later, re-hiring humans.

The Power of Guardrails: How Regulation Can Enable Progress

So, why the rush if the results are so uncertain? When browsing through the ads and marketing campaigns of big corporations, the message around AI is crystal clear: don’t be late to the party. The consulting firm McKinsey, in the headline of a 2025 report on AI in the workplace, stresses: “Our research finds the biggest barrier to scaling is not employees, who are ready, but leaders, who are not steering fast enough.” Interestingly, ethical considerations such as safety take centre stage only in the third of the report’s five chapters. Speed and, importantly, innovation seem to be what it’s all about. This stance echoes a divisive tension at the geopolitical level, where the focus seems to be shifting more and more towards winning the “AI arms race”, as some have called it, a phrase that deliberately recalls the disastrous effects of the nuclear arms race of the past century. The tension, in other words, pits those who want to win the race at all costs against those who want to mitigate the risks of AI development and who value a more ethical approach.

Even the European Union, with its foundational belief in the power of norms, is expressing concern about being left behind, and is struggling to find the right balance between promoting effective regulation and pushing for innovation. The agreement on the EU AI Act in 2023 was a positive step towards addressing the elephant in the room: AI should not be developed without a solid set of guardrails and compliance measures. Yet it sparked a number of debates about whether it would slow down the EU’s already slower progress compared to the Chinese and American giants. This perception of regulation as a hindrance to innovation, though understandable to a certain extent, is not entirely accurate; regulation remains one of the most effective methods of implementing some ethical structure. Regulation can indeed be used to limit some progress for specific purposes, such as rules limiting, or indeed banning, the development of certain weapons. But regulation as a whole can do much more than that. It can encourage certain developments, set helpful guidelines, and promote durable resilience. As our Top 33 nominee Anna Karagianni phrased it, regulation can “act as the bridge, providing a structured framework that mitigates risks without stifling innovation.”

To draw a parallel with construction work: would you actively choose to live in a building whose construction followed no regulation at all, stacked up the way a child builds a Kapla tower? Likely not, and that would be a wise choice. When building a structure, architects and engineers need to follow not only the laws of physics but also general regulations (e.g. safety standards for materials, weight-bearing capacities, or fire exits) that reflect an essential consideration for the humans who will use those structures. To push the building comparison even further, if you innovate just for the sake of innovating, what you get is the Burj Khalifa. The tallest building in the world since 2009 is shiny, impressive, and probably offers one of the best sunset views out there. Yet it is also excessively expensive, built on concrete foundations that give it a lifespan of only 100 years, and the top third of its height is closed to the public for safety reasons, not to mention the environmental cost of building and maintaining such a structure.

AI is no different: if you develop and deploy with neither regulation nor user focus, you might be the first to engineer a technology able to parrot being human, an impressive feat. But you will also have created a technology that can harm users or endanger their safety, and that will quickly be replaced by the next upgrade, the next tallest “AI building” with an equally short lifespan. Conversely, if strong policy guides the development of AI, it can not only lay the groundwork for more robust and responsible technology, it can also foster competitiveness, which in turn fosters innovation. And while we are on the economic advantages of regulation, let us not forget that regulation garners trust, and trusting users are the ones who keep using. A KPMG report on people’s trust in AI found that 61% of respondents are wary of using AI tools and 70% believe that regulation is necessary. These figures are directly linked: the study shows that most of the worry about AI relates to safety and security, which more effective regulation could greatly improve. Simply put, regulation is the best way forward, both for humanity and the economy.

Responsible by Design: Lessons from Healthcare and Beyond

That being said, regulation is not the only way to integrate an ethical vision into innovation. As our Top 33 nominees Rosalba Sotz and Kamila Camilo both expressed, ethics needs to be the foundation from which we innovate. If we use ethics as our starting point, we can innovate better and with purpose. We can shift from “what we want AI to do” to “which problem we want AI to solve”; innovating can be more than an end in itself, it can be a catalyst for solutions and progress for humanity and the planet. Think of the way medical innovation comes about: not only are ethical considerations at the core of medical work, they are drivers of innovation. It is by searching for a cure for cancer, for example, that we have made technological leaps in radiotherapy over the last decade, innovations that have allowed us to alleviate suffering and save lives worldwide. AI, as a set of very powerful tools, also has the potential to contribute effective solutions to many complex problems. If we take just that little bit of time to look at the present, and to imagine how the future could look thanks to AI tools, our innovations will be all the more durable and positively impactful. Ethical considerations do not need to be a burden, and they are not: they should guide our thinking, structure our progress, and enhance our innovation.

A Blueprint for Durable Innovation

Ethical innovation isn’t slower; it’s stronger. It’s like building a skyscraper with a solid foundation, one that doesn’t collapse with the next technological wave. Without that foundation, even the flashiest AI models are just digital towers on shaky ground.

When we start with ethics, we unlock AI’s fullest potential: not just to automate, but to illuminate. Not just to optimise, but to uplift.

What’s Next

As we’ve seen, balancing innovation with ethics is not just a protective measure. It’s a source of clarity, trust, and long-term value. But what happens when we look further ahead?

In our next post, we explore what the future of responsible AI could and should look like. How do we move from fragmented fixes to a truly inclusive digital ecosystem? What bold ideas are our Top 33 envisioning, and how can we help bring them to life?

Join us for Part 7: Envisioning the Future of Responsible AI, where hope, imagination, and accountability shape the road ahead.
