
A few days ago, Steven Bartlett, creator of the popular podcast series The Diary of a CEO, shared on Facebook his experience in Davos. Grateful for the privilege of engaging with the elite gathered there, notably during a discussion led by Jessica Jensen, the newly appointed CMO of LinkedIn, in front of her counterparts from the GAFAM companies, he came away with a gem he shared enthusiastically: “The future belongs to the unromantic adapters.”
In other words, those who stay anchored primarily to their why, and less to their how. Those who are willing to experiment, to integrate new tools, even if it means no longer being fully in control of their process.
The turning point came when he was asked whether AI might replace him as a host. He replied: “The way I create could completely change. Maybe I won’t need a studio, a microphone — maybe I won’t even need to produce content … but why I create will never change.”
“At its core, what is being asked here is not merely to be open and curious about using AI…”
That premise reminded me of another: it is Simon Sinek for the AI era. As early as 2009, with Start with Why, Sinek was already telling us, paper and pencil in hand, to “hold tightly to the why, loosely to the how.” Steven Bartlett brings it back in today’s fashion. It is appealing. It makes sense. It sounds like wisdom. Bartlett and Sinek are charismatic figures, not to say sexy ones, and you do not need to be in Davos to feel tempted to agree. Except.
If we claim that method is no longer crucial, we should not forget that context matters even more. Start with Why was originally a tool for those building systems: companies, brands, narratives capable of mobilizing crowds. It is a strategy designed for actors who already possess power and platforms. When this framework is transposed onto individual adaptation in the face of AI, it fits less smoothly; what was once an inspiring leadership key becomes, here, a rather uncomfortable moral injunction.
There is a certain irony in a podcast host surrounded by a production team preaching detachment from process. It is easier not to cling to procedure when we are not the ones handling the research, the editing, or the technical work. So yes, that changes everything. At its core, what is being asked here is not merely to be open and curious about using AI… it is to be humble about one’s craft. Yet humility is not an abstract moral stance; it is profoundly contextual. To be truly humble, a deeply vulnerable position, as you will agree, one must feel safe. Without a safety net, humility is not a virtue; it is a risk.
Of course, artists often work without guarantees, accepting that effort does not automatically yield results. But that is a chosen relationship, not a form of domination. Being romantic about one’s work — loving the journey as much as the impact, embracing ongoing experimentation — presupposes a context. Often a temporal one, but above all a material one. For many creators, the “how” is not an interchangeable detail: it is what defines their value, their differentiation, sometimes their economic survival. Renouncing one expertise to develop another is not a courageous philosophical stance; it is a risk with very real consequences.
“the anxiety AI provokes in creative fields is rarely aesthetic. It is a stress about time. If a prototype that once required three weeks can now be produced in three days, does it hold less value?”
Meanwhile, those who control platforms, models, and infrastructures have the time, resources, and teams to explore multiple scenarios. They do not know exactly what the future will look like, but they have the means to anticipate it, and often to shape it. Uncertainty, in other words, is not evenly distributed.
I may be preaching to the choir, but the anxiety AI provokes in creative fields is rarely aesthetic. It is a stress about time. If a prototype that once required three weeks can now be produced in three days, does it hold less value? And above all: what happens to the time that is freed up? What if those three weeks were not a loss, but precisely the space where the richness of the gesture takes shape — through sketches, doubts, revisions, passing through hands and under watchful eyes?
“Artificial intelligence becomes an ally of creation when it poses a challenge.”
If AI makes it possible to eliminate tedious technical iterations in order to restore time for dialogue, aesthetic research, and attention to experience, it does not empty creation of meaning: it displaces it. It compels us to think together, to inspire one another with renewed curiosity — fueled by the novelty, power, energy, and enthusiasm of the moment.
In reality, time is never neutral — even less so when it is freed up by tools that carry within them the values of those who design them. As Quebec philosopher Marc-Antoine Dilhac, engaged in the political challenges of responsible AI, reminds us, every technology inherits the priorities embedded in it: efficiency, profitability, control. The time it liberates can become a space for deliberation, transmission, and shared invention — or be immediately recaptured by other logics, depending on who holds decision-making power.
The real debate is not between long time and short time. It is between imposed time and chosen time. Artificial intelligence becomes an ally of creation when it poses a challenge. Ultimately, the epiphany may be this: if the question arises, or becomes unavoidable, and you have the luxury of a first reflection on the time saved thanks to AI, make sure you also master the second reflection, the one on redistributed time. It is there, precisely, that the art of (re)claiming one’s time plays out.
