Source of Truth
What it means to protect the part of you that may outlive you
My train of thought, on a daily basis, circles a recurring focal point--almost a nuclear theme: the inevitable death of my family, my friends, and myself. The impermanence of what we do and build is unavoidable. I’ve touched on this across several recent pieces because it remains the quiet architecture beneath everything else. Why am I writing about this again? Because the purpose-driven nature of our lives, the evolutionary signal we’ve spent millennia reinforcing, demands that we do. As sentient beings we make small decisions that shape our days, our relationships, and, when shared widely enough, our systems, societies, and this fragile experiment we call Earth.
But there is a sibling to that tragic grandeur, an adjacent truth with equal weight: our need for a source of truth.
In a world defined by impermanence, we reach for anchors. We try to construct something that can withstand decay long enough to guide us. Families create them. Institutions formalize them. Technologies attempt to mechanize them. And in every era, the absence of a shared source of truth cracks the foundation of whatever we are trying to build. Without one, purpose becomes performative. Signals distort, systems drift, societies fragment.
The paradox is that our desire for permanence arises precisely because nothing is permanent. And so we return, again and again, to the search for a source of truth--not as a fixed object, but as a stabilizing force. A compass. A shared reference point that lets us navigate the entropy.
And now, more than ever, we need to confront how truth becomes not just a social challenge, but a systems and design challenge in the AI era.
We are now able to infer realities (pun intended) against previous limitations and looming impermanence.
And in addition to the existential, let's address the economic.
Economics remain the blunt instrument behind most conflict and most order. Whoever understands the mechanics, steers the incentives, or controls the bottlenecks (the Strait of Hormuz) often sets the terms of engagement for markets, for culture, and eventually for everyday life (gas prices).
That is why I have always been drawn to the mechanics and economics of industries, whether creative, technological, or political. Each ends up shaping and influencing the others.
I've been waiting for a particular convergence in technology and infrastructure before writing this piece. That convergence has now arrived.
It is the point where we come to the realization, at scale, that the voice is one of the most personal, potent, valuable, and singular expressions of a human life. Not just creatively, but now legally, economically, and existentially.
That is why, three years ago, as the AI voice market was beginning to break into public view, I chose to go deep with Voice-Swap. Not because voice was fashionable, but because the direction of travel was already clear. Building this company, with colleagues and partners I trust, to meet this moment of convergence years later seemed inevitable to me. We knew there would eventually be a need for trusted infrastructure. The World Intellectual Property Organization has now launched its Technical Exchange Network, bringing together companies working on the real technical challenges at the intersection of AI, music, and copyright. We will be working alongside leadership from Google, Anthropic, SAG-AFTRA, Sureel, WMG, and UMG, among others. That matters because the next phase of this market will be defined by practical systems.
And we knew that policy, while necessary, would not be enough.
I recently spoke about the foundational concept (MUSICx): that trust must be built into the practical systems of AI’s next phase. That technology, if designed properly, can become a kind of caretaker of what is most valuable to a person in the AI era.
The music business is already beginning to learn this lesson. Major deals between generative platforms and record or publishing companies may cover broad swaths of industry rights, but in this newly formed space of partnership applications they do not resolve authority over the human voice and likeness embodied in a performance. Those rights remain with the individual. The person behind the voice, or their estate, remains the true point of gravity and authority.
Do you see where I am going with this?
That is where this gets real.
We all think about our lives in terms of what we leave behind: our families, our property, our work, our estates, our inheritance. In the AI era, another category now belongs on that list. You must decide who the custodians of your voice are. Who holds it. Who can authorize it. Who can protect it. Who can prove its origin.
Because the world has moved quickly to establish the commercial domain around AI, but much more slowly in building the societal and technical infrastructure needed to preserve human agency inside it.
That work matters.
There are those of us who have spent the last several years focused not only on capability, but on the systems required to protect creativity, identity, and trust while still allowing technology to advance. That is the harder work. It is also the more necessary work.
And it forces a new realization: parts of us may now outlive us in usable form (i.e., the potential end of impermanence).
What do I mean by that?
My voice will likely live beyond my years on this planet. My voice will be used by those I trust to look after my estate and my identity, governed according to choices I make now. That possibility unsettles people because it challenges something deeply embedded in human life: the old assumption that mortality naturally limits expression. Technology no longer respects that boundary by default (i.e., challenging the evolutionary signal we've spent millennia reinforcing).
Three years in this space has made one thing very clear: when pressure rises, you find out what companies are built for and where their loyalties actually lie.
I know why I am a part of this frontier, and I know where I come from. I know who I have chosen to build with. And I am reassured that many others have trusted us with something incredibly consequential: their livelihoods, their rights, and their singularly human voices.
When a partner trains a voice model with Voice-Swap, they are not just creating a new digital asset. They are securing rights around it, productizing control over it, and establishing the means to authorize its use beyond any single platform. That distinction matters.
From writers, podcasters, educators and other public-facing creators to voice actors such as Lindsay Sheppard and Dave Fennoy, and artists such as Robert Owens and Imogen Heap, there is real pride in seeing this space attract people whose values align around integrity, authorship and care.
And as expected, more and more generative platforms are moving into voice.
Why?
Because voice is valuable terrain: hallowed ground. It is emotionally powerful, commercially powerful, and deeply tied to identity. Control over voice is not just a product feature. It is a position of authority.
That is why creators need to read the terms carefully.
Many of these platforms do not simply want to let you train a voice model. They want broad rights in connection with that process. Rights that may extend across monetization, promotion, model improvement, and broader platform usage. Even where the language appears reassuring on the surface, the structure often points elsewhere.
Creators should not treat a generative music platform as the home of their canonical voice asset.
You would never leave your original masters sitting indefinitely on Spotify or another DSP’s servers and call that your archive. You keep the source files somewhere protected, accessible, and under proper control, while distribution happens through appropriate channels.
The same logic now applies to voice.
Your canonical voice model, your source of truth, should live with a rights-focused provider built around consent, control, attribution, and licensing. From there, you can decide how that asset is deployed or shared: through APIs, through partners, through campaigns, through collaborations via VST plug-ins, through downstream creative tools. But the core asset should remain anchored somewhere designed to protect it.
That is the difference between infrastructure built to govern an asset and infrastructure built primarily to absorb it.
As I have said for years, as AI infrastructure extends its reach and its functions continue to branch outward, pay attention to who designed their systems with both the economic and existential stakes in mind. Look for those who understood from day one that this was not just about tooling. It was also about answering the custodianship question.
So heed this clearly, folks:
Treat your voice — your source of truth — the way you would treat any valuable asset or important body of work. Give access only when you understand how it will be used. Train and host it where representation is controlled. Keep it where you can license it outward on your own terms: to another platform, to a producer, to an agency, to a collaborator, to a future partner not yet known to you. Have it live beyond your years on your terms, the terms you leave behind.
That future is not theoretical anymore.
Your voice may outlive you. The question is whether it does so on your terms.
All of this is possible only if you house what is yours somewhere you know it will endure: somewhere you trust, somewhere built to preserve provenance, authority, and control.
And above all, read the terms. Have your lawyer read them. Because once your voice enters the world under someone else’s structure, it becomes much harder to pull back its influence without both the legal and technical means to prove origin.
This decision is larger than most people realize.
Take care of yourself. Take care of the people around you. And take care of the source of truth that may one day speak for you when you no longer can.