
“His Voice Was Human”: How Darkest Dungeon Is Protecting Wayne June’s Legacy From AI

The Completionist
Published: 5/11/2026

Red Hook Studios has drawn a hard line against using AI to recreate the late Wayne June’s iconic Darkest Dungeon narration. Here’s what their decision says about voice preservation, performer legacy, and AI ethics in modern game development.

Wayne June’s voice did more than narrate Darkest Dungeon. It set the temperature of the entire experience. Every failed torchlight run, every panic-inducing crit, every party wipe felt like it was being read out of a cursed tome by someone who had already seen your fate.

When June passed away in January 2025, it left a vacuum that no new DLC plan or sequel roadmap could really account for. Technically, modern tools could fill it. Practically, Red Hook Studios wants no part of that.

A studio that was offered AI, and still said no

In a Reddit post later picked up by outlets like PC Gamer and IGN, Red Hook co-founder and creative director Chris Bourassa revealed that Wayne June himself had given the studio explicit permission, in one of his final emails, to train an AI model on his voice. Red Hook had not asked for this. June was offering it.

Bourassa’s response was blunt. He said he would “never, ever erode his incredible and timeless performances by teaching a machine to sound like him,” calling June’s delivery “human” in a way that a cloned model could not capture. Instead of taking him up on the offer, the studio chose to support his family and leave his recordings as they are.

Red Hook’s decision rejects the most frictionless solution. With AI voice tools already hosting Wayne June presets and fans debating whether Darkest Dungeon can feel complete without him, there is clear demand. Technically, there is also opportunity. Creatively and ethically, Red Hook is treating that opportunity as a line it will not cross.

Why June’s narration is irreplaceable by design

Darkest Dungeon’s narration is not just connective tissue between fights. It is an active pressure system that pushes every mechanic deeper into your nerves. June’s gravelly baritone turns a 5-point stress tick into an existential crisis. His pauses let the failure sink in just long enough to feel like judgment.

Those performances were built in a specific context. Lines were recorded over years as the game evolved in Early Access, as classes were rebalanced, as systems were re-tuned. Bourassa and the team have talked before about how June’s reads often reshaped how they perceived their own writing. Some lines became iconic because of the way he bent the rhythm, not because of the words on the page.

An AI clone can be trained to mimic timbre, cadence and even some of those idiosyncrasies. What it cannot reproduce is the human collaboration that shaped which takes were chosen, which directions were given, and which imperfections they kept. The thing players remember is not just that the narrator sounds like Wayne June. It is that Wayne June was there doing it, in the room with their failures.

Preserving that means accepting a finite performance. His work on Darkest Dungeon and Darkest Dungeon II becomes a closed canon rather than an endlessly extensible resource. That limitation is exactly what Red Hook is protecting.

Voice preservation versus voice extension

Red Hook’s message also clarifies something that often gets blurred in AI debates: the difference between preserving a performance and extending it.

Game studios have been preserving voice work for decades. Old assets are re-mastered, lines are cleaned up, formats are updated so they still work on new platforms. That kind of preservation keeps an original recording accessible without changing what the actor did.

AI-driven voice cloning aims at something different. It tries to extend the performance into the future with new lines and context the actor never actually recorded. That is where questions of legacy get sharper.

With Wayne June, Red Hook is effectively saying that Darkest Dungeon already contains his final word on this world. Future games, expansions, or spin-offs may still feature his original narration, but they will not fabricate new confessions, new barks, or new monologues from a model stamped with his name. The studio is treating his existing work as a complete body of work, not raw material.

Performer legacy as part of worldbuilding

What makes this decision land so strongly in the Darkest Dungeon community is that Wayne June is not a background utility voice. He is the franchise’s tonal anchor.

In many series, player characters and plot-critical NPCs rotate between entries. Narrators, though, are different. They are often the constant thread across sequels and formats. For Darkest Dungeon, June’s narrator defined what it felt like to even exist in that universe. Stress, blood, disease, and torchlight mechanics all inherit their weight from how he described them.

By refusing to replace or replicate him, Red Hook is baking mortality into the fabric of the franchise. Just as heroes die and leave scars on your estate, the voice of the estate itself is finite. Going forward, any new narrator will not be a stealth replacement hidden behind AI. They will be a new presence, and fans will be asked to accept the world changing.

This is where ethics folds back into pure game design. Respecting a performer’s legacy is not only about contracts and permissions. It is about whether the universe they helped build is allowed to evolve honestly after they are gone.

How other studios are navigating AI and voice ethics

Red Hook is far from the only studio staring down AI voice questions, but their stance is one of the clearest.

Across the industry, you can see several emerging patterns in how teams are handling voice preservation and legacy without rushing to synthetic clones:

Studios are opting to retire characters with their actors instead of faking continuity. In some franchises, if a key performer passes away, their character’s story is concluded or written out rather than rebuilt with AI. It creates narrative discontinuity, but it also respects the idea that a character is partially owned, artistically, by the actor who brought them to life.

Others are aggressively archiving high-fidelity takes to future-proof their games without generating new dialogue. This can include recording more alternate takes during active production, anticipating DLC or expansions, and keeping those banked lines as the only approved material for later use. The archive is a buffer against the temptation to press the “just AI it” button years later.

Then there are studios experimenting with AI for pre-production and prototyping rather than shipping content. Some teams will temp in synthetic voices during early development to get timing and pacing right, but commit to hiring human actors for anything that leaves the building. That approach acknowledges AI as a tool while still keeping released performances fundamentally human.

These strategies share one trait: they treat voice work as authored, not infinitely remixable.

Consent is not the whole conversation

One of the trickiest parts of the Wayne June story is that he did, according to Bourassa, give permission for AI training. On paper, that is a clean ethical greenlight. In practice, Red Hook still walked away.

That choice underlines how limited pure consent can be as a guiding principle. A performer can sign off on something for any number of reasons: financial pressure, trust in a particular team, a desire to keep a character alive for fans. Years later, the cultural context might have shifted. New tools might emerge that make exploitation easier. Audiences might start to question whether they are listening to a person or a product.

By declining even with consent in hand, Red Hook is effectively saying that some uses of a person’s likeness are not compatible with how they want their games to be made, regardless of what would be legally possible. The company is prioritizing how it feels to interact with Wayne June’s work as a player: knowing that every line was one he actually stepped into a booth and delivered.

The practical costs Red Hook is accepting

None of this is a purely symbolic stance. From a production standpoint, refusing AI replication is a harder road.

Narration is central to Darkest Dungeon’s design language. It is cheaper and cleaner to keep a single, familiar voice threading through new content than to reimagine how information is delivered. Every time Red Hook decides not to synthesize Wayne June, it is committing to rewrite, restructure, or cut ideas that depended on his presence.

The studio is also absorbing fan expectations. A portion of the audience unquestionably wants “more Wayne.” They have grown attached to specific inflections and catchphrases, and AI tech tantalizingly suggests that there could always be new ones waiting on a server somewhere. Saying no means accepting that some players will feel like a future Darkest Dungeon sounds less like the game in their head.

Instead of chasing exact continuity, Red Hook seems prepared to lean into change. That could mean future entries with different narrative devices, different voices, or stretches of silence where players are left to narrate the horror themselves.

What this means for Darkest Dungeon’s future

For the series, the result is a kind of creative hard mode. Any sequel or spin-off will live in the shadow of June’s work without the option to fake his return. That is a constraint, but it can also be a catalyst.

Without AI replication, any new narrator has to justify their existence, not just their imitation. Maybe they inhabit a different point of view, like a survivor years after the Hamlet’s fall, or a scholar piecing together accounts from the estate. Maybe narration becomes more fragmented, using journals and artifacts rather than a single omniscient voice. In each case, the absence of Wayne June is allowed to be felt rather than digitally papered over.

From a player’s perspective, returning to the original Darkest Dungeon and Darkest Dungeon II now carries extra weight. You are not just hearing a great performance. You are hearing all of it, knowing there will not be synthetic extensions or posthumous expansions recorded by a model. The lines you have memorized are the same ones future generations will hear.

A model for human-focused AI ethics in games

Red Hook’s stance is not an abstract manifesto. It is a concrete production choice that touches contracts, pipelines, budgets, fan relations and long-term planning. It shows one way a studio can navigate AI ethics: by treating human performance as the core asset, and treating new technology as something that has to bend around that principle instead of the other way around.

There will be other approaches. Some teams will experiment with opt-in AI models, detailed performance licenses, or hybrid workflows where actors supervise their own clones. The conversation is far from over.

For Darkest Dungeon, though, the answer is settled. Wayne June’s voice is not an endlessly renewable resource. It is a finished, human performance. Red Hook’s choice to protect it, even when doing otherwise might be easier or more lucrative, is quietly reshaping what responsible voice preservation and legacy can look like in game development.

In a genre obsessed with the toll that horrors exact on the human mind, it is fitting that the series’ most haunting voice is being allowed to rest.
