Google’s NotebookLM Faces Voice Imitation Claim

by Sakshi Dhingra

In a lawsuit that could reshape the boundaries between artificial intelligence and personal identity, longtime public radio journalist David Greene has filed legal action against Google, alleging that the tech giant’s AI-powered tool NotebookLM unlawfully replicated his voice without consent.

The complaint, filed in Santa Clara County Superior Court, claims that the male AI voice used in NotebookLM’s “Audio Overviews” feature closely mirrors Greene’s distinctive broadcast tone, cadence, and delivery style, to the point that friends, family, and colleagues began asking whether he had licensed his voice to Google.

Greene says he did not.

Who Is David Greene?

For decades, David Greene built his professional identity around his voice. As a former co-host of Morning Edition on NPR, Greene became one of the most recognizable voices in American public radio. He later joined KCRW to host Left, Right & Center, where his measured delivery and calm, analytical tone continued to define his brand.

For a broadcaster, the voice is not just a tool; it is intellectual property, reputation, and livelihood wrapped into one.

In the lawsuit, Greene argues that Google’s AI narration captures:

  • His signature pacing
  • His subtle inflections
  • His tonal rhythm
  • His characteristic speech patterns and pauses

The complaint alleges that this resemblance created confusion among listeners who believed Greene had either endorsed or participated in the AI project.

What Is NotebookLM’s Audio Feature?

NotebookLM is Google’s AI research and summarization assistant designed to analyze uploaded documents and generate summaries, insights, and even conversational audio recaps.

Its “Audio Overview” function transforms written material into podcast-style discussions between two AI-generated voices. One of these male voices, Greene alleges, sounds strikingly like his own.

Google maintains that:

  • The voice was created using a paid professional voice actor.
  • The actor provided licensed recordings.
  • Greene’s voice was not used to train the system.

The company has publicly denied any wrongdoing.

The Core Allegations in the Lawsuit

Greene’s legal filing centers on the concept of “right of publicity” — a legal doctrine protecting individuals from unauthorized commercial use of their likeness, including voice.

The complaint asserts four core claims:

Unauthorized Vocal Likeness

Greene claims the AI voice is close enough to his own to cause mistaken identity.

Reputational Risk

Because AI tools can generate content across countless topics, Greene argues that he could be associated with speech or viewpoints he never endorsed.

Lack of Consent or Compensation

Greene says he never agreed to license his voice, nor did he receive payment.

Commercial Exploitation

NotebookLM is a commercial product within Google’s expanding AI ecosystem, integrated into its broader technology portfolio. The lawsuit argues that the AI voice contributes to that commercial value.

The filing reportedly includes third-party voice analysis suggesting a measurable similarity between Greene’s recorded broadcasts and the AI narration.
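
Analyses of this kind typically map each recording to a fixed-length speaker embedding and compare the resulting vectors. Below is a minimal sketch of that idea, assuming the open-source Resemblyzer library; the file names are hypothetical stand-ins, and this illustrates the general technique rather than the specific analysis cited in the filing.

```python
# Illustrative speaker-similarity check, not the analysis cited in the filing.
# Requires: pip install resemblyzer
from pathlib import Path

import numpy as np
from resemblyzer import VoiceEncoder, preprocess_wav

# Hypothetical file names standing in for the two recordings being compared.
broadcast = preprocess_wav(Path("greene_broadcast.wav"))
narration = preprocess_wav(Path("ai_narration.wav"))

encoder = VoiceEncoder()

# Each utterance is mapped to a 256-dimensional, L2-normalized embedding
# that captures voice characteristics rather than the words spoken.
emb_broadcast = encoder.embed_utterance(broadcast)
emb_narration = encoder.embed_utterance(narration)

# Because the embeddings are unit-length, their dot product equals cosine
# similarity: scores near 1.0 indicate very similar-sounding speakers.
similarity = float(np.dot(emb_broadcast, emb_narration))
print(f"Speaker similarity: {similarity:.3f}")
```

A forensic expert would go well beyond a single score, controlling for recording conditions and comparing against a reference population of voices, but a cosine similarity like this is the basic quantity that “measurable similarity” claims usually reduce to.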

Google’s Defense

Google has responded firmly, stating that:

  • The AI voice is based on a contracted professional voice actor.
  • Proper licensing agreements were obtained.
  • The system was not trained on Greene’s personal recordings.

The company has framed the case as a misunderstanding rather than infringement.

However, Greene’s legal team argues that intent may not matter if the result creates public confusion.

Why This Case Is Bigger Than One Broadcaster

This lawsuit arrives amid intensifying scrutiny around AI-generated likenesses, from digital avatars to voice cloning.

At the heart of the debate lies a fundamental question:

When AI produces something that sounds like you but was not directly trained on your recordings, is it still misappropriation?

Courts have addressed similar issues before. In landmark rulings on celebrity voice imitation in advertising, such as Midler v. Ford Motor Co. and Waits v. Frito-Lay, judges determined that mimicking a recognizable voice could violate publicity rights even without directly copying any recordings.

The Greene case could test how those principles apply in the age of generative AI.

The Ethical Fault Line: AI, Identity, and Consent

Generative voice models are trained on vast datasets and learn generalized patterns of human speech rather than any single speaker. But when their output begins to resemble a specific individual, ethical gray zones emerge.

This case highlights three growing tensions:

Authenticity vs. Simulation

As AI-generated voices grow more natural, distinguishing between real and synthetic becomes harder.

Public Figures vs. Private Control

Broadcasters rely on vocal recognition professionally. If AI tools replicate that recognition, where does ownership lie?

Innovation vs. Permission

Should companies proactively check whether AI outputs resemble known personalities?

The answers remain unsettled.

What Happens Next?

The case now moves through early-stage litigation in California. If it proceeds, it could involve:

  • Forensic voice comparison experts
  • Internal AI training documentation
  • Licensing agreements for the voice actor
  • Consumer confusion surveys

A settlement is possible, but if the case reaches trial, it may set precedent for how courts evaluate AI-generated likeness claims.

Potential Industry Impact

If Greene prevails, the ruling could:

  • Require stricter voice similarity testing before AI deployment
  • Strengthen individual voice rights protections
  • Encourage AI companies to implement clearer attribution safeguards
  • Increase licensing agreements with professional voice talent

If Google prevails, it could signal broader legal tolerance for AI-generated voices that resemble, but do not directly copy, real individuals.

The Human Element

Beyond legal arguments, this dispute underscores a deeply personal concern.

For someone who spent decades refining vocal expression, shaping tone, timing, and emotional resonance, hearing a machine reproduce something similar can feel less like coincidence and more like appropriation.

As AI systems become increasingly capable of mimicking human nuance, the line between inspiration and imitation grows thinner.

Conclusion

David Greene’s lawsuit against Google is more than a dispute over one AI tool. It represents a pivotal moment in the evolving conversation about digital identity, intellectual property, and the human voice in the era of generative AI.

The outcome may help define how far artificial intelligence can go in replicating the most personal instrument humans possess: their voice.