10.03.2025 Newsletter
Protecting your own voice in times of AI voice cloning
Voice cloning as a new problem
Voices imitated by AI - known as "voice cloning" or "deepfake audio" - are increasingly found on the internet. Just a few minutes of voice material is now enough to imitate a voice deceptively well. As AI technology advances, it is becoming ever harder to distinguish original from imitated audio files.
Voice cloning can be used to put any words into a person's mouth - an enormous potential for abuse and a new problem area: beyond the deception of third parties that this enables, the reputation of the person concerned is also at stake. In addition, economic interests - such as those of professional dubbing artists, musicians, actors or podcasters - can be significantly impaired. The need for effective legal protection of one's own voice is therefore evident.
Protection against AI utilisation of voice recordings
Protection against unauthorised sound recordings and their publication is generally recognised, but cannot readily be transferred to voice cloning.
This is because German copyright law does not generally apply to new AI creations such as voice cloning. Only the underlying original recordings can constitute copyright-protected works.
While the ELVIS Act in Tennessee, USA, for example, has provided protection against the commercial misuse of AI-simulated voices since mid-2024, there have been no corresponding regulations in Germany and Europe to date. However, EU Regulation 2024/1689 (AI Act) introduces a labelling requirement for AI-generated content.
Nevertheless, the voice enjoys legal protection in Germany, in particular through the General Data Protection Regulation (GDPR), the general right of personality (APR) and Section 823 (1) of the German Civil Code (BGB).
Protection of one's own voice as part of the APR via Section 823 BGB
Recourse to the general right of personality under Article 1 (1) and Article 2 (1) of the German Basic Law (GG) ("APR") can offer protection against imitation via Section 823 (1) of the German Civil Code (BGB).
The APR under Art. 1 para. 1 and Art. 2 para. 1 GG in conjunction with Section 823 (1) BGB protects not only the right to one's own image but also the right to one's own voice as a recognisable feature of a person. The Federal Court of Justice (BGH) expressly recognised this in its "Marlene Dietrich" ruling (BGH, case no. I ZR 49/97).
The APR protects those whose voices are imitated both from interference with their personal honour - e.g. through an imitation that conflicts with their own non-material interests - and from unauthorised commercial exploitation. As early as 1989, the Higher Regional Court of Hamburg (case no. 3 W 45/89) clarified that the commercial imitation of a voice - for example for advertising purposes - may also be inadmissible.
Protection through commercialisation of your own voice via licensing agreements
Another protective approach lies in the active commercialisation of one's own voice: Canadian singer Grimes, for example, offers third parties a licence to use an AI-generated version of her voice in return for a share of the revenue - an alternative way of dealing with voice cloning.
Scope of protection, conclusion and outlook
Voices are not freely available data. They are subject to both privacy and data protection law. Which protection regime applies in individual cases depends on the type of infringement. However, the scope of protection is largely the same.
The following applies to data subjects: as soon as a voice is processed, imitated or published without consent, the unauthorised use can be prohibited, deletion of the data can be demanded, and information about the processing can be requested. In the event of violations of personal rights, civil law claims for injunctive relief and damages may also be considered. Enforcement nevertheless remains problematic - especially once a voice imitation has already been published, since tracing it back to its source on the internet is often difficult.
For companies that want to use voice cloning, compliance with data protection and privacy regulations is essential. The consent of the person concerned must be obtained before any use of a voice, be it for training an AI model or publishing voice imitations. The consent must relate specifically to the respective purpose and be revocable at any time. Failure to do so may result in severe fines.