Compliance & Internal Investigations / Antitrust Law and Merger Control
Newsletter, 12.06.2025
Key Client Briefing: Use of artificial intelligence in internal investigations
Internal investigations are an essential tool for uncovering potential misconduct in companies, averting damage and initiating the necessary measures. With increasing digitisation and the exponential growth in data volumes, digital e-discovery tools have become an integral part of any internal investigation. Providers of such tools promise not only increased efficiency through the use of artificial intelligence ("AI"), but also deeper insights and more precise results. At the same time, the use of AI in internal investigations raises new challenges and legal issues under Regulation (EU) 2024/1689 of 13 June 2024 laying down harmonised rules on artificial intelligence ("AI Act"). The AI Act addresses not only the developers of AI systems but also their users. Companies that use AI-based e-discovery tools are therefore subject to the requirements of the AI Act and face fines of up to EUR 35 million or 7% of their total worldwide annual turnover for the preceding financial year. The AI Act has been in force since 1 August 2024, and the general training obligations under Art. 4 AI Act and the provisions on prohibited AI practices under Art. 5 AI Act have applied since 2 February 2025. From 2 August 2026, the specific obligations for high-risk AI systems, which also play a major role in internal investigations, will apply as well. This makes compliance with the regulatory requirements a key issue for all parties concerned.
Advantages of using AI for internal investigations
The AI Act aims to comprehensively regulate the development and use of AI in the EU member states. It creates a harmonised legal framework for AI and is intended to ensure that AI technology, including in internal investigations, is developed and used safely and in accordance with EU values, including respect for fundamental rights, the rule of law and democracy. AI is already optimising numerous internal investigation processes:
1. Increased efficiency and error reduction
AI can analyse and categorise large volumes of data within a very short time. This minimises human errors that can occur due to fatigue or inattentiveness. In addition, AI enables analysis that is independent of language, format and source, which significantly reduces the effort required for translations or format adjustments.
Example: An AI-supported e-discovery tool can search through millions of e-mails and documents within a few hours, sort out irrelevant content and provide the relevant data records for further analysis. Providers offer such functions specifically optimised for large volumes of data.
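As a purely schematic illustration of this culling step, the following Python sketch de-duplicates documents by content hash and drops mail from an exclusion list. All data, names and the exclusion list are hypothetical; production e-discovery tools use far more sophisticated, AI-based relevance models.

```python
# A minimal sketch, assuming a hypothetical document set: remove exact
# duplicates and obviously irrelevant system mail before human review.
import hashlib

documents = [
    {"id": 1, "sender": "a.smith@corp.example", "body": "Quarterly pricing overview"},
    {"id": 2, "sender": "noreply@corp.example", "body": "Out of office auto-reply"},
    {"id": 3, "sender": "a.smith@corp.example", "body": "Quarterly pricing overview"},
]

IRRELEVANT_SENDERS = {"noreply@corp.example"}  # hypothetical exclusion list

def cull(documents):
    """Drop exact duplicates (by body hash) and excluded senders."""
    seen, kept = set(), []
    for doc in documents:
        digest = hashlib.sha256(doc["body"].encode()).hexdigest()
        if digest in seen or doc["sender"] in IRRELEVANT_SENDERS:
            continue
        seen.add(digest)
        kept.append(doc)
    return kept

print([d["id"] for d in cull(documents)])  # -> [1]
```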
2. Recognition of patterns and anomalies
AI algorithms can identify hidden patterns, trends and anomalies in data that could indicate compliance violations. This also includes analysing communication data to uncover any connections between people and processes.
Example: An AI system can flag conspicuous communication patterns in e-mails, e.g. unusually frequent interactions between certain employees shortly before a contract is signed. Providers integrate such pattern recognition functions into their platforms.
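The following Python sketch shows, in deliberately simplified form, what such a frequency-based flag might look like. The message records, names and threshold are hypothetical; real platforms rely on statistical and graph-based models rather than a fixed count.

```python
# A minimal sketch: flag sender/recipient pairs with unusually frequent
# contact in the days before a (hypothetical) contract signing date.
from collections import Counter
from datetime import date, timedelta

# Hypothetical records: (sender, recipient, date sent)
messages = [
    ("a.smith", "b.jones", date(2025, 3, 2)),
    ("a.smith", "b.jones", date(2025, 3, 3)),
    ("a.smith", "b.jones", date(2025, 3, 4)),
    ("c.lee", "b.jones", date(2025, 1, 10)),
]

def flag_frequent_contact(messages, contract_date, window_days=14, min_count=3):
    """Count messages per pair inside the window and flag heavy pairs."""
    window_start = contract_date - timedelta(days=window_days)
    counts = Counter(
        (sender, recipient)
        for sender, recipient, sent in messages
        if window_start <= sent <= contract_date
    )
    return [(pair, n) for pair, n in counts.items() if n >= min_count]

print(flag_frequent_contact(messages, date(2025, 3, 5)))
# -> [(('a.smith', 'b.jones'), 3)]
```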
3. Support with interviews and reporting
AI can help with the preparation of interviews by suggesting relevant questions based on the analysed data. During interviews, it can produce transcripts and flag inconsistencies in statements. Finally, AI can support the creation of reports by collating and structuring the relevant information.
Example: A system can automatically generate reports that summarise the most important findings from the analysed data and also insert visual representations such as diagrams or network analyses.
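As a purely schematic illustration of this reporting step, the following Python sketch groups hypothetical findings by topic and renders a plain-text summary; the visual elements mentioned above go beyond a few lines of code.

```python
# A minimal sketch, assuming hypothetical "findings" produced by earlier
# analysis steps: group them by topic and render a sectioned summary.
from collections import defaultdict

findings = [
    {"topic": "Pricing communication", "doc": "mail_0412", "note": "price list shared"},
    {"topic": "Pricing communication", "doc": "mail_0433", "note": "call arranged"},
    {"topic": "Document retention",    "doc": "file_0078", "note": "deletion request"},
]

def render_report(findings):
    """Group findings by topic and render a simple plain-text report."""
    by_topic = defaultdict(list)
    for f in findings:
        by_topic[f["topic"]].append(f)
    lines = ["Internal investigation: summary of findings", ""]
    for topic, items in by_topic.items():
        lines.append(f"{topic} ({len(items)} finding(s))")
        for f in items:
            lines.append(f"  - {f['doc']}: {f['note']}")
        lines.append("")
    return "\n".join(lines)

print(render_report(findings))
```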
4. Background research
AI facilitates research in publicly accessible sources, such as press articles or databases, and thus contributes to a more precise clarification of the facts.
Example: Tools offer functions for searching publicly accessible data sources and integrating the relevant information into the internal investigation.
Regulation under the AI Act
Companies that use AI-supported tools in internal investigations often obtain the systems as standardised solutions from specialised providers. In these cases, the companies generally do not assume the role of provider of the AI system, as they did not develop it themselves (Art. 3 No. 3 AI Act). Rather, they generally act as operators, because they use the systems of third parties under their own responsibility for operational purposes (Art. 3 No. 4 AI Act).
1. Operator obligations
As the operator of an AI system, companies are obliged to fulfil the specific requirements of the AI Act, which are based on the risk classification of the AI system used. The use of certain AI practices is completely prohibited in Art. 5 of the AI Act. Operators that use low-risk AI systems are primarily subject to general training obligations. Operators of high-risk AI systems are subject to particularly strict requirements. According to Art. 26 of the AI Act, the operator of a high-risk system is responsible in particular for ensuring that the AI system is used as intended, i.e. exclusively for the purpose specified by the provider. This includes strict compliance with the instructions provided by the provider and the continuous monitoring of operation to ensure the safety and compliance of the system.
2. Risk of fines
Failure to comply with the strict obligations for operators of high-risk AI systems can have serious consequences. According to Art. 99 (4) (e) of the AI Act, fines of up to EUR 15 million or 3% of the company's total worldwide annual turnover may be imposed, whichever is higher. Violations of the prohibitions of certain AI practices set out in Art. 5 of the AI Act can even be penalised with fines of up to EUR 35 million or 7% of total worldwide annual turnover, Art. 99 (3) of the AI Act. This considerable risk of sanctions underlines the need for companies to exercise the utmost care in their role as operators of an AI system and to implement the regulatory requirements consistently.
3. Prohibited AI practices (Art. 5 AI Act)
The AI Act prohibits the use of AI systems that employ manipulative or exploitative practices to materially influence the behaviour of individuals, Art. 5 (1) (a) AI Act. In the context of internal investigations, this could include, for example, the use of AI systems that deliberately manipulate an employee's behaviour when making statements. The use of AI-based emotion recognition to analyse how interviewees behave when making statements is likewise prohibited, Art. 5 (1) (f) AI Act.
4. E-discovery tools rarely high-risk within the meaning of the AI Act
Operators of high-risk AI systems are subject to particularly strict obligations under the AI Act. The use of AI systems as part of internal investigations therefore raises the question of whether such systems should always be categorised as high-risk AI pursuant to the AI Act.
(a) High-risk AI in case of decisions that affect the employment relationship
Pursuant to Art. 6 (2) in conjunction with Annex III No. 4 (b) AI Act, the high-risk classification of AI systems in internal investigations relates in particular to applications that are used for decisions affecting the terms of employment relationships, such as promotions, dismissals or the evaluation of performance and behaviour.
A tool that evaluates the performance of employees based on e-mails, calendar entries and other digital traces and automatically forwards these evaluations to the HR department for purposes of a decision on promotions or dismissals therefore falls under the high-risk classification. In such cases, the AI system directly and intentionally influences decisions that are existential for the employees concerned. This justifies the categorisation as high-risk AI, as there are considerable risks to the fundamental rights of the persons affected.
(b) Personnel decisions are not their objective
In most cases, however, e-discovery tools are unlikely to be categorised as high-risk AI systems within the meaning of Art. 6 (2) in conjunction with Annex III No. 4 (b) of the AI Act. There is much to suggest that their function and purpose differ fundamentally from the use cases listed in Annex III No. 4 of the AI Act. While the focus there is on the assessment of individual characteristics and attributes of employees, e-discovery tools are aimed at the factual analysis of data in order to uncover potentially unlawful behaviour.
The intended purpose of the system is decisive for the categorisation. According to Art. 3 No. 12 of the AI Act, the intended purpose is determined by the respective provider. Providers of e-discovery tools typically describe their systems as serving to analyse, structure and categorise data, but not to support or bring about personnel decisions.
In an antitrust investigation, for example, an e-discovery tool can be used to search e-mails for terms such as "price", "joint" or "fix" in order to identify indications of illegal agreements. The focus here is on the objective clarification of the facts - not the assessment of individual characteristics or features of employees.
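A deliberately naive Python sketch of such a search-term filter follows; the documents are invented, and the term list is taken from the example above. Real e-discovery tools use considerably richer, AI-based relevance models.

```python
# A minimal sketch, assuming hypothetical e-mail bodies: return every
# e-mail that contains at least one term from a predefined search list.
SEARCH_TERMS = ["price", "joint", "fix"]

emails = [
    {"id": 1, "body": "Let's fix the price jointly before the tender."},
    {"id": 2, "body": "Minutes of the quarterly safety briefing."},
]

def match_terms(emails, terms):
    """Return (e-mail id, matched terms) for every e-mail with a hit."""
    hits = []
    for mail in emails:
        body = mail["body"].lower()
        matched = [t for t in terms if t in body]
        if matched:
            hits.append((mail["id"], matched))
    return hits

print(match_terms(emails, SEARCH_TERMS))
# -> [(1, ['price', 'joint', 'fix'])]
```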
Even if the results of an internal investigation can have consequences under labour law, the link between the AI system and the personnel decision is only indirect. The responsibility for labour law measures always lies with the management, which carries out normative and legal assessments.
(c) Not high-risk AI when there is limited influence on human decision-making
Art. 6 (3) of the AI Act provides for additional exemptions from the high-risk classification of an e-discovery tool if the AI system only performs preparatory tasks or identifies deviations in decision patterns that are subsequently subject to human review. The aim of the Act is to exempt systems from the catalogue of obligations for high-risk AI systems if they only perform supporting or preparatory tasks and do not carry out any independent substantive assessments, because their use then only poses low risks.
This exception is therefore of particular relevance for e-discovery tools whose purpose is limited to supporting investigations without making normative or legal judgements themselves. They help to increase efficiency without compromising the autonomy of human decision-making.
Typical application examples of e-discovery tools that fall under Art. 6 (3) of the AI Act include data filtering based on predefined criteria, such as search term lists. This also covers tools that recognise and mark communication patterns according to certain parameters, such as unusually frequent contact between certain employees. Applications that collate and structure data from various sources to make it clearer and easier for human reviewers to analyse, without making their own assessments or drawing their own conclusions, are also included; a schematic sketch of such a purely preparatory collation step follows below.
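The following Python sketch illustrates, with hypothetical data, what such a preparatory collation step might look like: records from several sources are merged into one chronological timeline for human review, without any scoring or assessment of individuals.

```python
# A minimal sketch, assuming hypothetical records from three sources:
# merge them into a single chronological timeline for human review.
from datetime import datetime

emails = [{"ts": "2025-03-02T09:15", "src": "mail",  "text": "Offer discussed"}]
chats  = [{"ts": "2025-03-01T17:40", "src": "chat",  "text": "Call me re: tender"}]
files  = [{"ts": "2025-02-28T11:05", "src": "drive", "text": "draft_contract_v2"}]

def build_timeline(*sources):
    """Merge heterogeneous records and sort them by timestamp only."""
    merged = [rec for source in sources for rec in source]
    return sorted(merged, key=lambda r: datetime.fromisoformat(r["ts"]))

for rec in build_timeline(emails, chats, files):
    print(rec["ts"], rec["src"], rec["text"])
```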
5. Training obligations when using AI in internal investigations
Even where e-discovery tools are used outside the scope of the high-risk regime, their users remain subject in particular to training obligations.
Art. 4 of the AI Act obliges operators of AI systems to take measures to ensure that their staff and other persons involved in the operation and use of AI systems on their behalf have a sufficient level of AI literacy. AI literacy encompasses the skills to use AI, knowledge of the technology and an understanding of the specific use case. Before AI systems are used in internal investigations, the employees/users involved must therefore be comprehensively trained.
6. Data protection and labour law
Furthermore, aspects of labour law and data protection law are of central importance when using AI in internal investigations. For example, the co-determination rights of the works council under Sections 87 (1) No. 1, No. 6, 94 and 80 (2) of the German Works Constitution Act (Betriebsverfassungsgesetz, BetrVG) may play a role, particularly when technical equipment is used to monitor behaviour. Under data protection law, it must be ensured in particular that the processing of personal data in the context of e-discovery measures complies with the requirements of Section 26 (1) sentence 1 of the German Federal Data Protection Act (Bundesdatenschutzgesetz, BDSG) and Art. 6 (1) (f) GDPR and, if applicable, Art. 9 GDPR, and that the information obligations under Art. 12 et seq. GDPR are complied with.
7. Outlook and recommendations for action
The continuous development of AI systems means that the technologies used in internal investigations are becoming ever more powerful and, at the same time, more intrusive. AI systems will increasingly be able to carry out more complex analyses and gain deeper insights into data. For operators of such systems, however, this also means that the effort required to justify why a specific AI system is not to be categorised as high-risk AI, and is therefore not subject to the strict operator obligations under Art. 26 of the AI Act, will increase.
Operators are therefore strongly advised to classify the AI systems they use at an early stage, monitor them continuously and train their staff. This includes checking the intended purpose of the system as well as regularly evaluating its actual use and potential risks. In order to fulfil these requirements, companies should take the following measures promptly:
- Review the current situation: Identify whether AI systems are being used in the company and whether they might be categorised as high-risk.
- Ensure compliance: Develop AI governance that is right for the company.
- Carry out training courses: Ensure that employees have the necessary AI literacy.
- Promote cooperation: Where necessary, involve works councils, data protection officers and specialist departments at an early stage.
Conclusion
The integration of AI into internal investigations offers significant opportunities, particularly in terms of efficiency, accuracy and cost reduction. At the same time, its use requires a deep understanding of the technological possibilities and limitations as well as of the legal framework, in particular the AI Act. Successful integration also requires a thorough understanding of the applicable labour law and data protection framework. As the importance of AI grows, the scope of AI governance and compliance measures will have to evolve with it. Companies should start implementing the AI Act without delay, as the general training obligations under Art. 4 of the AI Act and the provisions on prohibited AI practices in Art. 5 of the AI Act already apply.