Focus IT&C – 2nd Quarter 2025

We have compiled some important and exciting new developments and case law from IT law and data protection for you. We hope you enjoy reading it!

1. Update of draft German Act on the Federal Office for Information Security

2. Higher Regional Court of Cologne: Meta may use public posts for AI training - full text of judgement published

3. High-risk AI in the HR sector

4. Key client briefing: use of artificial intelligence in internal investigations

5. The new EU Data Act: Overview and status of its implementation in German companies

6. The German Accessibility Act - more accessibility on the internet from the end of June 2025

7. Oppenhoff’s DPA checker

1. Update of draft German Act on the Federal Office for Information Security [Gesetz über das Bundesamt für Sicherheit in der Informationstechnik, BSIG]: status and development of NIS-2 implementation in Germany

Last year, we reported several times on the planned transposition of the NIS-2 Directive (EU) 2022/2555 into German law (see Newsletters of 18 December 2024 and 26 March 2024). The aim of the NIS-2 Directive is to create uniform minimum standards for cybersecurity across Europe. This should ensure a higher level of protection for critical infrastructures as well as for essential and important entities.

However, its implementation in Germany failed due to the end of the so-called "traffic light" coalition (see Newsletter dated 18 December 2024).

Current implementation status

After remaining "quiet" for several months, the implementation project is now back on track. A few weeks ago, an unofficial draft bill of the implementation law ("BSIG-E") became publicly known ("leaked"), which already contained initial adjustments to the final draft of the past legislative period ("BSIG-E (old)").

On 24 June 2025, the official draft bill of the BSIG-E, in the version dated 23 June 2025, was circulated to the relevant associations for consultation and published accordingly. The unofficial plan is for the cabinet draft to be submitted to the Bundestag before the summer break and for the law to be passed by the end of the year.

Contents of the new draft bill

The new draft bill of the BSIG-E is based on the draft version of the "traffic light" coalition (BSIG-E (old)), but contains several changes.

In addition to slightly modified wording in the catalogue of obligations in Section 30 BSIG-E and a clarification to the effect that the obligation to maintain systems to detect attacks is only to apply to those parts of a company that actually qualify as KRITIS facilities, as opposed to the entire company (Section 31 (2) BSIG-E), the type of facility that "manufactures or imports chemical substances", among others, has also been revised.

The most significant change, however, concerns the determination of the size of the company, i.e. the question of whether the thresholds of 50 employees or EUR 10 million annual turnover and annual balance sheet total ("key figures") relevant for the applicability of the BSIG-E have been exceeded.

The controversial Section 28 (3) BSIG-E (old) has been revised and reworded – unlike in the "unofficial" leak. According to Section 28 (3) BSIG-E,

"when allocating to one of the types of facilities pursuant to Annexes 1 and 2 (...), business activities that are negligible with regard to the overall business activity of the facility may be disregarded".

This means that the previous company-friendly rule, according to which only those key figures of a company attributable to the regulated activities were to be taken into account when determining the thresholds, no longer applies.

Accordingly, pursuant to the new version, the key figures of the entire company must now be used to determine the applicability thresholds, whereby those business activities that are negligible in comparison to the company's overall business activities can be disregarded.

In practice, this means that companies will have to determine at which point business activities can be deemed negligible. As the NIS-2 Directive does not provide for a corresponding restriction of the threshold calculation, there are also doubts as to whether the new version complies with European law.

Conclusion and outlook

After such a long time, the present draft bill is now setting the national implementation process back into motion - a long overdue step, as Germany is already well behind schedule with the implementation.

The new version of Section 28 (3) BSIG-E has significant consequences for companies potentially affected by it. Due to the change in the calculation of the key figures, more companies will fall within the scope of the BSIG-E than under the previous regulation. Affected companies are therefore advised to reassess whether they are subject to the requirements of the BSIG-E.

It remains to be seen whether significant changes will be made in the further course of the legislative process and whether the planned timeframe for entry into force at the end of 2025 can be met. As most of the obligations will apply upon entry into force, potentially affected companies should follow developments closely and start implementing measures in good time.

Christian Saßenbach


2. Higher Regional Court of Cologne: Meta may use public posts for AI training - full text of judgement published

On 23 May 2025, the Higher Regional Court (Oberlandesgericht, OLG) of Cologne ruled in summary proceedings that Meta Platforms Ireland Ltd. (Meta) is entitled to use data of users of the Facebook and Instagram services from the EU and EEA to train its AI model Meta AI (Large Language Model Meta AI, "Llama"). This also includes the processing of sensitive data (Art. 9 GDPR), such as health data (see press release). The eagerly awaited reasons for the judgement were published on 18 June 2025 (full text of the judgement can be found here).

Background of the proceedings

In April 2025, Meta announced that it would be training its AI model Llama with user data from the online services Facebook and Instagram from 27 May 2025. For the AI training, data of adult users that had been input via a "public" user account (so-called first-party data) and data on the interactions of users with the Llama AI model (so-called flywheel data) were to be used. The aim of the training was in particular to adapt Llama to regional practices. Meta based its planned data processing on the legal basis of legitimate interests (Art. 6 (1) f) GDPR).

The consumer advice centre Verbraucherzentrale Nordrhein-Westfalen e.V. (vznrw) sought to thwart this practice and preventatively applied for an injunction against the AI training. In its view, Meta had not sufficiently demonstrated that the corresponding data processing was necessary and proportionate to its legitimate interests. Furthermore, the AI training violated the ban on processing sensitive data (Art. 9 (1) GDPR).

Decision of the court

The OLG rejected vznrw's application for an injunction. Besides aspects of the Digital Markets Act (DMA), data protection issues also and especially played a decisive role. In its summary examination, the OLG ruled that the processing of the personal data contained in the first-party data is permissible without consent. In its decision, the OLG took into account the opinion of the European Data Protection Board (EDPB) 24/2024 on data protection aspects of the processing of personal data in the context of AI models of 17 December 2024 and the case law of the European Court of Justice (ECJ). The OLG’s reasoning is as follows:

1. Legitimate interests as a legal basis

According to the findings of the OLG, Meta can base the processing of first-party data for training the Llama AI model on the legal basis of a legitimate interest (Art. 6 (1) f) GDPR).

1.1 Legitimate interest

With its further development of Llama, Meta is pursuing a legitimate interest. Legitimate interests include economic interests, which – in the opinion of the EDPB and German data protection authorities – also include the training of AI models. Meta's interest is sufficiently clear, real, present and not merely speculative. Ultimately, the purpose of the AI training is to optimise Llama with regard to regional practices.

1.2 Necessity

According to the OLG, Meta requires the processing of first-party data in order to pursue its legitimate interests. There are no alternative data processing methods that are equally suitable and yet less intrusive than the planned AI training: large amounts of data are indispensable for training AI models. Using only flywheel data from interactions with Llama or synthetic data as opposed to first-party data would not produce comparable training results. As Meta is planning to train with "regional" data, web scraping or crawling - which are more intrusive anyway - would not be equally effective.

The OLG also emphasised that the necessity of the data processing does not have to be examined separately for each individual datum (data point). When training an AI model, masses of data are processed to generate patterns and probability parameters, which is why the individual datum (almost) never has any measurable influence. An examination of the necessity of each individual datum is therefore neither practicable nor sensible. Otherwise, it would be virtually impossible to conduct AI training on the legal basis of a legitimate interest.

1.3 Balancing of interests

According to the OLG, the interests and (fundamental) rights of the users do not outweigh the interests pursued by Meta. In line with the opinion of the EDPB and the case law of the ECJ, the decisive factor here is the impact of the data processing on the data subjects and whether they could have expected such processing.

1.3.1 Justified expectations

Firstly, the data subjects had to expect the processing of their data from 26 June 2024 at the latest. This was the date on which Meta informed its users that public user posts would be used to train Llama in the future. Although Meta later withdrew this announcement, the AI training was only unexpected for the data subjects prior to the announcement. To this extent, the interests of the data subjects do not outweigh Meta's interests.

The OLG also makes particular reference to the legislative objective that the EU is to assume a leading role in AI development (recital 8 of the AI Act). This presupposes a uniform legal framework (recital 1 of the AI Act).

1.3.2 Overriding interests of the provider

The OLG recognises that the processing of first-party data constitutes an encroachment upon the right to self-determination and points out problems in training AI. In the case of generative AI models, there is a risk of a lack of transparency and the processing of third-party data, particularly that of minors. Furthermore, this may restrict the right to erasure, for example in the case of posts by third parties or data already entered into the AI.

The OLG nevertheless considered the interests pursued by Meta with the AI training to outweigh the risks:

  • Users are able to retract the public status of their "public user account" and/or the posts set by them as "public", or to object to the data processing. In this case, data already incorporated into the AI model will no longer be used in future training sessions and will gradually fade.
  • In addition, the risks for data subjects are limited by de-identification measures. Meta removes full names, e-mail addresses and comparable directly identifying data, for example.
  • The way in which AI is used also reduces the risks for data subjects. Although the data is not completely anonymised, AI models are not data archives, but generally only consist of probability parameters. The OLG considers the remaining risk of the AI reconstructing ("remembering") and re-issuing personal data to be low.
  • Possible subsequent infringements when using the AI model (such as disinformation, manipulation) are not sufficiently foreseeable. Moreover, these can also be prosecuted separately.
  • The data at issue has ultimately already been published on the internet anyway. In this respect, there is no threat of any new disadvantages for the data subjects, e.g. of a professional or social nature.

2. No ban on processing special categories of personal data

Another important finding concerns the processing of sensitive data (Art. 9 (1) GDPR). The OLG also did not consider the processing of such sensitive data in the context of the AI training at issue to be unlawful. Although it made reference to the broad scope of application of sensitive data, it did not consider there to be a general ban on processing.

2.1 Exception for self-published data

Firstly, there is to be an exception (Art. 9 (2) e) GDPR) in the event of self-entered sensitive data. This is data that has been made public by the user themselves. However, this exception does not apply if the posts published by the user (also) contain third-party data. The waiver of the protection of sensitive data can only be declared by each data subject themselves.

2.2 No ban on processing third-party data

In the case in dispute, however, the OLG also does not consider the data of third parties to be covered by the ban on processing sensitive data (Art. 9 (1) GDPR).

2.2.1 Activity-related peculiarity

In its reasoning, the OLG draws a parallel to the case law of the ECJ on de-listing in the case of the Google search engine (ECJ, judgement of 24 September 2019, C-136/17 - GC and Others). There, the ECJ emphasised the activity-related particularities of search engine operators, in particular the difficulty for them of determining ex ante which data they process before setting the link, given the large amount of linked content. On this basis, the ECJ ruled that, although this does not justify exempting search engine operators from the ban on processing sensitive data (Art. 9 GDPR), it does have an impact on the scope of the operator's responsibilities and obligations (loc. cit., margin No. 45). Against this background, the processing ban of Art. 9 GDPR cannot be applied ex ante and systematically to a search engine operator, but rather by way of an ex post review of the processing at the data subject’s request (loc. cit., margin No. 47). Otherwise, the business model of a search engine would hardly be economically viable any more.

Here, an "activation" of the prohibition is required. The OLG applies this restrictive interpretation to the AI training at issue. Similar to the search engine operator, Meta is unlikely to have any influence on which specific data is included in the AI training or be able to check each individual content in an economically justifiable manner prior to the AI training.

According to these criteria, the third parties concerned would first have to apply for the removal of their sensitive personal data from the published post or the training data set. Without this "activation", the processing ban does not apply to the AI training.

2.2.2 Pioneering role in AI development

In addition, the desired leading role in the development of AI could hardly be achieved without the processing of sensitive third-party data. The EU legislator wants to create a uniform legal framework for this pioneering role. The training of AI models requires large data sets, which often contain sensitive data. A general ban on the processing of sensitive third-party data would effectively make AI training impossible and run contrary to a uniform legal framework.

Finally, the OLG points out that the EU legislator was aware of the need for AI training with large amounts of data. Consequently, it introduced exemptions for targeted data processing (such as Art. 10 (5) of the AI Act for training high-risk AI in order to prevent bias). The OLG concludes from this that the EU legislator cannot have assumed that the non-targeted processing of sensitive data - which is the subject of the dispute here - is unlawful. Otherwise, it would have seen a need to regulate this case as well and would have created corresponding provisions. Since it did not do so, the legislator cannot have intended the ban on processing (Art. 9 (1) GDPR) to apply.

Conclusion

The judgement of the OLG Cologne opens up the possibility for providers of AI models to train them in compliance with data protection regulations under conditions comparable to those in the case up for decision - even regarding sensitive data.

However, the decision could also be important for companies that operate AI systems as deployers, in which AI models that have been trained in this way are integrated. In its opinion of December 2024, the EDPB stated that unlawful prior data processing may, in exceptional cases, affect the lawfulness of subsequent data processing in the context of the deployment of the AI system. If the upstream AI training is already considered lawful, this provides further arguments in favour of the lawful operation by companies using the AI as deployers.

Please note that the judgement was issued in summary proceedings (preventive legal protection) in which the Higher Regional Court was the first and last instance and had no opportunity to refer questions of law relevant to the decision to the ECJ. It is anticipated that this will be followed by main proceedings, in the context of which a referral to the ECJ would be possible. It remains to be seen how the ECJ will position itself in this case. Companies should keep a close eye on further developments concerning the data protection assessment of AI training.

We will keep you up-to-date on new developments.

Valentino Halim


3. High-risk AI in the HR sector

With the AI Act, the EU has created the world's first comprehensive legal framework for artificial intelligence. The new regulations are particularly important in the HR sector, where AI offers a wide range of potential applications. The AI Act has been in force since 1 August 2024 and the employer training obligations under Art. 4 of the AI Act and the provisions on prohibited AI practices under Art. 5 of the AI Act have been applicable since February of this year. From 2 August 2026, the specific obligations for high-risk AI systems, which play a central role in HR, will also apply. As the implementation of IT systems in companies can generally take a considerable amount of time - especially in light of the strict requirements for AI systems - we are already taking this opportunity to provide you with an overview of which AI systems are categorised as high-risk under the AI Act in the HR sector and what obligations this entails for employers.

1. What is an AI system?

The term "AI system" is the central point of reference for the AI Act. According to Art. 3 No. 1 of the AI Act, an AI system is software that is not based solely on rules programmed by humans, but itself derives how outputs are created. Many software programmes already use so-called Large Language Models (LLMs) such as ChatGPT, DeepSeek, Claude or other AI models. Only software that works purely deterministically is not considered AI within the meaning of the AI Act.

2. High-risk AI systems in the HR sector

The AI Act categorises AI systems into different risk categories: systems that implement prohibited practices (Art. 5 AI Act), high-risk systems (Art. 6 AI Act) and systems with low risk. The focus of regulation is on high-risk systems. There is a particularly high risk in the HR sector. This is because the following HR applications are categorised as high-risk:

(a) Personnel selection and recruiting: systems that screen and evaluate applications and personalise communication with applicants, such as automated candidate screening tools like "Cornerstone - Galaxy" or video interview platforms with language analyses like "My Interview".

(b) Decisions on employment conditions: AI systems that influence salary increases, promotions or terminations, so-called "human capital management systems" (HCM systems), such as "Workday".

(c) Assignment of tasks and personnel deployment planning: systems that assign tasks based on individual behaviour or personal characteristics (task management systems), such as "Asana", and tools for individual personnel deployment planning that predict staffing requirements and detect anomalies such as overtime, for example the AI-supported workforce management tool from "UKG (Ultimate Kronos Group)".

(d) Monitoring and evaluation of employees: tools that analyse employee performance or behaviour, for example by evaluating work patterns or performance such as "Microsoft Viva Insights".

This is regulated in Art. 6 (2) in conjunction with Annex III No. 4 of the AI Act. These definitions are broad and HR software that falls into one of the areas mentioned (by including even just one of these functions) is already subject to the strict regulation for high-risk AI systems.

Point (d) in particular could be a major gateway, as it is very similar to the wording of Section 87 (1) No. 6 of the German Shop Constitution Act (Betriebsverfassungsgesetz, BetrVG), according to which co-determination rights exist when introducing technical equipment that monitors the behaviour or performance of employees. According to case law, this is ultimately the case for any software that processes employee data. If this is applied to the AI Act, any HR software that uses AI would be a high-risk AI system. Even harmless AI-supported applications in time-recording systems, tools for holiday planning or programmes for creating performance reports could then fall under the strict requirements for high-risk AI.

The majority of software providers currently advertise AI-supported functions. However, whether the system being offered actually constitutes an AI system or even a high-risk system within the meaning of the AI Act needs to be analysed in detail. At the latest when the specific obligations for high-risk AI systems come into force on 2 August 2026, software providers will also adapt their marketing slogans.

The only possible rescue option in this case would be in accordance with Art. 6 (3) of the AI Act, which excludes systems from the high-risk area that only perform preparatory tasks or structure data without carrying out independent content assessments.

3. Obligations for users of high-risk AI systems

Companies that use AI-supported HR tools often obtain the systems as solutions from specialised providers. In these cases, the companies are not the providers of the AI system, as they have not developed it themselves (Art. 3 No. 8 AI Act). Instead, they take on the role of the deployer (Art. 3 No. 4 of the AI Act) and are therefore subject to strict legal requirements when using high-risk AI, which aim to protect the safety and fundamental rights of the employees concerned and at the same time promote innovation in the EU. According to Art. 26 of the AI Act, these requirements include in particular

  • Complying with operating instructions (para. 1): Deployers must ensure that the AI system is used in accordance with the operating instructions provided by the provider.
  • Ensuring human oversight (para. 2): Supervision of the AI system must be assigned to qualified persons who have the necessary competence and authorisation.
  • Checking input data (para. 4): Deployers are obliged to ensure that the input data corresponds to the intended purpose of the AI system and is sufficiently representative.
  • Operational monitoring and reporting (para. 5): Deployers must continuously monitor the operation of the AI system and, in the event of risks or serious incidents, immediately inform the provider, distributor and relevant authorities.
  • Retaining logs (para. 6): Deployers must retain automatically generated logs of the AI system for at least six months, unless other legal requirements apply.
  • Notifying employees and their representatives (para. 7): Before a high-risk AI system is put into operation in the workplace, employees and their representatives, i.e. works councils, must be informed about the use of the system.

Since February of this year, employers have also already been obliged to train their employees involved with AI systems, regardless of the level of risk of an AI system (Art. 4 of the AI Act). Technical knowledge, professional experience, training and further education as well as the specific context of AI use must be taken into account.

4. Consequences of infringements and recommendations for action

Art. 99 of the AI Act provides for severe sanctions in the event of violations of the AI Act. Violations of the obligations for high-risk AI systems can result in penalties of up to 15 million euros or 3% of annual global turnover, Art. 99 (4) of the AI Act.

To ensure your timely preparation, the following measures should be taken now:

  • Check the current situation: Identify whether your company uses AI systems and whether these could be categorised as high-risk.
  • Ensure your compliance: Develop AI governance that is suitable for your company.
  • Carry out training measures: Ensure employees have the necessary AI literacy.
  • Promote collaboration: Involve works councils, data protection officers and specialist departments at an early stage, if necessary.

Conclusion

The use of AI systems in HR opens up a wide range of opportunities for companies to optimise processes and make them more efficient. At the same time, however, it entails considerable liability risks if the strict requirements of the AI Act are breached. It is essential for employers to take measures at an early stage to counter these risks, from identifying potentially high-risk AI systems to developing company-specific AI governance and training employees. We would be happy to support you in this connection.

Dr. Marc Hilber, Dr. Axel Grätz, Jörn Kuhn, Annabelle Marceau


4. Key client briefing: use of artificial intelligence in internal investigations

Internal investigations are an essential tool for uncovering potential misconduct in companies, averting damage and initiating the necessary measures. With increasing digitisation and the exponential growth in the volume of data, digital e-discovery tools have become an integral part of any internal investigation. Providers of such tools promise not only increased efficiency through the use of artificial intelligence ("AI"), but also deeper insights and more precise results. At the same time, the use of AI in internal investigations brings with it new challenges and legal issues under European Regulation 2024/1689 of 13 June 2024 laying down harmonised rules on artificial intelligence ("AI Act"). The AI Act addresses not only the developers of AI systems but also their users. Companies that use AI-based e-discovery tools are therefore subject to the requirements of the AI Act and face the high risk of fines of up to EUR 35 million or 7% of their total global annual turnover of the previous financial year. The AI Act has been in force since 1 August 2024 and the general training obligations under Art. 4 AI Act and the provisions on prohibited AI practices under Art. 5 AI Act have been applicable since February of this year. From 2 August 2026, the specific obligations for high-risk AI systems, which also play a major role in internal investigations, will also apply. This makes compliance with the regulatory requirements a key issue for all parties concerned.

Advantages of using AI for internal investigations

The AI Act aims to comprehensively regulate the use and development of AI in the EU member states. It creates a harmonised legal framework for AI and aims to ensure that AI technology is also developed and used in internal investigations by the parties concerned in a safe manner and in accordance with EU values, including respect for fundamental rights, the rule of law and democracy. AI is already optimising numerous internal investigation processes:

1. Increased efficiency and error reduction

AI can analyse and categorise large amounts of data in the shortest possible time. This minimises human errors that can occur due to fatigue or inattentiveness. In addition, AI enables language, format and source-independent analysis, which significantly reduces the effort required for translations or format adjustments.

Example: An AI-supported e-discovery tool can search through millions of e-mails and documents within a few hours, sort out irrelevant content and provide relevant data records for further analysis. Providers offer such functions that are specially optimised for large volumes of data.

2. Recognition of patterns and anomalies

AI algorithms can identify hidden patterns, trends and anomalies in data that could indicate compliance violations. This also includes analysing communication data to uncover any connections between people and processes.

Example: An AI system would be able to recognise conspicuous communication patterns in e-mails, e.g. unusually frequent interactions between certain employees shortly before a contract is signed. Providers integrate such pattern recognition functions into their platforms.
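Purely for illustration, the following simplified Python sketch shows how such a frequency-based pattern analysis could work in principle: e-mail interactions between pairs of employees are counted per month, and pairs whose contact frequency clearly exceeds their usual level are flagged for human review. All names, figures and thresholds are hypothetical and are not taken from any specific provider's product.

```python
# Illustrative sketch only - not based on any specific e-discovery product.
# Flags sender/recipient pairs whose monthly e-mail frequency clearly exceeds
# their own average, leaving the assessment of the findings to human reviewers.

from collections import defaultdict
from statistics import mean

# (sender, recipient, "YYYY-MM") - hypothetical sample data
emails = [
    ("a.smith", "b.jones", "2025-01"),
    ("a.smith", "b.jones", "2025-02"),
    ("a.smith", "b.jones", "2025-03"),
    ("a.smith", "b.jones", "2025-03"),
    ("a.smith", "b.jones", "2025-03"),  # spike shortly before contract signing
    ("c.miller", "d.brown", "2025-03"),
]

def flag_spikes(mails, factor=1.5):
    """Return (pair, month, count) entries where the monthly count exceeds
    `factor` times the pair's average monthly e-mail count."""
    per_pair = defaultdict(lambda: defaultdict(int))
    for sender, recipient, month in mails:
        per_pair[(sender, recipient)][month] += 1

    flagged = []
    for pair, months in per_pair.items():
        counts = list(months.values())
        if len(counts) < 2:
            continue  # no baseline to compare against
        average = mean(counts)
        for month, count in months.items():
            if count > factor * average:
                flagged.append((pair, month, count))
    return flagged

print(flag_spikes(emails))
# -> [(('a.smith', 'b.jones'), '2025-03', 3)]
```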

3. Support with interviews and reporting

AI can help with the preparation of interviews by suggesting relevant questions based on the analysed data. During interviews, it can create protocols and recognise inconsistencies in statements. Finally, AI can support the creation of reports by collating and structuring the relevant information.

Example: A system can automatically generate reports that summarise the most important findings from the analysed data and also insert visual representations such as diagrams or network analyses.

4. Background research

AI facilitates research in publicly accessible sources, such as press articles or databases, and thus contributes to a more precise clarification of the facts.

Example: Tools offer functions for searching publicly accessible data sources and integrating the relevant information into the internal investigation.

Regulation under the AI Act

Companies that use AI-supported tools in internal investigations often obtain the systems as standardised solutions from specialised providers. In these cases, the companies generally do not assume the role of provider of the AI system, as they did not develop it themselves (Art. 3 No. 8 AI Act). Rather, they generally assume the role of operator because they are using the systems of third parties under their own responsibility for operational purposes (Art. 3 No. 4 AI Act).

1. Operator obligations

As the operator of an AI system, companies are obliged to fulfil the specific requirements of the AI Act, which are based on the risk classification of the AI system used. The use of certain AI practices is completely prohibited in Art. 5 of the AI Act. Operators that use low-risk AI systems are primarily subject to general training obligations. Operators of high-risk AI systems are subject to particularly strict requirements. According to Art. 26 of the AI Act, the operator of a high-risk system is responsible in particular for ensuring that the AI system is used as intended, i.e. exclusively for the purpose specified by the provider. This includes strict compliance with the instructions provided by the provider and the continuous monitoring of operation to ensure the safety and compliance of the system.

2. Risk of fines

Failure to comply with the strict obligations for operators of high-risk AI systems can have serious consequences. According to Art. 99 (4) (e) of the AI Act, fines of up to 15 million euros or 3% of the company's global annual turnover may be imposed. Violations of the prohibitions of certain AI practices set out in Art. 5 of the AI Act can even be penalised with fines of up to 35 million euros or 7% of annual global turnover, Art. 99 (3) of the AI Act. This considerable risk of sanctions emphasises the need for companies to exercise the utmost care in their role as operators of an AI system and to consistently implement the regulatory requirements.

3. Prohibited AI practices (Art. 5 AI Act)

The AI Act prohibits the use of AI systems that use manipulative or exploitative practices to significantly influence the behaviour of individuals, Art. 5 (1) (a) AI Act. In the context of internal investigations, this could include, for example, the use of AI systems that deliberately manipulate an employee’s behaviour when making statements. The analysis of human statement behaviour in interviews by means of emotion recognition using AI is also prohibited under the AI Act, Art. 5 (1) (f) AI Act.

4. E-discovery tools rarely high-risk within the meaning of the AI Act

Operators of high-risk AI systems are subject to particularly strict obligations under the AI Act. The use of AI systems as part of internal investigations therefore raises the question of whether such systems should always be categorised as high-risk AI pursuant to the AI Act.

(a) High-risk AI in case of decisions that affect the employment relationship

Pursuant to Art. 6 (2) in conjunction with Annex III No. 4 (b) AI Act, the high-risk classification of AI systems in internal investigations relates in particular to applications that are used for decisions that influence the conditions of employment relationships, such as promotions, dismissals or the evaluation of performance and behaviour.

A tool that evaluates the performance of employees based on e-mails, calendar entries and other digital traces and automatically forwards these evaluations to the HR department for purposes of a decision on promotions or dismissals therefore falls under the high-risk classification. In such cases, the AI system directly and intentionally influences decisions that are existential for the employees concerned. This justifies the categorisation as high-risk AI, as there are considerable risks to the fundamental rights of the persons affected.

(b) Personnel decisions are not their objective

However, e-discovery tools are often unlikely to be categorised as high-risk AI systems within the meaning of Art. 6 (2) in conjunction with Annex III No. 4 (b) of the AI Act. There is much to suggest that their function and purpose differ fundamentally from the use cases listed in Annex III No. 4 of the AI Act. While the focus there is on the assessment of individual characteristics and attributes of employees, e-discovery tools are aimed at the factual analysis of data in order to uncover potentially unlawful behaviour.

The purpose of the system is decisive for the categorisation. According to Art. 3 No. 12 of the AI Act, this is decisive for the qualification of an AI system and is determined by the respective provider. Providers of e-discovery tools typically describe their systems as being used to analyse, structure and categorise data - but not to support or bring about personnel decisions.

In an antitrust investigation, for example, an e-discovery tool can be used to search e-mails for terms such as "price", "joint" or "fix" in order to identify indications of illegal agreements. The focus here is on the objective clarification of the facts - not the assessment of individual characteristics or features of employees.

Even if the results of an internal investigation can have consequences under labour law, the link between the AI system and the personnel decision is only indirect. The responsibility for labour law measures always lies with the management, which carries out normative and legal assessments.

(c) Not high-risk AI when there is limited influence on human decision-making

Art. 6 (3) of the AI Act provides for additional exemptions from the high-risk classification of an e-discovery tool if the AI system only performs preparatory tasks or identifies deviations in decision patterns that are subsequently subject to human review. The aim of the Act is to exempt systems from the catalogue of obligations for high-risk AI systems if they only perform supporting or preparatory tasks and do not carry out any independent substantive assessments, because their use then only poses low risks.

This exception is therefore of particular relevance for e-discovery tools whose purpose is limited to supporting investigations without making normative or legal judgements themselves. They help to increase efficiency without compromising the autonomy of human decision-making.

Typical application examples for e-discovery tools that fall under Art. 6 (3) of the AI Act include data filtering based on predefined criteria, such as with the help of search term lists. This also includes tools that recognise and mark communication patterns according to certain parameters, such as unusually frequent contacts between certain employees. Applications that collate and structure data from various sources to make it clearer and easier for human reviewers to analyse without making their own assessments or conclusions are also included.
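By way of illustration only, the following simplified Python sketch shows what such a purely preparatory filtering step based on a search term list could look like: documents are merely matched against predefined terms and flagged for subsequent human review, without any evaluation of individual employees. The terms, data and function names are hypothetical and are not taken from any specific e-discovery tool.

```python
# Illustrative sketch only - not based on any specific e-discovery product.
# A purely preparatory step: documents are matched against a predefined
# search term list and flagged for human review; the code makes no
# assessment of individual employees and draws no legal conclusions.

from dataclasses import dataclass

# Hypothetical search term list, e.g. for an antitrust investigation
SEARCH_TERMS = ["price", "joint", "fix"]

@dataclass
class Document:
    doc_id: str
    custodian: str
    text: str

def flag_for_review(documents: list[Document], terms: list[str]) -> list[dict]:
    """Return only those documents containing at least one search term,
    together with the matching terms, for subsequent human review."""
    flagged = []
    for doc in documents:
        hits = [term for term in terms if term.lower() in doc.text.lower()]
        if hits:
            flagged.append({"doc_id": doc.doc_id, "hits": hits})
    return flagged

if __name__ == "__main__":
    sample = [
        Document("001", "a.smith", "We should fix the price jointly before the tender."),
        Document("002", "b.jones", "Agenda for the quarterly team meeting."),
    ]
    print(flag_for_review(sample, SEARCH_TERMS))
    # -> [{'doc_id': '001', 'hits': ['price', 'joint', 'fix']}]
```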

5. Training obligations when using AI in internal investigations

Even if e-discovery tools are often used outside the scope of high-risk AI systems, their users nevertheless remain subject to obligations, in particular training obligations.

Art. 4 of the AI Act obliges operators of AI systems to take measures to ensure that their staff and other persons involved in the operation and use of AI systems on their behalf have a sufficient level of AI literacy. The aim of AI literacy is to have skills in the use of AI, knowledge of the technology and an understanding of the respective use case. Before AI systems are used in internal investigations, the employees/users involved must therefore be comprehensively trained.

6. Data protection and labour law

Furthermore, aspects of labour law and data protection law are of central importance when using AI in internal investigations. For example, the co-determination rights of the works council under Sections 87 (1) No. 1, No. 6, 94 and 80 (2) of the German Shop Constitution Act (Betriebsverfassungsgesetz, BetrVG) may play a role, particularly when technical equipment is used to monitor behaviour. Under data protection law, it must be ensured in particular that the processing of personal data in the context of e-discovery measures complies with the requirements of Section 26 (1) sentence 1 of the German Federal Data Protection Act (Bundesdatenschutzgesetz, BDSG) and Art. 6 (1) (f) GDPR and, if applicable, Art. 9 GDPR, and that the information obligations under Art. 12 et seq. GDPR are complied with.

7. Outlook and recommendations for action

The continuous development of AI systems means that the technologies used in internal investigations are also becoming increasingly powerful and at the same time more intrusive. AI systems will increasingly be able to carry out more complex analyses and gain deeper insights into data. For operators of such systems, however, this also means that the effort required to justify why a specific AI system should not be categorised as high-risk AI and therefore should not be subject to the strict operator obligations under Art. 26 of the AI Act will increase.

Operators are therefore strongly advised to classify the AI systems they use at an early stage, monitor them continuously and train their staff. This includes checking the intended purpose of the system as well as regularly evaluating its actual use and potential risks. In order to fulfil these requirements, companies should take the following measures promptly:

  • Review the current situation: Identify whether AI systems are being used in the company and whether they might be categorised as high-risk.
  • Ensure compliance: Develop AI governance that is right for the company.
  • Carry out training courses: Ensure that employees have the necessary AI literacy.
  • Promote cooperation: If necessary, involve works councils, data protection officers and specialist departments at an early stage.

Conclusion

The integration of AI into internal investigations offers significant opportunities, particularly in terms of efficiency, accuracy and cost reduction. At the same time, its use requires a deep understanding of the technological possibilities and limitations as well as the legal framework, in particular the AI Act. The successful integration of AI into internal investigations also requires a comprehensive understanding of the framework conditions under labour law and data protection law. With the increasing importance of AI, the scope of AI governance and compliance measures will therefore also have to evolve. Companies should start implementing the AI Act without delay, as the general training obligations under Art. 4 of the AI Act and the provisions on prohibited AI practices in Art. 5 of the AI Act already apply.

Dr. Daniel Dohrn & Dr. Axel Grätz


5. The new EU Data Act: Overview and status of its implementation in German companies

From 12 September 2025, the EU Data Act (Regulation (EU) 2023/2854) will be directly applicable law throughout the EU. It is high time for companies to check whether they are affected by the new provisions of the Data Act and which requirements they need to implement.

The Data Act is a central component of the EU data strategy and is intended to strengthen the European data economy and promote innovation. It contains a "colourful bouquet" of different areas of regulation. These include, for example, access to user-generated data from connected devices, easier switching from one cloud provider to another and provisions on unfair contractual terms relating to the use of data.

This first part of our series of articles on the Data Act provides an overview of its various areas of regulation.

Right of access to data (Chapters II and III DA)

The Data Act obliges providers of connected products and connected services to make the generated data directly accessible to users on the connected device, insofar as this is technically feasible (Art. 3 DA), e.g. through a download function. Otherwise, the provider must make the data accessible to the user himself or to a third party designated by the user upon request (Art. 4, 5 DA). Connected products include, for example, connected cars, connected medical devices, smart watches and smart household appliances. Connected services include digital applications that interact with these products, such as apps for controlling smart home systems.

This regulation enables after-market services and other innovative or customised services related to the connected product or service. For example, insurance companies can use the data generated to design customised, risk-based premium models.

However, data access rights do not apply without restriction. The data holder may refuse access if this is necessary to protect business secrets (Art. 4 (6), 5 (9) DA) or to comply with the requirements of the General Data Protection Regulation (GDPR) (Art. 4 (12), 5 (13) DA). In addition, third parties are prohibited from using the data to develop a product that competes with the connected product.

Prohibition of unfair contractual terms (Chapter IV DA)

If a provider makes data accessible to a third party at the user's request, it must conclude a contract with the third party regarding the further use of the data. The Data Act - similar to the German law governing general terms and conditions - provides for a review of the content of the clauses of such data utilisation contracts. Insofar as they are not negotiated but are imposed by a contracting party, they must not be unfair, i.e. grossly deviate from fair business practice or violate good faith. Unfair terms are not legally binding on the contracting party affected (Art. 13 (1) DA).

Provision of data to public authorities (Chapter V DA)

In addition to providing data to users and third parties, data holders also have to provide data to certain public sector bodies and EU institutions upon request (Art. 14 et seq. DA). This requires an exceptional need to use the data, e.g. to fulfil government tasks or due to a public emergency. In exceptional situations such as the COVID pandemic, certain data may be of relevance to public authorities in order to take appropriate measures.

"Cloud switching" (Chapter VI DA)

Providers of cloud and other data processing services (e.g. IaaS, PaaS, SaaS, DaaS, edge services) must enable their B2B and B2C customers to easily switch to other providers or to an on-premise solution in accordance with the Data Act (Art. 23 et seq. DA). In particular, providers are obliged to provide the necessary support services for a switch. This includes, for example, the export and transfer of data and digital assets. Customers' rights must be set out in a written contract with the provider. From 12 January 2027, providers must offer these support services to customers free of charge (Art. 29 (1) DA). Providers must also comply with technical interoperability requirements.

International data transfers (Chapter VII DA)

In line with the GDPR, the Data Act regulates the transfer of data to third countries. Special requirements apply if a foreign court or authority has issued a decision ordering data to be transferred or made accessible (Art. 32 DA).

Data rooms and smart contracts (Chapter VIII DA)

In future, special interoperability requirements will apply to the use of so-called data rooms. Interoperability refers to the ability of multiple data rooms to exchange and share data. Participants in such data rooms must fulfil certain interoperability requirements and provide various information, e.g. on data set content, data quality, data formats, APIs, etc.

Special requirements also apply to the use of so-called smart contracts if these are used for the automated execution of data sharing agreements, for example (Art. 36 DA). They must be robustly designed to prevent functional errors and manipulation by third parties. In future, providers of smart contracts will be obliged to demonstrate compliance with the new requirements by means of an EU declaration of conformity.

National law implementing the Data Act

The implementation of the Data Act in Germany requires a national implementing law. By the date of application, the German legislator must create the sanctions regime provided for in the Data Act and appoint a supervisory authority to enforce the law (Art. 37 (1) sentence 1 DA). According to the draft bill of the German Act Implementing the Data Act (Referentenentwurf des Data Act-Durchführungsgesetzes, DADG-E), the Federal Network Agency (Bundesnetzagentur, BNetzA) shall act as the central supervisory authority for the enforcement of the Data Act (Section 2 (1) DADG-E). In addition, according to the Data Act, the national data protection authorities responsible for the GDPR shall also monitor compliance with the Data Act in the area of personal data (Art. 37 (1) sentence 1 DA). In Germany, this task will be assumed by the Federal Commissioner for Data Protection and Freedom of Information (Bundesbeauftragte für den Datenschutz und die Informationsfreiheit, BfDI) (Section 3 DADG-E).

Implementation in German companies

The Data Act presents companies with considerable challenges. Companies affected by it will have to adapt their data architecture and, in some cases, their products, services and business processes in order to fulfil the new requirements. However, many German companies are still in the relatively early stages of implementation. According to a recent study by the industry association Bitkom, less than 100 days before the Data Act becomes applicable, only 1% of companies had fully implemented its requirements. A further 4% have largely or partially implemented them. At the same time, the new regulations also offer opportunities - particularly with regard to innovative data-related services or business models. For example, 43% of companies are already planning to actively offer their data on data markets in the future. This shows a growing awareness of the potential of the Data Act.

Companies that are affected by the new regulations or want to utilise the opportunities for innovative services and business models should (continue to) drive forward the implementation of the Data Act and closely monitor the further development of the national implementing law.

We will provide more in-depth insights into the various regulatory areas of the Data Act in further articles in this series, in which we will analyse the most important areas of regulation in detail and show companies which implementation measures need to be taken and what needs to be considered.

Valentino Halim 


6. The German Accessibility Act - more accessibility on the internet from the end of June 2025 

On 28 June 2025, the German Accessibility Act (Barrierefreiheitsstärkungsgesetz, "BFSG") came into force in Germany. This law is a significant step towards greater accessibility and inclusion in our society. The aim of the BFSG is to remove barriers for people with disabilities and enable them to participate in society on an equal footing. The BFSG implements EU Directive (EU) 2019/882 on accessibility requirements for products and services (European Accessibility Act) and obliges companies to make their products and services accessible. It particularly addresses e-commerce platforms and online shops, but also other digital offers such as websites, apps and e-books. The Act defines general accessibility requirements and also regulates specific aspects such as product labelling, market surveillance, administrative procedures and provisions on fines. The BFSG is supplemented by the Ordinance to the Accessibility Act (Verordnung zum Barrierefreiheitsstärkungsgesetz, BFSGV) of 22 June 2022, which sets out specific accessibility requirements for the individual products and services covered by the Act.

The BFSG applies to products that are brought onto the market after 28 June 2025 and to services that are provided after this date. However, there are exemptions for small and medium-sized enterprises (SMEs) in order to minimise the economic burden.

If they have not already done so, companies now urgently need to check whether they fall within the scope of the BFSG. If necessary, they must check the accessibility of their products and services and adjust them accordingly. Compliance with these legal requirements also offers the opportunity to tap into new target groups and sustainably increase customer satisfaction.

The end of the EU Commission's online dispute resolution platform

As it has no longer been possible to submit new complaints to the EU online dispute resolution platform (“ODR platform”) since 2 March 2025, it is being finally shut down on 20 July 2025. The basis for this is EU Regulation (EU) 2024/3228, which repeals the previous EU Regulation (EU) 524/2013. The ODR platform was originally created as a simple mediation tool for disputes between consumers and online traders, but has remained largely unused in practice. Traders who refer to the ODR platform in their legal notice, their general terms and conditions or in other places (e.g. in e-mail signatures) will have to remove these references as of 20 July 2025 if they are to avoid warnings for giving consumers misleading information. In future, online traders will therefore be required to inform their consumers about national dispute resolution bodies or other dispute resolution mechanisms.

Tobias Kollakowski


7. Oppenhoff’s DPA checker

We develop innovative legal tech solutions to efficiently manage and accelerate standardised processes. This enables us to provide our clients with answers to recurring questions and situations quickly and with minimal effort. We would be pleased to work with you to develop customised solutions that are tailored precisely to your specific needs. In addition to applications for the intelligent creation of legal documents such as contracts, resolutions or forms, we also develop interactive applications that enable you to make an initial assessment of legal issues.

Our latest product is our so-called DPA checker, which we have developed to make the examination of data processing agreements (DPA) more efficient and precise. The AI-supported preliminary check of DPAs enables faster processing by automatically extracting and examining key contract content pursuant to the GDPR.

You will receive the results of the check in the form of a structured report that clearly documents the results and provides clear recommendations for action. The DPA checker also recognises unexpected clauses, such as limitations of liability or provisions on the reimbursement of costs. This not only speeds up the check, but also reduces errors and ensures standardised documentation of the results. The DPA checker is browser-based and data protection-compliant. If you wish, we would be pleased to adapt the check to your individual requirements.

Feel free to contact us!

[email protected]

Rocco Mondello

Dr. Daniel Dohrn

Partner, Rechtsanwalt

Konrad-Adenauer-Ufer 23
50668 Cologne
T +49 221 2091 441
M +49 172 1479758

Dr. Axel Grätz

Associate, Rechtsanwalt

OpernTurm
Bockenheimer Landstraße 2-4
60306 Frankfurt am Main
T +49 69 707968 243
M +49 170 929 593 6

Valentino Halim

Junior Partner, Rechtsanwalt

OpernTurm
Bockenheimer Landstraße 2-4
60306 Frankfurt am Main
T +49 69 707968 161
M +49 171 5379477

Dr. Marc Hilber
LL.M. (Illinois)

Partner, Rechtsanwalt

Konrad-Adenauer-Ufer 23
50668 Cologne
T +49 221 2091 612
M +49 172 3808 396

Tobias Kollakowski
LL.M. (Köln/Paris 1)

Junior Partner, Rechtsanwalt, Legal Tech Officer

Konrad-Adenauer-Ufer 23
50668 Cologne
T +49 221 2091 423
M +49 173 8851 216

Christian Saßenbach
LL.M. (Norwich), CIPP/E

Junior Partner, Rechtsanwalt

Konrad-Adenauer-Ufer 23
50668 Cologne
T +49 221 2091 115
M +49 151 1765 2240

Annabelle Marceau

Junior Partner, Rechtsanwältin, Specialized Attorney for Employment Law

Konrad-Adenauer-Ufer 23
50668 Cologne
T +49 221 2091 347
M +49 172 4610 760

Rocco Mondello
LL.M. (Köln/Florenz)

Associate, Business Lawyer, Legal Tech Advisor

Konrad-Adenauer-Ufer 23
50668 Cologne
T +49 221 2091 370
M +49 151 53784 295
