Focus IT&C – 3rd quarter 2025

The regulatory landscape remains highly dynamic. In this issue, we provide a concise overview of current developments in IT law and data protection – from the implementation of NIS 2 and the AI Act, to the new EDPB guidelines on the interaction between the DSA and the GDPR, to the latest General Court ruling on the EU-US Data Privacy Framework. Find out what companies need to know and consider now.

1. NIS 2 Implementation Act in parliament – will it still be implemented this year?

2. Implementation of the AI Act in Germany – draft bill of the Implementation Act published

3. Increased liability for AI systems: an outlook

4. Data transfers to the USA still permissible following General Court ruling

5. Guidelines on the interaction between the DSA and the GDPR

1. NIS 2 Implementation Act in parliament – will it still be implemented this year?

The draft NIS 2 Implementation Act (NIS2UmsuCG) has now reached parliament. Following initial deliberations on 11 September 2025, the draft was referred to the committees for further consultation. The Bundesrat has also published a statement on the draft bill. The NIS 2 Implementation Act is based on the German government's draft of 30 July 2025 (we have reported on previous versions [reference]). The requirements of the NIS 2 Directive are largely reflected in the revised German Act on the Federal Office for Information Security (draft of the Gesetz über das Bundesamt für Sicherheit in der Informationstechnik, BSIG-E). Existing industry regulations will continue to be governed by the respective "usual" laws, such as the German Telecommunications Act (Telekommunikationsgesetz, TKG) or the German Energy Industry Act (Energiewirtschaftsgesetz, EnWG). The aim of the implementation is to create uniform minimum standards for cyber security across Europe, thereby increasing the level of protection for critical infrastructures and for important and essential entities.

1.1 New exemption from the scope of application retained

The revised exemption from the scope of application in Section 28(3) BSIG-E is being retained (we reported on this on 14 July 2025 [reference]). According to the new version, companies that carry out a regulated activity are nevertheless not covered by the BSIG-E if this activity is classified as "negligible". It remains unclear when this will apply in individual cases. However, according to the explanatory memorandum to the law, the following considerations are relevant:

"When classifying an entity as one of the types of entities listed in Annexes 1 and 2, business activities that are negligible in relation to the entity’s overall business activities can be disregarded. This prevents a minor secondary activity from being inappropriately identified as an important or essential entity in individual cases. Possible indicators for this assessment may include the number of employees working in this area, the turnover generated by this business activity or the balance sheet total for this area. An indication that the business activity is not negligible could be its mention in a partnership agreement, articles of association or a comparable founding document of the entity. The decisive factor here, taking into account all relevant indicators, is the overall picture of the business activity in question in relation to the entity's overall business activity."

Against this background, the Bundesrat emphasised in its statement (on Section 28(5) BSIG-E) that the term "negligible" should be understood more narrowly than a classification as a mere secondary activity. In this respect, it remains to be seen whether the legislator will include further clarifications in the explanatory memorandum during the legislative process.

1.2 Key terms are not defined

In addition to the lack of clarity as to when an activity can be considered negligible, the BSIG-E still contains other ambiguities.

Key terms such as "manufacturer" (Hersteller) and "provider" (Anbieter) are not legally defined in either the BSIG-E or the NIS 2 Directive. There is a risk of a considerably fragmented application of the law if these terms are not interpreted uniformly across the EU.

This also means that determining whether a company is affected requires careful examination and will often necessitate external advice.

1.3 Amount of fines remains unchanged

There have also been no changes to the fines specified in the BSIG-E. A violation of the obligations under the BSIG-E can be punished with a fine of up to EUR 10 million (Section 65(5) BSIG-E). In addition, for essential entities with a total turnover of more than EUR 500 million, the fine can rise to up to 2% of their total turnover (Section 65(6) BSIG-E).
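To illustrate the interplay of the two ceilings, here is a minimal sketch in Python. It assumes a simplified reading in which Section 65(5) BSIG-E sets a general ceiling of EUR 10 million and Section 65(6) BSIG-E raises that ceiling to 2% of total turnover for essential entities with a turnover above EUR 500 million; the draft's differentiated catalogue of offences is deliberately left aside.

```python
def max_fine_eur(total_turnover_eur: float, is_essential: bool) -> float:
    """Illustrative sketch of the fine ceilings in Section 65(5) and (6) BSIG-E.

    Simplified assumption: the turnover-based ceiling of 2% only applies to
    essential entities whose total turnover exceeds EUR 500 million.
    """
    base_cap = 10_000_000  # Section 65(5) BSIG-E: up to EUR 10 million
    if is_essential and total_turnover_eur > 500_000_000:
        # Section 65(6) BSIG-E: the ceiling rises to 2% of total turnover
        return max(base_cap, 0.02 * total_turnover_eur)
    return base_cap


# Example: an essential entity with EUR 800 million in total turnover faces
# a ceiling of EUR 16 million (2% of 800 million) rather than EUR 10 million.
print(max_fine_eur(800_000_000, is_essential=True))  # 16000000.0
```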

1.4 Conclusion and outlook

The new law is expected to come into force by the end of 2025/beginning of 2026.

The entry into force of the new BSIG is therefore in sight. Companies would be well advised to check now whether they are affected by the new regulations and to take appropriate measures, as the BSIG-E provides for transitional periods only in isolated cases.

Christian Saßenbach


2. Implementation of the AI Act in Germany – draft bill of the Implementation Act published

The Artificial Intelligence Regulation (AI Act) came into force in August 2024 and is becoming applicable in stages. General provisions, such as those on prohibited AI practices and AI literacy, have already applied since February 2025. Since 2 August 2025, the provisions on general-purpose AI (GPAI) models and on sanctions such as fines have also applied.

2.1 Background

While the AI Act sets out the substantive rules for the use of AI systems within the EU, it is up to the Member States to clarify key issues relating to the establishment of supervisory authorities. By 2 August 2025, Member States were required to establish or designate a market surveillance authority and a notifying authority. In addition, they must enact national rules on sanctions such as fines and implement measures to promote innovation in the field of artificial intelligence.

In Germany, a law implementing the AI Act (DurchführungsG KI-VO) will be enacted for this purpose. On 11 September 2025, the Federal Ministry for Digital Transformation and Government Modernisation (Bundesministerium für Digitales und Staatsmodernisierung, BMDS) presented a draft bill that will serve as the basis for consultations with the federal states and associations. The DurchführungsG KI-VO in the version of the draft bill regulates the following key aspects:

Central role of the Federal Network Agency

The Federal Network Agency (Bundesnetzagentur, BNetzA) assumes central tasks as the market surveillance authority and notifying authority:

  • Market surveillance: The BNetzA is responsible for monitoring the AI Act. The market surveillance measures concern high-risk AI systems in accordance with Article 6 of the AI Act. Among other things, the BNetzA carries out random checks on AI systems, reviews technical documentation such as declarations of conformity and monitors tests on high-risk AI systems outside of test labs.
  • Notification: The BNetzA is also the notifying authority within the framework of the EU's New Legislative Framework (NLF). According to the AI Act, notification means the official communication to the EU Commission and the other EU Member States to the effect that a conformity assessment body has been assessed and designated as such. Accordingly, the BNetzA designates the bodies that verify the conformity of AI systems with the AI Act. The rules for the notification of conformity assessment bodies for high-risk AI systems (according to Annexes I and III No. 1 of the AI Act) have been in force since 2 August 2025.
  • Central point of contact: The BNetzA acts as the point of contact for the EU Commission, the Member States and the public. It is also the central complaints office for possible violations of the AI Act. Any natural or legal person may lodge a complaint with a market surveillance authority if they have reason to believe that there has been a violation of the provisions of the AI Act (Art. 85 AI Act).

2.2 Hybrid approach to the authority structure

In addition to the BNetzA, the draft DurchführungsG KI-VO names other market surveillance authorities for specific sectors where other market surveillance and supervisory structures exist. These authorities also take on the notification tasks in the respective sectors, insofar as this is stipulated in the draft.

  • Harmonised areas: The authorities responsible for the provisions set out in Annex I, Section A of the AI Act also take on the AI-specific tasks regarding market surveillance and notification. This applies, for example, to medical devices and machinery.
  • Financial services: The Federal Financial Supervisory Authority (Bundesanstalt für Finanzdienstleistungsaufsicht, BaFin) is the market surveillance authority in cases where an AI system brought onto the market, put into service or used is directly related to the provision of financial services by a supervised financial undertaking.
  • Cyber resilience: For high-risk AI systems listed in Annex III No. 1 of the AI Act, the market surveillance authority designated in accordance with the Cyber Resilience Act will be responsible. Until such an authority has been designated, the Federal Office for Information Security (Bundesamt für Sicherheit in der Informationstechnik, BSI) will perform these tasks. The BSI also acts as the notifying authority in this area.

The hybrid approach avoids the duplication of structures. Unless other (sector-specific) supervisory structures exist, the BNetzA remains the competent market surveillance authority.

2.3 Data protection authorities not the responsible bodies

During the legislative process, there was heated debate about which authority should be responsible for market surveillance and notification. Data protection authorities in particular were at the centre of the debate. The draft bill emphasises that the most uniform application possible of the AI Act is crucial for legal certainty, the uniform cross-border application of the law and the promotion of innovation. Distributing responsibilities among 17 data protection authorities would run counter to this goal. There is also a risk that data protection authorities would focus too heavily on data protection issues, which could hinder the optimal promotion of innovation. Finally, there is a shortage of AI specialists. It would be uneconomical for different supervisory authorities to compete for the available resources.

2.4 Establishment of a coordination and competence centre

The BNetzA will establish a coordination and competence centre (Koordinierungs- und Kompetenzzentrum, KoKIVO) for the AI Act. This centre will:

  • support and coordinate the work of the competent authorities,
  • ensure a uniform horizontal interpretation of the AI Act, and
  • promote the development of codes of conduct for the voluntary application of certain requirements to all AI systems (Art. 95(2) AI Act).

The supervisory authority's AI expertise will be pooled centrally at the KoKIVO and made available to other authorities as needed.

2.5 Promotion of innovation

The BNetzA will promote innovation through various measures:

  • providing information and guidance on the implementation of the AI Act, in particular for SMEs and start-ups,
  • carrying out awareness-raising and training measures,
  • promoting the exchange of knowledge and networking among the relevant parties, and
  • establishing an AI real-world test lab that enables testing under real conditions and offers preferential access to SMEs and start-ups.

2.6 Sanctions

In the event of violations of the AI Act, the competent market surveillance authority may impose fines (Art. 99 AI Act). The fine proceedings are generally governed by the German Administrative Offences Act (Gesetz über Ordnungswidrigkeiten, OWiG). While Sections 17 and 31 OWiG normally regulate the maximum amount of fines and the statute of limitations, these provisions do not apply to fines under the AI Act, as the AI Act conclusively regulates the fine framework itself. As is already the case with GDPR violations, no fines can be imposed on public bodies such as authorities.

2.7 Outlook

The draft bill of the DurchführungsG KI-VO is now undergoing consultation with the federal states and associations, meaning that changes to the details are still possible. The results of the consultation and the final implementation act remain to be seen. Regardless of this, it is already clear that the BNetzA will play a central role in monitoring the AI Act. The planned one-stop shop of the supervisory authorities will undoubtedly be advantageous for companies. At the same time, specialised authorities will remain responsible in certain sectors, which makes particular sense for complex technical and industry-specific issues. A coordinated exchange between the authorities via the BNetzA will fulfil an important function here. We will keep you informed of further developments.

Valentino Halim


3. Increased liability for AI systems: an outlook

The increasing prevalence of AI systems brings with it not only technological advances but also new legal challenges. Although the AI Regulation ("AI Act") does not create any new civil liability for developers or users of AI systems, it does impose comprehensive requirements on the development and use of AI systems, especially high-risk systems, thereby shaping public expectations and the standard of care required when distributing and using AI. The rules for high-risk AI systems under the AI Act predominantly apply from August 2026; by August 2027 at the latest, they are to apply without exception. In addition, the liability regime for AI will be expanded from 9 December 2026 by the new Product Liability Directive, which extends the definition of a product under product liability law to software, including AI systems.

3.1 Impact of the AI Act on the concept of a defect

The AI Act imposes a strict set of obligations on developers of high-risk AI systems in Articles 8 et seq. Articles 8 to 15 of the AI Act contain numerous requirements that must be observed when developing such systems. For example, a high-risk AI system must have a logging function (Article 12 AI Act) and a suitable human-machine interface (Article 14 AI Act). In addition, high-risk AI systems must be sufficiently robust to withstand so-called poisoning attacks, i.e. the manipulation of training data (Article 15 AI Act).

Operators should obtain contractual assurances from developers of high-risk AI systems that these requirements are met. If the requirements are then not met and the contract clearly provides for them, the high-risk AI system does not comply with the agreed specifications, and the user is entitled to warranty rights.

However, the requirements of the AI Act also affect the liability of AI system developers independently of contractual provisions: an AI system that does not meet the product safety requirements of the AI Act often may not be used at all and would consequently be defective.

3.2 Product liability

The manufacturer of a product is liable under Section 1(1) sentence 1 of the German Product Liability Act (Produkthaftungsgesetz, ProdHaftG) if a person is killed or injured or an item intended for private use is damaged due to a defect in their product.

Under current law, it is controversial whether and to what extent software – and thus also AI systems – falls under the definition of a product pursuant to Section 2 ProdHaftG. Case law has affirmed the product quality of software only where it has physical substance, for example by being embodied on a data carrier. The new Product Liability Directive now explicitly extends the definition of a product in Article 4(1) to include software and thus AI systems. This covers both AI systems embedded in other products, such as autonomous vehicles, and stand-alone AI systems. Only open-source software is excluded from the scope of application of the Product Liability Directive.

The starting point is the liability of the software manufacturer for damage caused by AI errors. However, the importer of AI systems is also liable if the software manufacturer is based outside the EU (Article 8(1)(c)(i) of the Product Liability Directive). This entails liability risks for EU distribution partners of manufacturers of AI systems in third countries.

The Directive also makes it easier for those harmed by AI systems to provide evidence and present their case: under Art. 9, the manufacturer must disclose evidence in its possession if the injured party has substantiated their claim for compensation. If the manufacturer fails to comply with this obligation, the defectiveness of its product is presumed in the injured party’s favour (Article 10(2)(a) of the Product Liability Directive).

3.3 Need for action

In view of the requirements of the AI Act for high-risk AI systems, which will apply from August 2026, and the Product Liability Directive, which must be transposed into national law by the Member States by December 2026, companies that develop, distribute or use AI systems should establish effective AI governance now. Experience shows that close cooperation between the technical departments and the legal department is particularly crucial here in order to ensure continuous legal support throughout the entire development process of AI systems – from conception and development to market readiness and distribution. Only with clear processes and responsibilities can companies ensure compliance with all regulatory requirements, in particular those for high-risk AI systems pursuant to Articles 8 et seq. AI Act. Otherwise, companies face not only contractual warranty and liability claims but also significant risks under product liability law.

Dr. Axel Grätz


4. Data transfers to the USA still permissible following General Court ruling

The General Court of the European Union dismissed an action for annulment brought by Philippe Latombe, a member of the French National Assembly, in its ruling of 3 September 2025 (Case T-553/23 – Latombe v Commission). The plaintiff sought a declaration that the EU-US Data Privacy Framework ("DPF") is invalid. The DPF currently allows European companies to transfer personal data from the EU and the EEA to the US without major obstacles. The General Court did not uphold the claim, reasoning, among other things, that the redress procedures introduced in the US under the DPF guarantee sufficient impartiality and independence for affected EU citizens. The General Court has thus prevented a return to the considerable legal uncertainty surrounding data transfers to the US that was familiar in the past. However, it is already foreseeable that the European Court of Justice (ECJ) will also have to address the validity of the DPF again – with an uncertain outcome.

4.1 Background

With its "Schrems II" ruling of 16 July 2020, the ECJ declared transfers of personal data to the US on the basis of the EU-US Privacy Shield inadmissible, having already invalidated the Safe Harbour framework with its "Schrems I" ruling of 6 October 2015. The abolition of the Privacy Shield made data transfers to the US significantly more difficult. Since then, companies have had to carry out a Transfer Impact Assessment (TIA) before each transfer. This involves assessing both legal and practical risks in the recipient country, and companies that transfer data may have to take additional technical protective measures.

On 10 July 2023, a new adequacy decision by the EU Commission for data transfers to the US came into force with the DPF. Previously, the US administration of the time had signed an executive order changing the US legal framework for intelligence services. This limits access to personal data by US intelligence services to what is necessary and proportionate for national security purposes, with the actions of these services being monitored by the Privacy and Civil Liberties Oversight Board (PCLOB) under the executive order. In addition, a new redress mechanism was created that allows EU citizens to approach an independent and impartial complaints body, the Data Protection Review Court (DPRC), in the event of data protection violations by US intelligence services.

4.2 The General Court ruling

In its ruling of 3 September 2025, the General Court dismissed Mr. Latombe's action against the DPF in its entirety. Latombe had argued that the framework was incompatible with European fundamental rights and the General Data Protection Regulation (GDPR). In particular, he criticised what he alleged to be the DPRC's lack of impartiality and independence, as well as the fact that the DPF effectively enabled the mass surveillance of EU citizens.

However, the General Court found no violation of the right to effective legal protection. It ruled that the DPRC established in the US was a sufficiently independent and impartial supervisory body – even though the DPRC had been created by executive order rather than by law.

The General Court was also unconvinced by the plaintiff's argument that the DPF ultimately enabled mass data collection by US intelligence services. Although US authorities could access the personal data of EU citizens without prior judicial authorisation, data collection was limited by precise legal requirements, restricted to legitimate purposes and subject to ex post monitoring by sufficiently independent bodies such as the DPRC.

4.3 Conclusion and outlook

It is good news for European companies that the DPF remains in force, as it provides a legally secure basis for the transfer of personal data to the US – an issue that affects almost all European companies. At the same time, the action underlying the General Court's ruling is unlikely to be the last attempt to overturn the DPF. Firstly, it is possible that Mr. Latombe will appeal against the General Court's decision to the ECJ. In addition, the NGO "none of your business" (NOYB), led by Max Schrems, the initiator of the two Schrems judgments of the ECJ, has already announced its intention to take action against the DPF as well. Depending on the type of proceedings chosen, the chances of success could indeed be higher than in Mr. Latombe's action, as in his action for annulment the General Court was only permitted to take into account the situation as at the date of the adoption of the adequacy decision underlying the DPF. Political decisions by the Trump administration, in particular the dismissal of several members of the PCLOB, were therefore not taken into account. We therefore continue to advise companies to closely monitor further developments.

Marco Degginger


5. Guidelines on the interaction between the DSA and the GDPR

On 12 September 2025, the European Data Protection Board (EDPB) published guidelines on the interaction between the Digital Services Act (DSA) and the General Data Protection Regulation (GDPR). The guidelines are intended to promote a uniform and consistent interpretation and application of the DSA and the GDPR, as some provisions of the DSA cover the processing of personal data by intermediary services and refer to concepts and definitions of the GDPR. The guidelines focus in particular on the data protection requirements that apply in the fight against illegal content, in the implementation of transparency obligations, in advertising and profiling activities, and in measures to protect minors.

In its role as an independent EU authority whose objective is to ensure the uniform application of data protection rules, the EDPB naturally takes a data protection-friendly position, which is also reflected in the general tone of the guidelines. Although the guidelines are not legally binding, they serve as an authoritative guide for practitioners, companies and authorities. They offer clear recommendations for action to reconcile the requirements of the DSA and the GDPR and minimise legal risks.

5.1 Relationship of the norms

To begin with, the EDPB makes it clear that the DSA is not to be understood as a lex specialis to the general provisions on the processing of personal data under the GDPR. Rather, both EU legal acts rank equally in the hierarchy of norms. Providers of intermediary services under the DSA must therefore also comply with the obligations of the GDPR when implementing the requirements of the DSA, and must implement the requirements of both legal acts in a compatible and coherent manner.

5.2 Processing of personal data under the DSA

Provisions of the DSA can serve as a legal basis for data processing under the GDPR. This is particularly the case when the DSA imposes obligations on service providers that require the processing of personal data within the meaning of Art. 6(1)(c) GDPR. For example, Article 28 DSA may constitute such a legal basis for providers of online platforms, insofar as they take appropriate and proportionate measures to protect minors.

The processing of personal data under the DSA can also be based on Art. 6(1)(f) GDPR if it is necessary to safeguard the legitimate interests of the controller or a third party. Here, the EDPB cites, for example, data processing in the context of voluntary investigations to detect, identify and remove illegal content in accordance with Article 7 DSA.

However, the existence of a legal basis under the GDPR does not release the service provider from compliance with the general principles of the GDPR when processing data. In particular, the service provider must observe the principle of data minimisation pursuant to Article 5(1)(c) GDPR and comply with the information obligations pursuant to Articles 13 and 14 GDPR.

5.3 Transparency obligations

Both the DSA and the GDPR contain transparency provisions. In addition to the general provisions of the GDPR in Articles 12 et seq., the DSA provides for situation-related transparency obligations. For example, Article 14(1) and Article 15(1)(c) and (e) DSA contain additional transparency requirements for service providers regarding the policies, procedures, measures and tools they use for moderating content. Further transparency provisions can be found in Article 17 DSA, in the event of a restriction of the service, or in Article 26(1) DSA with respect to the presentation of advertising. Here, the transparency provisions complement each other and are not contradictory. However, the general transparency requirements of the GDPR on data processing are unlikely to be sufficient to meet the specific and situation-related transparency obligations of the DSA.

5.4 Automated decisions

Another area of tension is automated decisions (especially profiling), which are generally prohibited under Art. 22(1) GDPR. Service providers subject to the scope of the DSA often use such tools in practice – for example, in the context of investigations into illegal content (Article 7 DSA), when processing reports of illegal content (Article 16 et seq. DSA) or when using recommender systems (Article 27 DSA). In addition, Article 26 of the DSA contains separate provisions on the presentation of advertising, including a ban on the presentation of advertising based on profiling using special categories of personal data pursuant to Article 9 GDPR (cf. Article 26(3) DSA).

In Art. 22(2)(a) to (c), the GDPR provides for three exceptions under which automated decision-making is permissible, including within the scope of the DSA. In addition to the explicit consent of the data subject (lit. c), the exception for automated decisions authorised by legal provisions (lit. b) is likely to be of particular practical significance. Here, too, the permissibility of automated decision-making under the GDPR does not exempt the service provider from its obligation to comply with the specific transparency requirements of the DSA (e.g. Art. 26(1) DSA) or the general transparency obligations of the GDPR regarding automated decision-making (Art. 13(2)(f) GDPR).

5.5 Protection of minors

A key objective of the DSA is to protect minors on the internet. According to Art. 28(1) DSA, providers of online platforms accessible to minors are obliged to take appropriate and proportionate measures to ensure a high level of privacy, security and protection for minors. Article 28(3) DSA clarifies that providers are not obliged to process additional personal data to determine whether the user is a minor. In this context, the EDPB recommends that providers of online platforms should especially avoid age verification mechanisms that enable the unambiguous online identification of the user. In addition, they should not estimate, verify or permanently store the specific age or age range of the user resulting from their age verification procedure.

5.6 Conclusion

The DSA and the GDPR apply in parallel and complement each other. However, compliance with the requirements of one legal act does not automatically mean that the requirements of the other are also met. With regard to transparency and information obligations in particular, it may be necessary to provide the relevant information both from a data protection perspective and in accordance with the specific requirements of the DSA. In any event, the basic principles of the GDPR – such as lawfulness, transparency, purpose limitation, data minimisation, accuracy, storage limitation, integrity and confidentiality – must also be observed when implementing the requirements of the DSA.

The EDPB guidelines are now subject to public consultation. Interested parties have until 31 October 2025 to submit their comments.

Tobias Kollakowski


Dr. Jürgen Hartung
Partner, Rechtsanwalt
Konrad-Adenauer-Ufer 23
50668 Cologne
T +49 221 2091 643
M +49 172 6925 754

Dr. Marc Hilber, LL.M. (Illinois)
Partner, Rechtsanwalt
Konrad-Adenauer-Ufer 23
50668 Cologne
T +49 221 2091 612
M +49 172 3808 396

Marco Degginger
Junior Partner, Rechtsanwalt
Konrad-Adenauer-Ufer 23
50668 Cologne
T +49 221 2091 365
M +49 162 1313 994

Valentino Halim
Junior Partner, Rechtsanwalt
OpernTurm
Bockenheimer Landstraße 2-4
60306 Frankfurt am Main
T +49 69 707968 161
M +49 171 5379477

Tobias Kollakowski, LL.M. (Köln/Paris 1)
Junior Partner, Rechtsanwalt
Konrad-Adenauer-Ufer 23
50668 Cologne
T +49 221 2091 423
M +49 173 8851 216

Christian Saßenbach, LL.M. (Norwich), CIPP/E
Junior Partner, Rechtsanwalt
Konrad-Adenauer-Ufer 23
50668 Cologne
T +49 221 2091 115
M +49 151 1765 2240

Dr. Axel Grätz
Associate, Rechtsanwalt
OpernTurm
Bockenheimer Landstraße 2-4
60306 Frankfurt am Main
T +49 69 707968 243
M +49 170 929 593 6