Focus IT&C – 1st Quarter 2024

We have compiled important and exciting new developments and case law on IT law and data protection for you. We hope you enjoy reading it!

 

1.  Apple's reaction to the DMA - important changes for users in the EU

2. Purchasing AI solutions - how to minimise risks

3. AI Act adopted: a look at the EU Parliament's revised version

4. Regional Court of Tübingen: first decision on cyber insurance

5. New unofficial draft bill on the implementation of the NIS 2 Directive published

6. BaFin publishes updated supervisory statement on outsourcing to cloud providers

 

1. Apple's reaction to the DMA - important changes for users in the EU 

The Digital Markets Act ("DMA") is intended to ensure greater fairness and competition in the digital sector of the European single market. In particular, the regulation aims to curb the abuse of market power by large online platforms, so-called gatekeepers. Since 6 March 2024, the gatekeepers addressed in the DMA have had to comply with its provisions. In order to avoid the high penalties stipulated in the DMA, Apple has had to intensively revise its own concept for iOS, Safari and the App Store. As a result, iOS version 17.4, which contains highly relevant new features relating to the App Store, particularly for software developers in the EU, has been available to users since 5 March 2024.

1. DMA mandatory for gatekeepers from March onwards

The gatekeepers addressed by the DMA are companies that have been designated as gatekeepers by the EU Commission. According to Art. 3 DMA, their designation depends on three conditions: 

a) They must have a significant impact on the internal market; 

b) They must provide a core platform service, which is an important gateway to end users for business users; and

c) They must enjoy an entrenched and durable position with regard to their activities, or it must be foreseeable that they will attain such a position in the near future. 

Apple is such a designated gatekeeper and the App Store is a core platform service. The DMA imposes extensive behavioural obligations and provides for strict fines in the event of violations. Possible sanctions can amount to up to 10 % of the total worldwide turnover, or even up to 20 % in the event of a repeat offence.

One of the gatekeepers' obligations is not to prevent business users from offering products or services via third-party online intermediation services or via their own online distribution channels at prices or conditions that differ from those offered via the gatekeeper's online intermediation service (Art. 5 (3) DMA).

It must also be possible to install and use software applications in ways other than via the App Store provided by Apple (Art. 6 (4) DMA). Apple may therefore no longer prevent the installation and use of third-party app stores. 

2. Apple's new concept: what will change for users in the EU?

For a long time, Apple refused to allow the app stores of other providers, primarily citing security concerns. The DMA has now forced Apple to act, and the company presented its new concept in a press release on 25 January 2024. The resulting extensive changes to iOS, Safari and the App Store were implemented in an update on 5 March 2024.

Apps can now also be distributed on alternative app marketplaces, which should open up a new realm of possibilities for software developers. However, Apple is still trying to retain control, for example by requiring the authorisation of marketplace developers. This will continue to prevent traditional "sideloading", i.e. the installation of an application from any source. This gave rise to criticism, whereupon Apple announced the option of "web distribution" of apps at the beginning of March, which is to be implemented this spring. Developers will then be able to distribute apps from their own website without having to resort to a marketplace. However, here too, Apple is planning restrictive measures, such as requiring uninterrupted membership of the Apple Developer Programme for at least two years and the prior publication of an app that was installed more than one million times in the EU in the previous calendar year. In addition, Apple is planning certification for iOS apps, including those from alternative marketplaces or web distribution. 

The commissions payable to Apple will change, but will not be abolished. While these could originally reach as much as 30 %, they now amount to 10 % or 17 %. If the App Store's payment processing is also used, an additional fee of 3 % is charged. A new element of the fee structure is the so-called "core technology fee", which applies to apps from the App Store as well as to those from alternative app marketplaces or web distribution. If the threshold of one million downloads is exceeded, developers must pay Apple 50 cents for each additional first annual installation of the app via an Apple account. This fee also applies to free installations and can therefore be very dangerous for small software developers whose free app suddenly becomes a great success. Software developers from the EU can choose whether to adopt Apple's new terms and conditions or continue with the original concept; in the latter case, no "core technology fee" will be payable. Developers should therefore carefully consider which conditions make the most sense for them. Apple has also published a fee calculator for this purpose. 
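
Purely for illustration, the following sketch estimates the fees described above using only the figures mentioned in this article (the 10 %/17 % commission rates, the 3 % payment processing fee, the one-million threshold and the 50-cent charge). It is a simplified illustration, not Apple's fee calculator, and the function and parameter names are our own:

  def estimated_apple_fees(annual_first_installs, app_store_revenue,
                           reduced_commission=False, use_apple_payment=True):
      # Hypothetical helper based solely on the figures cited in this article.
      # Commission: 17 % standard or 10 % reduced; plus 3 % if the App Store's
      # payment processing is used.
      commission_rate = 0.10 if reduced_commission else 0.17
      if use_apple_payment:
          commission_rate += 0.03
      commission = app_store_revenue * commission_rate
      # Core technology fee: EUR 0.50 per first annual installation above the
      # threshold of one million installations per year.
      excess_installs = max(0, annual_first_installs - 1_000_000)
      core_technology_fee = excess_installs * 0.50
      return {"commission": commission, "core_technology_fee": core_technology_fee}

  # A free app with 3 million first annual installations in the EU would owe
  # roughly 2,000,000 x EUR 0.50 = EUR 1,000,000 in core technology fees,
  # despite generating no revenue.
  print(estimated_apple_fees(3_000_000, 0.0))

The example illustrates why the flat per-installation fee, rather than the revenue-based commission, is the decisive factor for free apps that suddenly become popular.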

However, it is not only the App Store that has undergone extensive changes. For example, other banking and wallet apps in the EU can now also use Near Field Communication ("NFC") technology. Previously, only Apple's own payment service Apple Pay could do this. It is now also possible for users to choose a default browser from a selection screen when starting the Safari web browser for the first time.

3. Is Apple acting in compliance with the DMA? What happens next?

The fact that Apple is implementing these changes out of necessity and not voluntarily is already evident from the introduction to the press release. There, the company highlights in particular the new opportunities for malware and fraud as well as data protection risks, which are intended to justify its measures for retaining control. However, many are dissatisfied with Apple's implementation of the DMA requirements: at the beginning of March, 34 companies wrote an open letter to the EU Commission in which they criticised the "core technology fee" and the inadequate implementation of "sideloading" in particular.

Apple subsequently defended its approach to achieving DMA compliance in a hearing before the EU Commission. Margrethe Vestager, EU Commissioner for Competition, also expressed concerns about the "core technology fee". She fears that the new terms and conditions could be so unattractive that there will be no incentive to opt for them and take advantage of the DMA.

With its current strategy, Apple is attempting to soften the requirements of the DMA in certain respects, but is increasingly being criticised for doing so. The EU Commission has now opened non-compliance proceedings against Apple as well as Meta and Alphabet to verify their compliance with the requirements of the DMA. We can therefore assume that Apple's implementation of the DMA is not yet complete and that further adjustments will follow.

Dr. Hanna Schmidt


2. Purchasing AI solutions - how to minimise risks

The EU's AI Act is about to be passed and is expected to be fully applicable in 2026. What questions have already arisen in anticipation of this?

1. Is it really an AI system?

Not everything that is marketed as AI is AI. The definition in Art. 3 (1) of the AI Act is based on the OECD definition, but is quite vague. The decisive factor is whether the system has a certain degree of autonomy and whether it infers how to generate results from the input it receives. Given that even an automatic windscreen wiper is "a little bit" autonomous and that practically all software is capable of generating the desired results from an input command, even these requirements are unclear. Recital 6 at least partially sheds light on the matter, clarifying, for example, that AI systems should not include systems based on rules that are defined exclusively by natural persons for the automatic execution of operations. Rule-based deterministic systems may appear, and indeed be, intelligent (and deliver unexpected results), but they are not AI systems within the meaning of the law.
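
The following simplified sketch (entirely our own illustration, not taken from the AI Act) contrasts a purely rule-based check, whose rule is defined exclusively by a natural person, with a toy system that derives its decision rule from example data and thereby points towards the element of inference in Art. 3 (1):

  def rule_based_check(transaction_amount):
      # Rule fixed entirely by a human: flag every transaction above EUR 10,000.
      # On the reading of Recital 6 described above, such a deterministic,
      # human-defined rule would not make the system an AI system.
      return transaction_amount > 10_000

  def learn_threshold(amounts, flagged):
      # Toy "learning" step: instead of a human fixing the threshold, the system
      # derives it from labelled examples - the kind of inference from input data
      # at which the definition in Art. 3 (1) is aimed.
      flagged_amounts = [a for a, f in zip(amounts, flagged) if f]
      return min(flagged_amounts) if flagged_amounts else float("inf")

  threshold = learn_threshold([50, 200, 12_000, 30_000], [False, False, True, True])
  print(rule_based_check(15_000), 15_000 > threshold)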


Due diligence for AI systems is difficult due to a lack of access to (reliable) information. The provider should therefore make a binding declaration as to whether its solution is AI (within the meaning of the law), together with an indemnification in the event that the declaration is incorrect.

2. Is it a high-risk AI system?

Whether an AI system is a high-risk AI system is in some cases easier to determine, as Art. 6 exhaustively lists specific high-risk areas. Nevertheless, various questions remain unanswered, e.g. in Annex III No. 4 (b) on HR systems: "AI intended to (i) take decisions that affect the terms and conditions of employment-related relationships, promotion and termination of employment-related contractual relationships, (ii) assign tasks on the basis of individual behaviour or personal characteristics or attributes, and (iii) monitor and evaluate the performance and conduct of individuals in such relationships." What exactly this entails remains to be seen, especially as variant (iii) exactly mirrors the wording of Section 87 No. 6 BetrVG (German Works Constitution Act), which is known to be interpreted extremely broadly, at least by the Federal Labour Court. 

If the AI system fundamentally has to be categorised as high-risk, the important exceptions in Art. 6 (3), which were added to the law at the last minute in January, need to be examined.

3. What needs to be considered in contractual agreements?

In contracts for the purchase of AI systems (general software purchasing conditions should be adapted accordingly), it should be ensured that the AI system itself fulfils the requirements of the AI Act. The following should also be regulated:

  • The provider should confirm with binding effect whether or not the system is covered by the AI Act (see above).
  • The provider should be obliged to provide updates and to inform about the required technical and organisational measures as well as any changes.
  • It should be clarified who will carry out the (online) training relating to the specific system (if the provider does not also offer this). This should provide employees with the knowledge required for using the AI (Art. 4).
  • The AI system should have in-built monitoring (or interfaces to AI monitoring software).
  • The AI system should generate logs and archive them (otherwise, it must be ensured that they are archived elsewhere).

Finally, all aspects that are relevant for the purchase of "normal" software must be taken into account.

The above statements reflect the requirements for operators of high-risk AI systems. However, these measures are also advisable for all other AI systems.


4. What else do you need to consider?

Irrespective of the AI Act, the existing legal regulations continue to apply. You should ensure that the required training and input data is available and can be processed in accordance with the GDPR. You should ensure that the use of training and input data and the generation of results does not infringe the intellectual property rights of third parties. You should assign responsibilities for monitoring by natural persons within your AI governance structures.

And last but not least: is it wise to consider these points already today? Yes, it is! Even legacy systems become subject to the AI Act in the event of significant changes. Moreover, the fundamental risks, such as a lack of transparency, uncontrollability or fears of job losses, also exist for AI systems other than high-risk AI systems.

Dr. Marc Hilber


3. AI Act adopted: a look at the EU Parliament's revised version

The EU Parliament adopted the AI Act on 13 March 2024. It now only needs to be confirmed by the Council of the European Union. The AI Act could therefore enter into force in June 2024. Many provisions have been rearranged in the EU Parliament's revised version. This makes taking a closer look at the updated text of the regulation all the more worthwhile.

The AI Act differentiates between prohibited systems with an "unacceptable risk" (see point 1), so-called high-risk systems, which form the central object of regulation of the Act (see point 2), low-risk AI systems (see point 3) and basic AI models (general purpose AI), which can be used in a variety of ways and therefore cannot easily be assigned to a single risk class (see point 4).

1. Risk class "Unacceptable risk": 

AI systems with an unacceptable risk are prohibited under Art. 5 AI Act, including those that use manipulative techniques or exploit personal weaknesses and can therefore harm people (Recital 29 AI Act). These are, for example, AI systems for social scoring.

Exceptions apply for military, defence or national security purposes as well as for scientific research and development (Art. 2 (3) and (6) AI Act).

2. Risk class "High risk": 

High-risk AI systems in accordance with Art. 6 AI Act pose risks to health, safety and fundamental rights and are listed in Annexes I and III to the AI Act. Systems pursuant to Annex I are subject to the EU product safety regulations listed there, e.g. AI in toys, medical devices or protective equipment. Annex III lists further areas that lead to a categorisation as high-risk AI systems, such as AI in HR for job application processes. According to Art. 7 AI Act, the Commission may add further high-risk systems to Annex III.

2.1 Guidelines for designing and developing high-risk AI systems

Art. 8 to Art. 15 AI Act set requirements for the design and development of high-risk AI systems: 

  1. According to Art. 9 of the AI Act, the provider must identify and assess potential risks of the system and limit them through risk management measures (risk management system) in order to reduce the residual risks to an acceptable level. When applied to AI systems for employee management, company guidelines in particular, which regulate the authorisations and use of the systems, can reduce the risks of incorrect use (see Art. 9 (5) (c) AI Act).  
  2. According to Art. 10 AI Act, training data for high-risk AI systems must fulfil certain quality requirements. The data should be relevant, sufficiently representative and as error-free and complete as possible with regard to its purpose in order to prevent bias.
  3. Art. 11 to Art. 13 AI Act deal with technical documentation and record-keeping obligations. High-risk AI systems must also be designed in such a way that they can be overseen by natural persons (Art. 14 AI Act). The oversight measures depend on the system's degree of autonomy; for fully autonomous systems, they are limited to monitoring and controlling the output and may require the system to be shut down (Art. 14 (3) and (4) AI Act).
  4. According to Art. 15 AI Act, AI systems must be developed in such a way that they have an appropriate level of robustness against environmental influences such as new usage contexts, cyberattacks and errors within the algorithms used. This especially applies to AI systems based on artificial neural networks, as they can theoretically learn and adapt continuously. They are susceptible to incorrect data sets, which can lead to undesirable behaviour.

2.2 Obligations of suppliers, importers, distributors and operators of high-risk AI systems

Art. 16 to Art. 27 AI Act oblige providers, importers, distributors and operators of high-risk AI systems:

  1. The providers and operators of AI systems are subject to the most extensive obligations. Providers develop an AI system or GPAI model and place it on the market or put it into operation under their own name or brand, Art. 3 No. 3 AI Act. Operators use AI systems on their own responsibility as part of a professional activity, Art. 3 No. 4 AI Act. Providers are therefore generally developers of AI systems, while operators use third-party systems commercially. Operators may be subject to provider obligations if they subsequently affix their name or trademark to a high-risk AI system or significantly modify the system, Art. 25 AI Act.
  2. Providers must also take immediate corrective action if a high-risk AI system does not function in compliance with the AI Act. In this case, providers may be obliged to recall and deactivate their system, Art. 20 (1) AI Act. 
  3. Similar to Art. 27 GDPR, providers of high-risk AI systems established outside the EU must appoint an authorised representative established in the Union, Art. 22 AI Act. 
  4. Art. 23 AI Act obliges importers of high-risk AI systems, in line with the dual control principle, to check the implementation of various obligations, such as the documentation obligation under Art. 11 AI Act, before placing the system on the market. This check is supplemented by a further check by distributors, Art. 24 AI Act. 
  5. Operators must continuously monitor and check the high-risk AI systems, see Art. 26 AI Act. On the one hand, they must take appropriate technical measures to be able to operate the system safely and, on the other hand, provide the personnel and expertise needed for the responsible persons to oversee it (Art. 26 (1) AI Act). In addition, operators are obliged to check the quality of their data to ensure that it is relevant and representative for the system's area of application (Art. 26 (4) AI Act). 
  6. Operators of AI systems used to assess creditworthiness or for pricing in insurance companies must carry out a fundamental rights impact assessment in accordance with Art. 27 AI Act. This is intended to identify the specific risks to fundamental rights, in particular possible violations of the principle of equality (Art. 3 of the German Constitution [Grundgesetz, GG]).

2.3 Special rules for notifying authorities and certificates

Art. 28 to Art. 39 AI Act are aimed at notifying authorities and notified bodies, which are involved in the conformity assessment of high-risk AI systems. Furthermore, Art. 40 to Art. 49 AI Act contain regulations on conformity assessments, certifications and registrations for these high-risk AI systems.

3. Risk class "Low or minimal risk" 

If AI systems are neither categorised as high-risk AI systems nor prohibited, they are AI systems with low or minimal risk. According to Art. 95 AI Act, the AI Office and the member states are to promote and facilitate the establishment of codes of conduct on voluntary compliance with certain provisions of the AI Act. Transparency obligations also apply to AI systems that interact with natural persons, Art. 50 AI Act. Users of such AI must be informed of the fact that they are interacting with AI.

4. GPAI models

The new version of the AI Act introduces the category of general purpose AI models ("GPAI models"), see Chapter V of the AI Act. These models, such as the GPT models on which ChatGPT is based, are designed for general purposes and, according to the AI Act, serve as the basis for the development of AI systems. Only through integration with components such as a user interface or implementation in other systems does a GPAI model become an AI system, cf. Recital 97 AI Act. Due to their versatility and potential for further development, their applications can harbour different risks.

  1. The transparency obligations under Art. 50 AI Act and special obligations for their providers under Art. 53 AI Act apply to GPAI models. When using this AI, natural persons must in particular be able to recognise that they are interacting with AI.
  2. GPAI models may also be categorised as high-risk AI systems as a result of their being implemented in another AI system, see Recital 85 AI Act. The corresponding extended programme of obligations therefore applies to them. 
  3. In addition, Art. 51 AI Act addresses "GPAI models with a systemic risk", the use of which triggers additional risk-minimisation obligations in accordance with Art. 55 AI Act. The categorisation is mainly based on whether these models have high-impact capabilities, but can also be based on a decision by the Commission if it considers a model to be equivalent thereto. The obligations of GPAI model providers under Art. 53 AI Act include, for example, the creation and updating of technical documentation, a summary of the data used for training and the introduction of a copyright policy.
  4. Providers of GPAI models with a systemic risk must carry out model evaluations, assess or mitigate systemic risks and have an appropriate level of cybersecurity and physical infrastructure in accordance with Art. 55 AI Act.

Dr. Axel Grätz


4. Regional Court of Tübingen: first decision on cyber insurance

Cyber insurance is becoming increasingly relevant due to the growing threat of cyberattacks and the associated financial risks. According to the "Allianz Risk Barometer", cyber incidents were considered the biggest business risks for companies worldwide in 2022 and 2023 (in 2023 on a par with business interruption). Cyber insurance is a relatively new product, which is why several legal uncertainties still exist in this regard. It was unclear, for example, whether the case law handed down in relation to other lines of insurance could also be applied to cyber insurance. On 26 May 2023, the Regional Court [Landgericht, LG] of Tübingen issued the first ruling on cover under a cyber insurance policy (docket No. 4 O 193/21). This provides initial insights: 

1. The case

The case concerned the reimbursement by the cyber insurer of various losses suffered by the policyholder as a result of a cyberattack. The attack was a so-called "pass-the-hash" attack, in which an encryption Trojan (so-called ransomware) had infiltrated the system via an opened attachment to a phishing email. This encrypted several of the policyholder's servers, paralysing the entire IT infrastructure. The policyholder did not comply with the attacker's ransom demand, with the result that the encryption of the IT infrastructure remained in place. As a result, the IT infrastructure had to be rebuilt.

The insurer argued that the policyholder had answered the risk questions objectively incorrectly and fraudulently. This constituted a breach of the pre-contractual duty of disclosure (Sections 19, 21 of the German Insurance Contracts Act [Versicherungsvertragsgesetz, VVG]). The policyholder had not provided 11 of its 21 servers with the latest security updates. The implementation of such updates had been the subject of one of the insurer's various risk questions in the run-up to the conclusion of the contract. The insurer therefore declared the rescission of the insurance contract, citing the general terms and conditions of insurance.

In the alternative, the defendant insurer pleaded an aggravation of risk (Sections 23 et seq. VVG) and gross negligence on the part of the policyholder in bringing about the insured event (Section 81 (2) VVG) due to a lack of, or inadequate, security measures. As possible measures, the insurer mentioned two-factor authentication and the monitoring of the IT system by employees, or similar measures capable of preventing cyberattacks.

2. The court’s decision 

With regard to the rescission of the insurance contract, the court ruled that although the risk question regarding the security updates may have been answered incorrectly, insurance cover was not to be denied due to so-called causality counter-evidence pursuant to Section 21 (2) sentence 1 VVG. This was the case because it had been proven that a possible incorrect answer to the risk questions was not the cause of the occurrence of the insured event or the determination or scope of the insurance benefit. An expert opinion established that the cyberattack had exploited a known vulnerability in Windows and that the attack was also successful on servers that had had the necessary security updates. Such counter-evidence of causality only fails in the event of a fraudulent breach of the duty of disclosure, which the court did not assume, however. The Regional Court justified this by stating that, at an event prior to the conclusion of the contract, the insurer had given the impression that it did not place any particularly high demands on the policyholder's IT security. 

The court also rejected an exclusion or reduction of the claim due to an increase in risk after the conclusion of the contract in view of the counter-evidence of causality provided. 

As regards the grossly negligent causation of the insured event and a resulting reduction of the claim, the court relied on a judgement of the Higher Regional Court of Hamm of 18 May 1988 (case No. 20 U 232/87). If the risk situation already existed when the contract was concluded and therefore was, or could have been, the basis of the risk assessment, Section 81 (2) VVG did not apply. The insurer would therefore have had to clarify the existence of additional security measures itself by asking suitable risk questions. By refraining from asking such questions, the insurer accepted the policyholder with its existing risk situation. The insurer cannot subsequently shift some of the risks that existed from the outset onto the policyholder via Section 81 (2) VVG. Measures to be taken should therefore have been made part of the contract before it was concluded.

3. Conclusion

According to the judgement of the Regional Court of Tübingen, the previous case law concerning other classes of insurance is to be applied to cyber insurance. It is nevertheless doubtful whether the judgement will stand in its current form. The defendant has lodged an appeal against the decision with the Stuttgart Higher Regional Court (docket No. 7 U 262/23). The Tübingen decision has already been criticised in the legal literature on insurance law.

From the insurance industry's perspective, the Regional Court's comments on the grossly negligent causation of the insured event pursuant to Section 81 (2) VVG are particularly problematic. According to the court's interpretation, the provision forces the insurer to conduct an extensive clarification of the risks and allows the policyholder to act entirely without care if no agreements have been concluded on a risk situation that existed prior to the conclusion of the insurance contract. This contradicts the character of Section 81 VVG, which is predominantly regarded as a subjective risk exclusion that applies where the policyholder has acted reproachably. This is also reflected in the provision's systematic position in the statute: the causation of the insured event is not regulated in the section on statutory obligations (Sections 19 - 32 VVG).

It therefore remains to be seen how the appeal court will decide. However, we can assume, especially if the decision is upheld on appeal, that the insurers' risk assessment will become even more stringent. To be on the safe side, insurers should scrutinise whether their risk questions in the pre-contractual risk assessment are sufficient. Measures to be taken with regard to existing risk situations should be made part of the contract. From the insurer's point of view, these measures can minimise the risk of a court assuming that the insurer accepted the policyholder with its existing risk situation.

Dr. Hanna Schmidt


5. New unofficial draft bill on the implementation of the NIS 2 Directive published 

On 14 December 2022, the European Union adopted the NIS 2 Directive ((EU) 2022/2555) to address increasing cyber threats, ensure the functioning of the internal market and protect the reliability of supply chains. This Directive aims to overcome the existing fragmentation in the internal market in the area of cybersecurity and to establish a uniformly high level of cybersecurity throughout the European Union.

Initial ("unofficial") drafts and a discussion paper on the German NIS 2 Implementation Act were published over the course of last year, which we have already reported on (see [ITC 3rd quarter, ITC 4th quarter]). An updated, "unofficial" draft bill dated editing status 22 December 2023 has now been available since 7 March 2024.

This updated draft bill contains numerous provisions from the previous discussion paper, but offers a more comprehensive picture of the planned legislation. In addition to the upcoming amendments to the German Act on the Federal Office for Information Security [Gesetz über das Bundesamt für Sicherheit in der Informationstechnik, BSIG], it also includes upcoming amendments and adjustments to other laws, such as the German Energy Industry Act [Energiewirtschaftsgesetz, EnWG] and the German Telecommunications Act [Telekommunikationsgesetz, TKG].

Relevance of cybersecurity at management level

For the management, it is important that the provision under Section 38 (2) of the draft bill (BSIG-E) prevents companies from waiving recourse claims against the management if the management violates its IT security obligations under Section 38 (1) BSIG-E. This provision emphasises the increasing importance of cybersecurity at management level (cybersecurity is the boss’s responsibility). We have already responded to these requirements and, together with our partner ENUR, offer special training for the management so that it can optimally fulfil its upcoming obligations and minimise personal liability risks.

Specific requirements for risk management measures

It is also worth mentioning that the legislator appears to be taking the position that the risk management measures to be taken pursuant to Section 30 BSIG-E should not be limited to critical utility services only, but should also cover large parts of IT. According to Section 30 BSIG-E, "(...) institutions (...) are obliged to take appropriate, proportionate and effective technical and organisational measures to prevent disruptions to the availability, integrity, authenticity and confidentiality of the information technology systems, components and processes they use to provide their services and to minimise the impact of security incidents to the extent possible (...)". The legislator states in this respect that the term (note: services) should not be confused with the provision of critical utility services. Rather, the services meant here are all activities of the organisation for which IT systems are used. This also includes, for example, office IT or other IT systems operated by the institution (updated, "unofficial" draft bill, p. 124 et seq.). 

There are also more detailed explanations of the type and scope of security to be guaranteed in the supply chain. This includes, among other things: contractual agreements with suppliers and service providers on risk management measures, dealing with cybersecurity incidents, patch management, consideration of recommendations of the BSI in relation to their products and services as well as encouraging suppliers and service providers to comply with fundamental principles such as Security by Design or Security by Default (updated, "unofficial" draft bill, p. 125 et seq.).

Outlook - what needs to be done?

If companies fall within the scope of the NIS 2 Directive, they need to address the consequences in good time, as there are currently no transitional periods. We can assist you with the question of whether the NIS 2 implementation applies to your company (you can also use our free NIS 2 check to make an initial assessment of whether the NIS 2 regulations are likely to apply to you: NIS-2-Check) and provide cybersecurity advice to companies and corporate groups on all legal questions arising in their day-to-day business. We also offer comprehensive training courses that go beyond the legal issues, including extensive coverage of the technical side and operational risk management. 

Christian Saßenbach


6. BaFin publishes updated supervisory statement on outsourcing to cloud providers

On 1 February 2024, the German Federal Financial Supervisory Authority [Bundesanstalt für Finanzdienstleistungsaufsicht, BaFin] published an updated supervisory statement on the requirements for outsourcing to cloud providers. The supervisory statement contains important clarifications with regard to the practical difficulties of negotiating contracts between supervised companies and cloud providers. In addition, it contains detailed technical requirements for developing applications in the cloud as well as for the monitoring and control of services outsourced to cloud providers. Finally, for each of the topics covered, the supervisory statement provides an outlook on the provisions of the Digital Operational Resilience Act, which will be applicable from mid-January 2025.

The statement is based on the BaFin’s guidelines on outsourcing to cloud providers of November 2018 and applies to all supervised companies in the financial sector, including credit institutions, financial services institutions and insurance companies. In substance, the supervisory statement applies to the known types of cloud services, i.e. in particular Infrastructure as a Service (IaaS), Platform as a Service (PaaS) and Software as a Service (SaaS). 

1. Specified requirements for the contractual provisions with the cloud provider

The most important part of the supervisory statement with respect to legal practice is its section on drafting contracts with cloud service providers. This is because negotiating contracts between supervised companies and service providers is often challenging. While supervised companies must – due to statutory provisions on outsourcing (in particular in accordance with Section 32 of the German Insurance Supervision Act [Versicherungsaufsichtsgesetz] and Section 25b of the German Banking Act [Kreditwesengesetz]) – retain sufficient control over the provided services (e.g. by issuing instructions), modern cloud providers rely on a high degree of standardisation and regularly wish to avoid granting special rights to individual customers. Therefore, legal clarity regarding the catalogue of obligations is of enormous importance for supervised companies. Against this background, the supervisory statement contains the following key clarifications:

  1. In contrast to the original guidelines from November 2018, the supervisory statement expressly emphasises the applicability of the so-called proportionality principle for all topics mentioned in the statement. This allows for a higher degree of flexibility, particularly for agreements on non-critical or non-material outsourcing with a lower risk profile, in which a softening of individual contractual rights vis-à-vis the provider may be justifiable if there is no risk of jeopardising the interests of the customers of the supervised company.
  2. The statement still requires that information and audit rights must not be contractually restricted and clarifies that such restrictions may not be imposed by cloud providers "through the back door" by means of internal implementation guidelines or through high implementation costs. In the future, supervised companies will have a stronger negotiating position against such common practices of cloud providers.
  3. The supervisory statement clarifies that instructions from the supervised company can also be issued by technical means using a management console or APIs. Accordingly, in some cases – depending on the individual case – it is not necessary to include a universal contractual right of the supervised company to issue instructions in the outsourcing agreement.
  4. It specifies further "good causes" for extraordinary termination by the supervised company. According to the statement, the supervised company should, inter alia, be able to terminate for good cause without notice in the event of breaches by the cloud provider "with regard to the outsourced matter against applicable law, legal provisions or contractual provisions" or in the event of "significant changes [...] that affect the outsourcing agreement or the cloud provider (e.g. further outsourcing or changes to subcontractors)". The wording of all the additional causes for termination mentioned in the supervisory statement is very broad, which is likely to lead to discussions in negotiations despite the clarifications in the supervisory statement.
  5. With regard to choice-of-law clauses, the supervisory statement clarifies that when choosing the law of a country outside the European Economic Area, "all requirements for enforceability of the law [should] still be guaranteed". This is likely to make choosing the law of such a third country unattractive, as the enforceability of the contract must then be assessed in advance in each case.

2. Secure development of applications and IT operations in the cloud: new tips

The supervisory statement contains an entirely new section on secure application development and secure IT operations in the cloud, which is very detailed and highly technical. It provides that, when developing applications in the cloud, the supervised company is obliged to continuously analyse existing risks in consideration of existing regulatory requirements and – depending on the individual case – to take technical and/or organisational measures to mitigate such risks. The supervisory statement, inter alia, addresses the following topics with regard to the development of applications in the cloud:

  1. Observation of best practices issued by cloud providers for application development and IT operations as well as for the documentation of security settings. These should be compared with the supervised company's own (architectural) specifications.
  2. Technical implementation of the supervised company's own (architectural) specifications, e.g. with regard to the use of encryption/permissible cryptographic procedures and the separation of production environments from development, test and other environments. If technical implementation is not possible, the resulting risks should be documented and managed as part of risk management.

The section also contains specifications on cyber and information security, emergency management and the exit strategy to be developed. In particular, the supervisory statement addresses measures (i) to secure network connections against disruption and unauthorised monitoring, (ii) to prevent intrusion by third parties or the expansion of unauthorised access and (iii) to enable administrative access even if the primary connection paths and end devices are disrupted.

3. Monitoring and control of outsourced services

The supervisory statement contains a new section on monitoring the cloud provider's services. The supervised company should take appropriate, risk-oriented, technical and procedural precautions in order to be able to collect, analyse and evaluate the information required for monitoring in a timely, complete and comprehensive manner. The supervisory statement addresses, in particular, the following topics:

  1. Ensuring that the cloud provider makes available all information required for monitoring in a suitable format, which can be accessed by the supervised company.
  2. The possibility of an ad hoc plausibility check of the corresponding data through suitable analyses or measurements. For ongoing monitoring, the supervised companies should specify internal processes and threshold values for warning levels defining when an unacceptable level of service quality and information security is approached (see the sketch below).
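
Purely as an illustration of such threshold values, the following sketch (our own example with assumed metrics and limits, not prescribed by BaFin) derives a warning level from monitoring data supplied by the cloud provider:

  # Assumed internal thresholds for warning levels (illustrative values only).
  THRESHOLDS = {
      "monthly_availability_pct": 99.9,   # minimum acceptable availability
      "incident_response_minutes": 60,    # maximum acceptable response time
  }

  def warning_level(metrics):
      # Returns "warning" as soon as a measured value approaches an unacceptable
      # level of service quality or information security, otherwise "ok".
      if metrics["monthly_availability_pct"] < THRESHOLDS["monthly_availability_pct"]:
          return "warning"
      if metrics["incident_response_minutes"] > THRESHOLDS["incident_response_minutes"]:
          return "warning"
      return "ok"

  print(warning_level({"monthly_availability_pct": 99.5, "incident_response_minutes": 45}))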

The requirements specified in this section are also relevant for the contract between the supervised company and the cloud provider. The parties are well advised to agree on sufficiently detailed service level agreements (SLAs) that include meaningful reporting on service quality in compliance with the BaFin requirements.

4. Outlook on the DORA regulations

Finally, the supervisory statement provides an outlook on the requirements for contractual agreements for the use of information and communication technologies (ICT) between supervised entities and third-party ICT service providers, which are governed by the Regulation of the European Parliament and of the Council on Digital Operational Resilience for the Financial Sector (Digital Operational Resilience Act – DORA). DORA came into force on 16 January 2023 and will be directly applicable from 17 January 2025 onwards.

Supervised companies should review their processes and existing outsourcing agreements and adapt them if necessary, at the latest when the requirements of DORA take effect.

Marco Degginger


 


Dr. Jürgen Hartung
Partner, Attorney
Konrad-Adenauer-Ufer 23
50668 Cologne
T +49 221 2091 643
M +49 172 6925 754

Dr. Marc Hilber, LL.M. (Illinois)
Partner, Attorney
Konrad-Adenauer-Ufer 23
50668 Cologne
T +49 221 2091 612
M +49 172 3808 396

Michael Abels
Partner, Attorney
Konrad-Adenauer-Ufer 23
50668 Cologne
T +49 221 2091 600
M +49 172 2905 362

Dr. Angela Busche, LL.M. (CWSL)
Partner, Attorney
Am Sandtorkai 74
20457 Hamburg
T +49 40 808105 152
M +49 173 4135932

Marco Degginger
Junior Partner, Attorney
Konrad-Adenauer-Ufer 23
50668 Cologne
T +49 221 2091 365
M +49 162 1313 994

Tobias Kollakowski, LL.M. (Köln/Paris 1)
Junior Partner, Attorney, Legal Tech Officer
Konrad-Adenauer-Ufer 23
50668 Cologne
T +49 221 2091 423
M +49 173 8851 216

Patrick Schwarze
Junior Partner, Attorney
Konrad-Adenauer-Ufer 23
50668 Cologne
T +49 221 2091 406
M +49 1520 2642 548

Dr. Axel Grätz
Associate, Attorney
Konrad-Adenauer-Ufer 23
50668 Cologne
T +49 221 2091 604
M +49 170 929 593 6

Christian Saßenbach, LL.M. (Norwich), CIPP/E
Associate, Attorney
Konrad-Adenauer-Ufer 23
50668 Cologne
T +49 221 2091 115
M +49 151 1765 2240