Retail and Consumer Goods · News · Cologne, 03.02.2021

Artificial intelligence in customer service: legal framework conditions for chatbots in online shopping

Chatbots are text-based dialogue systems that companies use primarily to communicate with their customers. Advanced systems are based on so-called machine learning (the generation of knowledge from experience) and thus belong to the field of artificial intelligence. They can be deployed both in a company's apps and on its websites.
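
By way of illustration, the following minimal sketch shows how such a system might learn to classify customer messages from examples rather than hand-written rules; the training utterances, intent labels and use of scikit-learn are purely illustrative assumptions, not a description of any particular product:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training dialogues: the model learns from examples
# ("generation of knowledge from experience") instead of fixed rules.
utterances = [
    "Where is my order?", "Track my parcel",                     # order status
    "I want to return this item", "How do I send it back?",     # returns
    "Is this jacket available in blue?", "Do you have size M?", # product queries
]
intents = ["order_status", "order_status", "returns", "returns",
           "product_query", "product_query"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(utterances, intents)

print(model.predict(["Where is my parcel?"]))  # likely ['order_status']
```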

In 2017, a market research study by YouGov found that half of those surveyed had a positive attitude towards the use of chatbots. The decisive factor was that chatbots can answer questions quickly and independently of opening or consultation hours, thus avoiding annoying hold queues. This opens up far-reaching opportunities for e-commerce. Chatbots are increasingly capable of replacing the work of "real" salespeople: the new generation of chatbots already guides customers through the entire customer journey, from advice and order processing through to after-sales support.

The use of chatbots raises numerous legal issues, in particular concerning general civil law and consumer protection law, competition law, data protection law and even general personality rights. This short article addresses selected problem areas.

Consumer Law

If a customer concludes a contract entirely in dialogue with the chatbot, this constitutes a distance contract, as the contracting parties are not simultaneously physically present. In this case, the consumer protection information obligations under the Introductory Act to the German Civil Code [Einführungsgesetz zum Bürgerlichen Gesetzbuche, EGBGB] apply.

In view of the limited presentation possibilities in a dynamic dialogue, especially when chatbots are used on mobile devices, it makes sense to rely on the exception in Art. 246a § 3 EGBGB and to limit the information to what is strictly necessary: the essential characteristics of the goods and services, the trader's identity, the total price, the term and termination conditions of continuing obligations, and the details concerning the right of revocation. The additional information required may be provided by other means (e.g. by e-mail).
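
To make this concrete, the reduced information set could be modelled roughly as follows so that the chatbot can display it within the dialogue; the structure and field names are our own illustration, not statutory terms:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ReducedPreContractualInfo:
    """Minimum information shown in the chat dialogue itself under
    Art. 246a § 3 EGBGB; the remaining information may follow by e-mail."""
    goods_characteristics: str                  # essential characteristics of the goods/services
    trader_identity: str                        # the trader's identity
    total_price: str                            # total price or the manner of its calculation
    continuing_obligation_terms: Optional[str]  # term/termination, where applicable
    revocation_details: str                     # details concerning the right of revocation

info = ReducedPreContractualInfo(
    goods_characteristics="Leather jacket, size M, black",
    trader_identity="Example Shop GmbH, Cologne",
    total_price="EUR 199.00 incl. VAT plus EUR 4.95 shipping",
    continuing_obligation_terms=None,
    revocation_details="14-day right of revocation; instructions follow by e-mail",
)
```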

A disadvantage of this solution is said to be that it triggers subsequent information obligations; however, these obligations (i.e. confirmation of the conclusion of the contract with a reproduction of its content, which is generally sent by e-mail) almost always exist anyway. Furthermore, the design obligations under § 312i and § 312j (2) and (3) of the German Civil Code [Bürgerliches Gesetzbuch, BGB] must be observed: among other things, consumers must be provided with a means to identify and correct input errors before placing their orders, and buttons stating "order with obligation to pay" (or correspondingly unambiguous wording) must be used.

If the contract is concluded purely through a dialogue with the chatbot, the chatbot would therefore have to provide transparent information about the payment obligation, implement the further design obligations and also provide the information required under the EGBGB.

It is therefore advisable to link the chatbot to the existing shop system so that, after pre-selecting the products via the chatbot, the customer completes the conclusion of the contract in the shop system and finds the required (product) information and legal texts in the usual place.
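
One conceivable handoff pattern is sketched below: the chatbot merely collects the pre-selection and then hands the customer a link into the existing checkout, where the mandatory information, legal texts and the "order with obligation to pay" button are displayed. The endpoint and parameter names are hypothetical:

```python
from urllib.parse import urlencode

SHOP_CHECKOUT_URL = "https://shop.example.com/checkout"  # hypothetical endpoint

def checkout_handoff_link(skus: list) -> str:
    """Build a link that carries the chatbot's pre-selection into the regular
    shop checkout, where the mandatory information, legal texts and the
    "order with obligation to pay" button are displayed."""
    return f"{SHOP_CHECKOUT_URL}?{urlencode([('sku', s) for s in skus])}"

# The bot ends the sales dialogue with a link instead of concluding the
# contract itself, so the in-chat information duties stay minimal.
print(checkout_handoff_link(["JACKET-M-BLACK", "SCARF-RED"]))
```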

Liability issues

Chatbots are also an area in which the law lags behind technological developments. The liability of autonomously operating systems, i.e. ultimately also of virtual assistants, is a controversial topic in legal scholarship. A variety of liability scenarios are conceivable: chatbots can cause damage, for example, by providing incorrect information or "false advice", or by making mistakes during the conclusion of the contract. Companies could also be held liable for "statements" by a chatbot that offend a customer. In addition, liability for breaches of competition law or data protection law is conceivable.

From a civil law perspective, the main issues are causality, fault and attribution. First, it will usually be impossible to determine clearly when damage can be considered to have been caused by the chatbot; this is already a problem with today's complex IT systems. Where assistants make decisions on their own, based on algorithms, vast amounts of data and the experience generated from them, it is possible that no one will be able to reproduce the error. Second, various (natural or legal) persons come into consideration as having caused the damage: the programmer behind the chatbot algorithm, the operator of the platform or online shop in which the chatbot is integrated, or an intermediary, if one is involved.

Under the current rules, liability on the part of the chatbot itself is not possible, since culpable action, i.e. at least negligence, requires foreseeability, which is lacking in the case of artificial intelligence. Attributing the actions of artificial intelligence under the rules applicable to vicarious agents is also not possible, because § 278 BGB only applies to persons. Although the introduction of an "e-person" is under discussion, it is still far from becoming a reality.

In addition, various other liability models are under discussion: the behavioural model regards the robot itself as a subject capable of transacting business and committing a tort. The attribution model, by contrast, attempts to attribute the robot's actions to the company behind it by means of various legal institutions, such as representation (§ 166 BGB), management without mandate (§ 677 BGB) or the vicarious agent (§ 831 BGB). Finally, no-fault liability for damage caused by chatbots by way of strict liability, as it already exists in the area of product liability, is also conceivable.

Where several parties are involved, complex "recourse chains" can also arise. Since it is often very difficult to trace the causal chain clearly, the legislator and the courts are called upon to ensure that liability risks are distributed among all parties in a manner consistent with their interests.

Data protection 

Modern chatbots function by processing personal data in real time; as a rule, the more complex the system, the more data it processes. Besides names, addresses, identification numbers (e.g. customer/order numbers) and the content of the information exchanged with the chatbot, advanced systems access contextual information such as the order history, the content of the shopping cart in an online store or even the location of the user's terminal device.

a) Legal basis required

According to the provisions of the General Data Protection Regulation (GDPR), the processing of such data is only permissible if the requirements of a suitable legal basis under data protection law have been met.

First of all, Art. 6 (1) sentence 1 lit. b GDPR comes into consideration as a legal basis. The provision permits processing operations that are necessary to perform a contract with the data subject or to implement pre-contractual measures. When chatbots are used in customer service, a contractual relationship with the respective user regularly exists already. If the chatbot is to be used to conclude the contract in the first place, the processing operations serve to initiate such a contract.

Whether the processing is necessary within the meaning of Art. 6 (1) sentence 1 lit. b GDPR must always be examined on the basis of the circumstances of the individual case. The European Data Protection Board (EDPB) takes a rather strict position in this respect: it must be determined in advance whether the controller has to process the respective data in order to efficiently fulfil the core obligations arising from the contractual relationship, which are to be determined objectively. If the purpose of a chatbot is the efficient fulfilment of clearly delimited main performance obligations of the trader (an example would be the receipt and handling of complaints about defects in a chat window on the website of an online shop), this is likely to be the case.

In the case of more intelligent chatbots that perform a multitude of tasks, one must carefully examine which processing operations are actually still close enough to the core of the contract and which ultimately tend more to serve the interests of the company using them without primarily aiming at an efficient contractual performance.

In the case of complex chatbots that are able, for example, to classify queries correctly with the aid of extensive context data, there may also be a conflict with the principle of purpose limitation (Art. 5 (1) lit. b GDPR), which requires that the purpose of a processing operation be clear from the outset. To solve this problem, a broad interpretation of the principle is under discussion. However, such an approach would be difficult to reconcile with the position of the EDPB, which advocates strict compliance with purpose limitation, especially with regard to Art. 6 (1) sentence 1 lit. b GDPR.

Overall, especially when using more complex chatbots, it will often be necessary to resort to the legal bases of legitimate interests (Art. 6 (1) sentence 1 lit. f GDPR) and/or consent (Art. 6 (1) sentence 1 lit. a GDPR). However, here as well, there is a potential conflict with the generally applicable principle of purpose limitation.

In certain cases, a consent requirement applies irrespective of the above considerations: under the German Telemedia Act [Telemediengesetz, TMG], in particular where information stored on the user's terminal device (e.g. the location) is accessed, and under Art. 9 GDPR where special categories of personal data (e.g. health data) are processed.
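
In technical terms, such a consent requirement can be implemented as a simple gate in front of the relevant data access. The following sketch assumes a hypothetical consent store and flag name; real implementations will rely on the shop's consent management platform:

```python
from typing import Optional

# Hypothetical consent store: user_id -> set of granted consent flags.
consent_store: dict = {}

def record_consent(user_id: str, purpose: str) -> None:
    consent_store.setdefault(user_id, set()).add(purpose)

def device_location(user_id: str, reported_location: str) -> Optional[str]:
    """Return the device location only if the user has consented to its use;
    otherwise the chatbot must work without this data."""
    if "device_location" in consent_store.get(user_id, set()):
        return reported_location
    return None

record_consent("user-42", "device_location")
print(device_location("user-42", "50.94,6.96"))   # location may be used
print(device_location("user-99", "52.52,13.40"))  # None: no consent recorded
```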

b) Exclusively automated processing according to Art. 22 GDPR

In addition to the requirements of the respective legal basis, the strict requirements of Art. 22 (1), (2) GDPR can apply. This is the case if the functionality of the chatbot provides that the respective user is subject to a decision based solely on automated processing.

A further prerequisite is that the decision produces legal effects concerning the user or similarly significantly affects him or her. This can be the case, for example, if the chatbot independently refuses to conclude a contract or offers a product at a higher price, for instance based on the customer's address. Such fully automated decisions are only permitted if they are necessary for the performance of a contract, the data subject has consented to them or they are otherwise expressly permitted by law. In practice, consent will usually be the only option that can be implemented.
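
One conceivable safeguard is to let the chatbot escalate any decision with legal or similarly significant effect to a human agent unless valid consent (or another Art. 22 (2) exception) covers fully automated decision-making. The following sketch, including all names and fields, is illustrative only:

```python
from dataclasses import dataclass

@dataclass
class Decision:
    outcome: str        # e.g. "refuse_contract" or "individual_price"
    legal_effect: bool  # legal effect or similarly significant effect on the user?

def apply_decision(decision: Decision, has_art22_consent: bool) -> str:
    """Illustrative guard: without consent (or another Art. 22 (2) exception),
    a fully automated decision with legal effect is routed to a human agent."""
    if decision.legal_effect and not has_art22_consent:
        return "escalate_to_human_agent"
    return decision.outcome

print(apply_decision(Decision("refuse_contract", legal_effect=True),
                     has_art22_consent=False))  # -> escalate_to_human_agent
```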

c) Transparency obligations

In order to comply with the transparency obligations according to Art. 13, 14 GDPR, it is advisable to clearly inform customers at the beginning of the chat that personal data will be processed when using the chatbot and to provide a link to more detailed data protection information. Any necessary consents should also be obtained at this point. The linked data protection information should then explain the relevant processing operations in more detail.
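
In practice, the opening message might look roughly like this; the wording and the privacy policy URL are illustrative examples, not vetted legal texts:

```python
# Illustrative opening message combining the first-layer transparency notice
# with a link to the detailed privacy information; wording and URL are examples.
PRIVACY_URL = "https://shop.example.com/privacy/chatbot"  # hypothetical

def opening_message() -> str:
    return (
        "Hi! I am the Example Shop assistant. When you chat with me, your "
        "messages and account data are processed to answer your request. "
        f"Details and your rights: {PRIVACY_URL}"
    )

print(opening_message())
```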

Here again, the problem with complex systems is that it may not yet be clear (in detail) which data will be processed for which purposes. Again, a broad interpretation of the purpose limitation principle can be argued for; however, such an approach also entails a degree of legal uncertainty at this point, as the supervisory authorities have so far taken a strict approach.

d) Data protection impact assessment

Before implementing a chatbot, one should always check whether a data protection impact assessment (DPIA) needs to be carried out. According to Art. 35 (1) sentence 1 GDPR, this is the case for processing operations that involve a particularly high risk for the data subjects. The Data Protection Conference [Datenschutzkonferenz, DSK], the central body of the German data protection supervisory authorities, has published a positive list of cases in which, in its view, such a particularly high risk exists. Item 11 of that list refers in general terms to the "use of artificial intelligence to [...] manage interaction with data subjects [...]", and the DSK cites "customer support using artificial intelligence" as a concrete example. It is accordingly advisable to conduct a DPIA before introducing more complex systems.

For more simply structured chatbots, on the other hand, it can certainly be argued that the characteristic of artificial intelligence is not fulfilled. If a company concludes that a DPIA is not necessary, the reasons for this must be documented in order to fulfil the accountability obligation to which all controllers are subject.
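
Such documentation can be as simple as a structured record of the decision and its reasoning; the following sketch and its fields are merely our own suggestion:

```python
import json
from datetime import date

# Sketch: a structured record of why no DPIA was carried out, kept so the
# decision can later be demonstrated to a supervisory authority.
dpia_decision = {
    "system": "FAQ chatbot v1 (rule-based decision tree, no machine learning)",
    "dpia_required": False,
    "reasoning": "No artificial intelligence within the meaning of item 11 "
                 "of the DSK positive list; no particularly high risk.",
    "decided_on": date.today().isoformat(),
    "decided_by": "Data Protection Officer",
}
print(json.dumps(dpia_decision, indent=2))
```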

Conclusion

The use of chatbots brings many benefits and enjoys broad acceptance among customers. However, a multitude of legal provisions must be taken into account in their actual implementation. The current rules of liability and consumer law, in particular, are only compatible to a limited degree with automated customer service or the conclusion of contracts "per chat".

In practice, data protection law collides with more complex AI-based systems that can access a variety of contextual data and independently develop possible courses of action through machine learning. Time will tell whether the supervisory authorities will actually insist on a strict interpretation of the GDPR.

With the help of well thought-out concepts, however, the risk of unpleasant legal surprises can be significantly minimised. Our experts would be happy to advise you.


Dr. Hanna Schmidt

Junior Partner, Attorney

Konrad-Adenauer-Ufer 23
50668 Cologne
T +49 221 2091 613
M +49 172 1475 126


Marco Degginger

Junior Partner, Attorney

Konrad-Adenauer-Ufer 23
50668 Cologne
T +49 221 2091 365
M +49 162 1313 994
