Between the GDPR, the AI Act, the DSA and Common Decency
Recently, I’ve been trying to put together some legal advice on how one could implement a chatbot without stepping on any of the landmines in the European regulatory minefield. In the process, I always like to look at how others do it; when it comes to this particular question, however, that turned out to be a disappointing avenue. For a technology now implemented on every third website, offering advice on everything from law to cooking, there is surprisingly little written on how the bots should be implemented. And when I say should, I don’t mean the best cybersecurity tips or the best fine-tuning or training tips. I mean: how, when and for what can I use a bot, which data can I use to train it, and how and what information do I need to share with users so that my bot is at least somewhat compliant with most European regulations? (European, just to keep things relatively simple; most of the suggestions I make, however, should also be considered a matter of common decency towards users.)
Anyway, so I sat down and did what I normally do when confronted with a legal mess. I wrote things down.
Implementing a ChatBot 101
1. Choosing a Chatbot
As simple as this one may sound, it is far from a trivial question. The options are manifold: building your own chatbot from open-source code;[1] using one of the gazillion chatbot APIs offered on the market, which allow you the simplest and quickest ready-set-go set-up;[2] fine-tuning your own chatbot on top of one of those APIs;[3] fine-tuning your chatbot using various chatbot tools;[4] or just paying someone to do it all for you by opting for Chatbot as a Service.[5]
Choosing any single one of these options doesn’t come without its ripple effects, which of course include performance and flexibility in setting up the bot, but also the particularities of conforming with legal obligations. Developing your own bot from scratch or relying solely on open-source code, for instance, is definitely the safest option data-protection-wise, as you control all the training data and the data isn’t flowing anywhere else. However, this is not without its downsides, and one should only jump into this frying pan with enough expert resources to get the thing up and running while guaranteeing a certain level of performance. Conversely, relying on APIs always entails a certain risk of data leakage. Not to mention that you rely on someone else’s performance and are, at least in the first instance, responsible for their mistakes as well (GDPR joint-controllership alert). The situation, of course, gets even more complex when yet another tool is used, for fine-tuning for instance.
The simplest option then probably turns out to be leaving the mess to someone else and just buying the product, or rather the service. However, aside from being the most expensive way to go (especially if you want a highly personalized bot), this option has its own pitfalls: one should choose which particular bot to hire VERY carefully, taking into consideration all publicly shared information on data-processing practices, training data used and so on. Or you land back in the fire for failing to comply with due-diligence obligations.
2. Fine-tuning a Chatbot
Once you’ve chosen your bot, and presuming you’ve chosen an option that includes some fine-tuning on your side, congratulations! You’ve just jumped from the frying pan straight into the fire. Regardless of whether you use one of the tools for automated fine-tuning or you take open-source code, roll up your sleeves and get your hands dirty yourself, the data you feed into the model is just as important as the choice of model.
We are all already familiar with the whole garbage-in-garbage-out agenda, but there is another, maybe more important, agenda to be considered. And that is legally-problematic-stuff-in, non-negligible-risk-of-legal-action-out. We have already familiarized ourselves with this concept through the lawsuits of artists and newspapers against the biggest LLM providers. And the very likely scenario is that once the legal situation there has cleared up, the lawsuits may proliferate to anyone 1. using their products or services and 2. doing a similar thing. The key takeaway, of course, is to keep track of legal developments in the field and not to feed your model with (likely) unlawful data. We can also add one bonus takeaway: avoid feeding your model personal data at all times. Setting the copyright debate aside for a second, using personal data where not absolutely necessary will always get you into trouble.
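As a minimal sketch of that bonus takeaway, a scrubbing pass could strip obvious personal data from training examples before fine-tuning. The patterns and the `scrub_pii` helper below are hypothetical illustrations; regexes only catch the low-hanging fruit, and a real pipeline should use dedicated PII-detection tooling.

```python
import re

# Assumption: emails and phone numbers are the PII we target here.
# Real deployments should use proper PII-detection/NER tools instead.
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\+?\d[\d\s\-()]{7,}\d")

def scrub_pii(text: str) -> str:
    """Replace obvious personal identifiers with neutral placeholders."""
    text = EMAIL.sub("[EMAIL]", text)
    text = PHONE.sub("[PHONE]", text)
    return text

print(scrub_pii("Contact Jane at jane.doe@example.com or +31 6 1234 5678."))
```

Running this kind of filter over every training example before it ever reaches the model is cheap insurance compared to a data-protection complaint later.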
One final possibility and potential problem to consider is that nowadays you don’t even need to fine-tune your model. You can fine-tune it continually, so to speak, by performing further API or website calls to fetch the data for the bot’s responses. If that is the case, make sure to respect any limitations on the use of data imposed by the original website provider. These limitations can come in the form of robots.txt files, but can also simply be stated in their Terms and Conditions. Yes, even crawling and linking has its limits.
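The robots.txt part of those limitations can at least be checked mechanically; Python’s standard library ships a parser for exactly this. In the sketch below the robots.txt content is supplied inline so the example is self-contained; in practice you would download it from the site root, and remember that the Terms and Conditions can impose further limits that no parser will catch. The user-agent string `"MyChatBot"` is a made-up placeholder.

```python
from urllib.robotparser import RobotFileParser

# Example robots.txt content (inline here for a self-contained sketch;
# normally fetched from https://<site>/robots.txt).
robots_txt = """\
User-agent: *
Disallow: /private/
"""

rp = RobotFileParser()
rp.parse(robots_txt.splitlines())

# Check each URL before fetching it for the bot's responses.
print(rp.can_fetch("MyChatBot", "https://example.com/articles/recipe"))
print(rp.can_fetch("MyChatBot", "https://example.com/private/data"))
```

A check like this before every fetch costs almost nothing and documents your good faith if data-use questions ever arise.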
3. The Disclaimers
If there is one thing legal experts cannot get enough of, it is ‘disclaimers’. So make sure to implement a fair number of those together with your chatbot. Two absolute non-negotiables: the person interacting with an AI system needs to be made aware of that fact before they can even interact with it, and also of the fact that outputs can be inaccurate and shouldn’t be relied upon. These two can be nicely packed together in the form of a pop-up, but they should also remain continuously visible somewhere on the website, or the user could be repeatedly reminded of their existence. Better overly transparent than sorry applies here.
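On the back-end side, the “repeatedly reminded” part can be as simple as prepending the notice to the first reply of a session and every Nth reply after that. This is a hypothetical sketch: the notice wording, the ten-message interval and the plain-dict session state are all placeholder assumptions, not a template vetted for any particular jurisdiction.

```python
# Assumption: wording and reminder interval are placeholders to adapt.
AI_NOTICE = ("You are chatting with an AI assistant. Answers may be "
             "inaccurate and should not be relied upon as advice.")
REMIND_EVERY = 10  # remind the user every 10 messages

def wrap_reply(session: dict, reply: str) -> str:
    """Prepend the AI disclosure on the first reply and periodically after."""
    n = session.get("messages", 0)
    session["messages"] = n + 1
    if n % REMIND_EVERY == 0:  # n == 0 covers the very first interaction
        return f"{AI_NOTICE}\n\n{reply}"
    return reply
```

The pop-up shown before the first interaction still has to live in the front end; this only keeps the reminder visible inside the conversation itself.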
And the same goes for the privacy notice, the whole notice itself being a sort of disclaimer. Although the workings of a large language model require a computer science degree to be somewhat understandable, you are still required to try and make them understandable within the limited scope of the privacy notice. Imagine explaining what the model does to a six-year-old, or maybe your grandparents, and take it from there. Pictures, videos and graphics are most welcome. On the other hand, if you are using any of the APIs or automated tools mentioned in Step 1, you are of course free to link to the privacy notices of the relevant service provider(s), but that still doesn’t mean you’re off the hook. In this particular context, you are the one offering the service and the first contact point for questions and complaints. Therefore, it is your responsibility to explain where the users’ data is flowing, why that is necessary and how they can stop the processing. And this again requires some skill as well as creativity to be done transparently and adequately. Good luck racking your brains over that one!
4. The Outputs
Now we’ve finally made it to the outputs, so surely we must be approaching the end. If you were thinking that, you were correct! Well, at least somewhat; this one is still a whole separate mountain to climb. Apart from the already mentioned disclaimer stating that results might be incorrect, there are a couple more things to consider, because there are multiple reasons for the possible incorrectness. The first is of course the infamous hallucination of LLMs, due to their inherent lack of understanding of the data we so graciously feed them. And, besides praying that some very smart people figure out how to fix that, there is not much else we can do about the issue other than implementing our disclaimer.
On the other side of the coin, however, we have something different, which applies to all chatbots crawling other websites to find and output information. Here you have to ask yourself what happens if the scraped information is false or even illegal. For situations like these, it might be best to rely on the so-called hosting exception contained in Article 14 of the now already ancient e-Commerce Directive. This exception, which for example has also been applied to search engines, means that hosts and intermediaries are not liable for content they simply provide access to. This, however, only applies if it wasn’t obvious that the content was unlawful. So, to simplify this maximally: first, only crawl and scrape trustworthy information sources you have checked beforehand (don’t try to play Google). Second, make sure to integrate references into all your chatbot’s outputs, so the original sources of all information are immediately visible.
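Keeping sources visible is mostly a bookkeeping exercise: carry the origin URL along with every scraped snippet and append a reference list to the answer. The `Snippet` structure and `answer_with_references` helper below are illustrative names I made up for this sketch.

```python
from dataclasses import dataclass

@dataclass
class Snippet:
    """A scraped piece of text together with where it came from."""
    text: str
    source_url: str

def answer_with_references(answer: str, snippets: list[Snippet]) -> str:
    """Append a numbered source list so users can verify every claim."""
    refs = "\n".join(f"[{i}] {s.source_url}"
                     for i, s in enumerate(snippets, 1))
    return f"{answer}\n\nSources:\n{refs}"

out = answer_with_references(
    "Bread typically needs 30-40 minutes at 220 C.",
    [Snippet("Bake 30-40 min at 220 C.", "https://example.com/bread")],
)
print(out)
```

Besides helping with the hosting-exception argument, visible sources also give users a way to judge the answer’s reliability themselves.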
One last thing worth considering, and worth putting some extra coding hours into, is integrating follow-up questions for situations where the user’s initial input was very broad or unclear. This way, your bot can re-prompt the user, so to speak, so that the user offers a better prompt in response, which in turn makes the model produce better outputs, both accuracy- and performance-wise.
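A crude version of that re-prompting loop could gate very short inputs behind a clarifying question before the model is called at all. The word-count heuristic and threshold below are assumptions for illustration; a production system would use something smarter (the model itself can be asked to judge whether the prompt is answerable).

```python
VAGUE_MIN_WORDS = 4  # assumption: tune per deployment

def needs_clarification(prompt: str) -> bool:
    """Very rough heuristic: treat very short prompts as too vague."""
    return len(prompt.split()) < VAGUE_MIN_WORDS

def respond(prompt: str) -> str:
    if needs_clarification(prompt):
        return "Could you give a bit more detail about what you need?"
    return call_model(prompt)

def call_model(prompt: str) -> str:
    """Placeholder standing in for the real model call."""
    return f"(model answer to: {prompt})"
```

Even this blunt check saves a model call on inputs like “help”, and the follow-up question usually yields a far more answerable prompt.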
5. Quality over speed
And finally, just to nail this one down again, because it appears it always comes down to this: pay special attention to the quality of your bot’s outputs, as this is one of chatbots’ most prominent and definitely most noticeable issues. It was at the heart of the controversy in the temporary Italian ChatGPT ban, where inaccurate outputs were taken as evidence of the inaccuracy of the training data.[6] Hallucinations, as an output deficiency, were and remain one of the main concerns, still preventing chatbots from entering the domain of search engines.[7] And we are not even going to enter the algorithmic-bias/garbage-in-garbage-out debate.[8]
The accuracy and quality of the outputs, hallucinations aside (those remain a separate riddle), can be greatly enhanced by paying special attention to the accuracy, quality and relevance of the training data. Furthermore, if you are actively fetching data through API calls, or in any other way for that matter, the fetched data should also be double-checked for accuracy, representativeness and appropriateness. Finally, you should have appropriate mechanisms in place for identifying any changes necessitating an update of your data sets and, of course, mechanisms for adequately responding to such identified events.
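The simplest such update-identification mechanism is a staleness check: record when each entry was last verified and flag anything older than a chosen window for review. The 30-day window in this sketch is an arbitrary assumption; the right policy depends entirely on how fast your data actually goes out of date.

```python
from datetime import datetime, timedelta, timezone

MAX_AGE = timedelta(days=30)  # assumption: policy-dependent window

def is_stale(last_verified: datetime, now: datetime) -> bool:
    """Flag records whose last verification exceeds the allowed age."""
    return now - last_verified > MAX_AGE

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
print(is_stale(datetime(2024, 1, 1, tzinfo=timezone.utc), now))   # stale
print(is_stale(datetime(2024, 5, 20, tzinfo=timezone.utc), now))  # fresh
```

Running such a check on a schedule, and routing flagged records to a human or a re-fetch job, turns “keep the data up to date” from a good intention into an actual mechanism.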
Quality is an ongoing concern, not a one-time box to be ticked off the checklist. All this comes at a cost, primarily time-wise, making the development process slower. However, quality should always come before speed, as not everyone can afford to ‘move fast and break things’.[9] At least not if they are trying to develop a sustainable and responsible business model.
Final thoughts
Although I generally advocate a more thought-driven and responsible approach to innovation, the industry appears to lean more towards the ‘move fast and break things’ mantra, with the hamster wheel spinning ever faster. Nobody wants to lose one of the most important races of our time. However, while OpenAI is busy trying to develop AGI and lawyers are scrambling to define what an AI system is, there are seemingly much less relevant questions on the minds of many start-ups and commercial actors. Questions which, however, lose all their triviality once we consider their scale and the number of actors they affect. One such question is: how do I develop and implement a chatbot while stepping in as few legal mud puddles as possible? Hopefully, this article can help some of the people trying to do things the right way to start off in the right direction.
[1] 13 Best Open Source Chatbot Platforms to Use in 2023, Botpress, 14 July 2022, https://botpress.com/blog/open-source-chatbots.
[2] Jesse Sumrak, 8 Best Chatbot APIs to Use in 2023, Twilio Blog, 27 December 2022, https://www.twilio.com/blog/best-chatbot-apis.
[3] Olasimbo Arigbabu, Fine-tuning OpenAI GPT-3 to build Custom Chatbot, Medium, 25 January 2023, https://medium.com/@olahsymbo/fine-tuning-openai-gpt-3-to-build-custom-chatbot-fe2dea524561.
[4] Ali Mahdi, 8 ChatBot Alternatives: Which Tool Is Right For You?, Chatling, 11 November 2023, https://chatling.ai/blog/chatbot-alternatives.
[5] Allen Bernard, 10 Top Chatbot Providers You Should Know About in 2023, CMSWire, 10 March 2023, https://www.cmswire.com/digital-experience/10-chatbot-providers-you-should-know-about/.
[6] GPDP, Intelligenza artificiale: il Garante blocca ChatGPT. Raccolta illecita di dati personali. Assenza di sistemi per la verifica dell’età dei minori, 31 March 2023, https://www.gpdp.it/web/guest/home/docweb/-/docweb-display/docweb/9870847#english.
[7] Will Douglas Heaven, Chatbots could one day replace search engines. Here’s why that’s a terrible idea., MIT Technology Review, 29 March 2022, https://www.technologyreview.com/2022/03/29/1048439/chatbots-replace-search-engine-terrible-idea/.
[8] Isabelle Bousquette, Rise of AI Puts Spotlight on Bias in Algorithms, The Wall Street Journal, 9 March 2023, https://www.wsj.com/articles/rise-of-ai-puts-spotlight-on-bias-in-algorithms-26ee6cc9; Rahul Awati, garbage in, garbage out (GIGO), TechTarget, https://www.techtarget.com/searchsoftwarequality/definition/garbage-in-garbage-out.
[9] Beatrice Nolan, Silicon Valley has a new version of its beloved ‘move fast and break things’ mantra, Business Insider, 6 December 2023, https://www.businessinsider.com/silicon-valley-move-fast-and-break-things-sam-altman-openai-2023-12.
Chatbots Caught in the (Legal) Crossfire was originally published in Towards Data Science on Medium.