This year, Air Canada lost a case brought by a customer who was misled by its AI chatbot into purchasing full-price plane tickets on the assurance that they would later be refunded under the company’s bereavement policy. The airline argued that the bot was “responsible for its own actions.” The tribunal rejected this line of argument, and the company not only had to pay compensation but also drew public criticism for attempting to distance itself from the situation. The lesson is clear: companies are liable for their AI models, even when those models make mistakes beyond their control. The rapidly advancing world of AI,…