DataBytes: Large Language Models: Careless Speech vs. the Duty to Tell the Truth
February 27, 2025 @ 12:00 PM EST - 1:00 PM EST
Large language models (LLMs) do not distinguish between fact and fiction. They will return an answer to almost any prompt, yet factually incorrect responses are commonplace. Our tendency to anthropomorphize machines and trust models as human-like truth-tellers, consuming and spreading the bad information they produce in the process, is uniquely worrying. They are not, strictly speaking, designed to tell the truth.
Yet they are deployed in many sectors where truth and detail matter, such as education, science, health, the media, law, and finance. Our guest presenter, Sandra Wachter, coined the term "careless speech" to describe a new type of harm created by LLMs, one that poses cumulative, long-term risks to science, education, and shared social truth in democratic societies. These subtle mistruths are poised to degrade and homogenize knowledge over time.
This raises the question: Do large language models have a legal duty to tell the truth?
Join us as Sandra demonstrates the prevalence of hallucinations, and we assess the existence of truth-related obligations in EU human rights law and in the Artificial Intelligence Act, the Digital Services Act, the Product Liability Directive, and the Artificial Intelligence Liability Directive. We will close with proposals for reducing hallucinations in LLMs, followed by a robust Q&A session.