BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//The National Consortium for Data Science - ECPv6.5.0.1//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-ORIGINAL-URL:https://datascienceconsortium.org
X-WR-CALDESC:Events for The National Consortium for Data Science
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20250309T070000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20251102T060000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250424T120000
DTEND;TZID=America/New_York:20250424T130000
DTSTAMP:20260430T084158Z
CREATED:20241127T203412Z
LAST-MODIFIED:20250318T152904Z
UID:10000084-1745496000-1745499600@datascienceconsortium.org
SUMMARY:DataBytes: Do Large Language Models Have a Legal Duty to Tell the Truth?
DESCRIPTION:Large language models (LLMs) do not distinguish between fact and fiction. They will return an answer to almost any prompt\, yet factually incorrect responses are commonplace. Our tendency to anthropomorphise machines and trust models as human-like truth tellers\, consuming and spreading the bad information they produce in the process\, is uniquely worrying. They are not\, strictly speaking\, designed to tell the truth. \nYet they are deployed in many sectors where truth and detail matter\, such as education\, science\, health\, the media\, law\, and finance. Our guest presenter Sandra Wachter coined the term “careless speech” for a new type of harm created by LLMs that poses cumulative\, long-term risks to science\, education\, and shared social truth in democratic societies. These subtle mistruths are poised to cumulatively degrade and homogenize knowledge over time. \nThis raises the question: Do large language models have a legal duty to tell the truth? \nJoin us as Sandra shows the prevalence of hallucinations\, and we assess the existence of truth-related obligations in EU human rights law and in the Artificial Intelligence Act\, Digital Services Act\, Product Liability Directive\, and Artificial Intelligence Liability Directive. We will close with ideas for reducing hallucinations in LLMs and a robust Q&A opportunity. \nRegister for the Event
URL:https://datascienceconsortium.org/event/databytes-april-2025/
CATEGORIES:DataBytes
ATTACH;FMTTYPE=image/png:https://datascienceconsortium.org/wp-content/uploads/NCDS_Flyer_2025_04_DataBytes_Small-Flyer.png
END:VEVENT
END:VCALENDAR