BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//The National Consortium for Data Science - ECPv6.5.0.1//NONSGML v1.0//EN
CALSCALE:GREGORIAN
METHOD:PUBLISH
X-WR-CALNAME:The National Consortium for Data Science
X-ORIGINAL-URL:https://datascienceconsortium.org
X-WR-CALDESC:Events for The National Consortium for Data Science
REFRESH-INTERVAL;VALUE=DURATION:PT1H
X-Robots-Tag:noindex
X-PUBLISHED-TTL:PT1H
BEGIN:VTIMEZONE
TZID:America/New_York
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20230312T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20231105T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20240310T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20241103T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20250309T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20251102T020000
END:STANDARD
BEGIN:DAYLIGHT
TZOFFSETFROM:-0500
TZOFFSETTO:-0400
TZNAME:EDT
DTSTART:20260308T020000
END:DAYLIGHT
BEGIN:STANDARD
TZOFFSETFROM:-0400
TZOFFSETTO:-0500
TZNAME:EST
DTSTART:20261101T020000
END:STANDARD
END:VTIMEZONE
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260422T120000
DTEND;TZID=America/New_York:20260422T130000
DTSTAMP:20260429T182605Z
CREATED:20260409T142900Z
LAST-MODIFIED:20260409T144112Z
UID:10000103-1776859200-1776862800@datascienceconsortium.org
SUMMARY:DataBytes: An Agentic Operating System for Toxicology
DESCRIPTION:Chemical safety assessment today is slow\, costly\, and fragmented—held back by siloed data streams and workflows that haven’t kept pace with scientific complexity. While many in the field focus on automating report generation\, this webinar introduces a different paradigm: an agentic operating system in which AI agents collaborate with toxicologists rather than replace them. \nWhat does “agentic” truly mean beyond the buzz? Through live demonstrations\, Thomas Luechtefeld\, PhD\, CEO and Founder of Insilica\, will show how autonomous systems can reason across heterogeneous scientific evidence\, surface insights before they’re requested\, and accumulate institutional knowledge over time. He’ll explore the architectural and epistemic shift from treating AI as a prediction tool to designing AI as infrastructure—an approach that becomes essential in domains where accuracy directly impacts regulatory decisions and public health. \nAttendees will learn: \n\nWhy autonomous agents require fundamentally different architecture than traditional machine‑learning pipelines\nHow AI systems can retain and build on organizational knowledge instead of starting from scratch\nWhat “production‑ready” looks like when outputs must withstand scientific and regulatory scrutiny\nHow an AI agent reasons through a toxicology question in real time\, shown in a live demonstration\nThe real limitations of current agent architectures—and where the technology still falls short\n\nRegister for the Event
URL:https://datascienceconsortium.org/event/databytes-apr-2026/
LOCATION:via Zoom
CATEGORIES:DataBytes
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20260218T120000
DTEND;TZID=America/New_York:20260218T130000
DTSTAMP:20260429T182605Z
CREATED:20260212T211246Z
LAST-MODIFIED:20260213T170102Z
UID:10000101-1771416000-1771419600@datascienceconsortium.org
SUMMARY:DataBytes: Breaking the Georeferencing Bottleneck - How AI is Transforming Natural History Collections
DESCRIPTION:Large language models (LLMs) offer a transformative solution to one of the most persistent challenges facing natural history collections: converting textual locality data from specimen labels into precise geographical coordinates. Traditional georeferencing methods require substantial expertise\, time\, and financial resources\, constraints that have left millions of specimens in museums and herbaria without spatial data. Our standardized testing demonstrates that contemporary LLMs can achieve near-human accuracy in georeferencing tasks while dramatically reducing both processing time and costs. By integrating LLMs into existing digitization workflows\, institutions can accelerate the spatial enablement of their collections\, unlocking new research opportunities in biodiversity science\, climate change studies\, and conservation planning. This approach has immediate practical applications for collection managers while advancing the broader goal of making natural history data more accessible and analytically powerful for the research community. \nWhat to expect: \n\nPerformance benchmarks and validation: Understand how LLM accuracy compares to traditional georeferencing methods across different specimen types\, locality descriptions\, and geographic regions\, with concrete metrics from standardized testing.\nScalability and resource optimization: Discover how LLMs can dramatically reduce georeferencing bottlenecks\, enabling collections to process thousands of specimens at a fraction of the traditional cost and time investment.\nUnlocking collection value: See how accelerated georeferencing opens new avenues for research applications\, data mobilization initiatives\, and cross-institutional data sharing that were previously limited by spatial data gaps.\n\nRegister for the Event
URL:https://datascienceconsortium.org/event/databytes-feb-2026/
LOCATION:via Zoom
CATEGORIES:DataBytes
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251210T120000
DTEND;TZID=America/New_York:20251210T130000
DTSTAMP:20260429T182605Z
CREATED:20251015T165314Z
LAST-MODIFIED:20251203T162836Z
UID:10000100-1765368000-1765371600@datascienceconsortium.org
SUMMARY:DataBytes: Calling All Data Scientists... Introducing ROBOKOP
DESCRIPTION:ROBOKOP (Reasoning Over Biomedical Objects linked in Knowledge Oriented Pathways) is a knowledge graph system that aims to accelerate and advance scientific discovery by enabling users to simultaneously explore dozens of integrated and harmonized sources of biomedical knowledge. ROBOKOP has been applied to suggest mechanistic insights into biomedical questions and hypotheses for subsequent testing across multiple domains\, including hepatotoxicity\, environmental determinants of disease\, and drug repurposing and mechanism of action\, among others. \nDuring this NCDS webinar\, Dr. Karamarie Fecho will provide a short presentation and live demonstration of ROBOKOP\, followed by an opportunity for attendees to pose questions to ROBOKOP and explore the answers. \nAfter this webinar\, attendees will:\n● Gain a general understanding of knowledge graphs\n● Be aware of the ROBOKOP knowledge graph system and know how to access it\n● Have a basic understanding of how to query ROBOKOP and interpret results \nFunding\nROBOKOP is funded by NIEHS with joint support from the NIH Office of Data Science (#U24ES035214). Drs. Alexander Tropsha and Chris Bizon\, of the University of North Carolina at Chapel Hill\, serve as Principal Investigators. \nAbout Dr. Karamarie Fecho\nDr. Fecho holds a PhD in Neurobiology from UNC’s School of Medicine. Her scientific background is broad and spans the clinical and translational spectrum\, from basic science to clinical research to healthcare quality assurance/improvement and data science. \nKara currently serves as Founder and CEO of Copperline Professional Solutions\, a small biomedical consulting company that she founded in 2010. Through Copperline\, Kara has engaged with numerous academic organizations\, pharmaceutical companies\, tech start-ups\, and non-profits\, providing a wide array of services\, products\, and expertise. \nKara is also a Research Affiliate at RENCI\, where in recent years she has focused on the development and application of biomedical knowledge graphs. This webinar will focus on her work on the ROBOKOP knowledge graph system. \nREGISTER HERE
URL:https://datascienceconsortium.org/event/databytes-dec-2025/
LOCATION:via Zoom
CATEGORIES:DataBytes,Professional Development,Upskilling
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20251030T110000
DTEND;TZID=America/New_York:20251030T120000
DTSTAMP:20260429T182605Z
CREATED:20251015T155907Z
LAST-MODIFIED:20251015T170701Z
UID:10000097-1761822000-1761825600@datascienceconsortium.org
SUMMARY:DataBytes: Real-World AI Evaluation: Overcoming the AI Assurance Bottleneck
DESCRIPTION:Current AI evaluation methods focus solely on the underlying technology and lack the detail needed for real-world deployment decisions. Civitaas aims to bridge this gap and accelerate AI market readiness by evaluating the quality\, safety\, and utility of AI technologies in the real world. We achieve this through real-time feedback gathered from users as they interact with AI systems\, leading to more meaningful outcomes that enable businesses to leverage AI effectively and responsibly. \nRegister for the Event
URL:https://datascienceconsortium.org/event/databytes-oct-2025/
LOCATION:via Zoom
CATEGORIES:DataBytes,Professional Development,Upskilling
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20250424T120000
DTEND;TZID=America/New_York:20250424T130000
DTSTAMP:20260429T182605Z
CREATED:20241127T203412Z
LAST-MODIFIED:20250318T152904Z
UID:10000084-1745496000-1745499600@datascienceconsortium.org
SUMMARY:DataBytes: Do Large Language Models Have a Legal Duty to Tell the Truth?
DESCRIPTION:Large language models (LLMs) do not distinguish between fact and fiction. They will return an answer to almost any prompt\, yet factually incorrect responses are commonplace. Our tendency to anthropomorphise machines and to trust models as human-like truth tellers\, consuming and spreading the bad information they produce in the process\, is uniquely worrying. They are not\, strictly speaking\, designed to tell the truth. \nYet they are deployed in many sectors where truth and detail matter\, such as education\, science\, health\, the media\, law\, and finance. Our guest presenter\, Sandra Wachter\, coined the term “careless speech” to describe a new type of harm created by large language models (LLMs)\, one that poses cumulative\, long-term risks to science\, education\, and shared social truth in democratic societies. These subtle mistruths are poised to cumulatively degrade and homogenize knowledge over time. \nThis raises the question: do large language models have a legal duty to tell the truth? \nJoin us as Sandra demonstrates the prevalence of hallucinations and assesses the existence of truth-related obligations in EU human rights law and in the Artificial Intelligence Act\, the Digital Services Act\, the Product Liability Directive\, and the Artificial Intelligence Liability Directive. We will close with proposals for reducing hallucinations in LLMs and a robust Q&A session. \nRegister for the Event
URL:https://datascienceconsortium.org/event/databytes-april-2025/
CATEGORIES:DataBytes
ATTACH;FMTTYPE=image/png:https://datascienceconsortium.org/wp-content/uploads/NCDS_Flyer_2025_04_DataBytes_Small-Flyer.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20241212T120000
DTEND;TZID=America/New_York:20241212T130000
DTSTAMP:20260429T182605Z
CREATED:20241127T170023Z
LAST-MODIFIED:20241204T170748Z
UID:10000082-1734004800-1734008400@datascienceconsortium.org
SUMMARY:DataBytes: Intro to NLP\, LLMs and more - a Taste of Foundations of AI
DESCRIPTION:In this short session you’ll learn the basics of Natural Language Processing\, starting with turning words and sentences into embeddings for use in deep learning tasks\, as well as an intro to language models\, large and otherwise. You’ll learn some hands-on tips and tricks for making sense of language data\, and get a technical but beginner-friendly introduction to the wide world of AI with language. \nRegister for the Event
URL:https://datascienceconsortium.org/event/databytes-dec-2024/
CATEGORIES:DataBytes,Professional Development,Upskilling
ATTACH;FMTTYPE=image/png:https://datascienceconsortium.org/wp-content/uploads/NCDS_Flyer_2024_12_DataBytes_Small-Flyer-1.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240430T153000
DTEND;TZID=America/New_York:20240430T161500
DTSTAMP:20260429T182605Z
CREATED:20240222T165513Z
LAST-MODIFIED:20240222T170615Z
UID:10000070-1714491000-1714493700@datascienceconsortium.org
SUMMARY:DataBytes: Supporting AI Risk Management in the Analytics Lifecycle
DESCRIPTION:The National Institute of Standards and Technology (NIST) released an AI Risk Management Framework for the trustworthy and responsible use of AI and analytics. NIST offers a portfolio of measurements\, standards\, and legal metrology to provide recommendations that ensure traceability\, enable quality assurance\, and harmonize documentary standards and regulatory practices. The framework is highly detailed\, with recommendations across four functions: govern\, map\, measure\, and manage. In this session\, we’ll discuss incorporating these recommendations into the analytics lifecycle. Attendees will gain a greater understanding of trustworthy AI best practices\, as well as of user roles and expectations for building responsible analytics. \nJoin NCDS as Sophia Rowland\, a Senior Product Manager focusing on ModelOps and MLOps at SAS\, walks us through this important presentation. \nRegister for the Event
URL:https://datascienceconsortium.org/event/databytes-supporting-ai-risk-management-in-the-analytics-lifecycle/
LOCATION:via Zoom
CATEGORIES:DataBytes,Professional Development,Upskilling
ATTACH;FMTTYPE=image/png:https://datascienceconsortium.org/wp-content/uploads/NCDS_Flyer_2024_04_DataBytes_Small-Flyer.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20240222T120000
DTEND;TZID=America/New_York:20240222T130000
DTSTAMP:20260429T182605Z
CREATED:20240202T162917Z
LAST-MODIFIED:20240213T163833Z
UID:10000068-1708603200-1708606800@datascienceconsortium.org
SUMMARY:DataBytes: Becoming A Data Detective - Holding AI Accountable
DESCRIPTION:Bias and brittleness in artificial intelligence (AI) tools are a growing concern. Join Hilke Schellmann\, Emmy Award-winning investigative reporter\, Wall Street Journal and Guardian contributor\, and Journalism Professor at NYU\, as she shares key takeaways from her book\, The Algorithm: How AI Decides Who Gets Hired\, Monitored\, Promoted\, and Fired and Why We Need to Fight Back Now. \nAI is now being used to decide who has access to an education\, who gets hired\, who gets fired\, and who receives a promotion. Algorithms are on the brink of dominating our lives and threaten our human future if we don’t fight back. During the webinar\, Schellmann will share takeaways about the rise of AI in the world of work and show how she tested many of the available tools herself without coding experience. \nDuring our time together\, Hilke will share a few key takeaways from the book and answer questions from the audience. You don’t want to miss this. \nRegister for the Event
URL:https://datascienceconsortium.org/event/databytes-feb-2024/
LOCATION:via Zoom
CATEGORIES:DataBytes,Professional Development,Upskilling
ATTACH;FMTTYPE=image/png:https://datascienceconsortium.org/wp-content/uploads/NCDS_Flyer_2024_02_DataBytes_Small-Flyer-1.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20231011T153000
DTEND;TZID=America/New_York:20231011T163000
DTSTAMP:20260429T182605Z
CREATED:20231003T133202Z
LAST-MODIFIED:20231003T205542Z
UID:10000066-1697038200-1697041800@datascienceconsortium.org
SUMMARY:DataBytes: The Risks of Facial Recognition Technology: Dismantling the First Amendment Defense
DESCRIPTION:In a lawsuit challenging its surveillance activities\, Clearview AI used the First Amendment as a defense. The facial recognition technology company argued that the creation and use of its surveillance product was First Amendment-protected speech. Join Talya Whyte\, a third-year law student at New York University\, as she presents a case study on the parties’ basic arguments\, Clearview AI’s characterization of its activities as “speech\,” and the implications of this argument. Attendees will understand how facial recognition technology works and the risks and harms inherent in building and implementing it\, and will gain the knowledge to make more informed legal\, policy\, and technical choices about the implementation of AI-based surveillance technology. \nTalya Whyte is a third-year law student at New York University. Her research interests lie at the intersection of new technology\, society\, public trust\, and digital rights. She is a 2023 Google Legal Scholar\, a Student Fellow at the Engelberg Center on Innovation Law & Policy\, and an NYU Cyber Scholar. Whyte hopes for a thoughtful and humanitarian integration of technology into existing legal and societal frameworks. \nRegister for the Event
URL:https://datascienceconsortium.org/event/databytes-oct-2023/
CATEGORIES:DataBytes
ATTACH;FMTTYPE=image/png:https://datascienceconsortium.org/wp-content/uploads/NCDS_Flyer_2023_10_DataBytes_Small-Flyer.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20230620T160000
DTEND;TZID=America/New_York:20230620T170000
DTSTAMP:20260429T182605Z
CREATED:20230421T151948Z
LAST-MODIFIED:20230922T192118Z
UID:10000060-1687276800-1687280400@datascienceconsortium.org
SUMMARY:DataBytes: AI Ethics Through the Lens of Causality: A Theory of Fairness
DESCRIPTION:The National Consortium for Data Science looks forward to hosting Christopher Lam\, CEO of Epistamai\, on June 20 for our next DataBytes event as he discusses AI Ethics Through the Lens of Causality: A Theory of Fairness. \nView a recording of the event here. \nTo understand fairness\, one must unify central ideas from the social sciences and humanities with mathematics and computer science. Join Christopher Lam\, CEO of Epistamai\, as he shows how to model a principal cause of algorithmic bias and directly map it to the two fundamental laws of causal inference. He will also show how to bridge the field of causal inference to machine learning\, providing a novel way to visualize the different ways a supervised machine learning model can discriminate. These causal models may help policymakers on both sides of the aisle modernize AI regulations so that they are aligned with society’s values.
URL:https://datascienceconsortium.org/event/databytes-june-2023/
CATEGORIES:DataBytes
ATTACH;FMTTYPE=image/png:https://datascienceconsortium.org/wp-content/uploads/NCDS_Flyer_2023_06_DataBytes_Small-Flyer.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20230420T110000
DTEND;TZID=America/New_York:20230420T120000
DTSTAMP:20260429T182605Z
CREATED:20230308T220141Z
LAST-MODIFIED:20230505T163955Z
UID:10000055-1681988400-1681992000@datascienceconsortium.org
SUMMARY:DataBytes: Data Ethics
DESCRIPTION:The National Consortium for Data Science looks forward to hosting Anisha Nadkarni\, Data Ethics Officer and Lead Data Analyst at Randstad\, on April 20 for our next DataBytes event as she walks us through the practical challenges she faces in her work and how she addresses them. \nData ethics is a growing concern in all industries\, especially as issues such as algorithmic bias\, informed consent\, and privacy become more nuanced. Additionally\, with artificial intelligence and machine learning tools gaining traction at a rapid pace\, it is more imperative than ever that organizations establish strong ethical guidelines around the data collected from client projects\, research endeavors\, and business affairs. Join Anisha Nadkarni\, Data Ethics Officer at Randstad Global\, on Thursday\, Apr. 20\, from 11 a.m. to 12 p.m. ET\, as she walks us through a day in the life of a data ethicist. We’ll hold a Q&A session with Nadkarni at the end of the meeting.
URL:https://datascienceconsortium.org/event/databytes-data-ethics/
CATEGORIES:DataBytes
ATTACH;FMTTYPE=image/png:https://datascienceconsortium.org/wp-content/uploads/NCDS_Flyer_2023_04_DataBytes-for-web-02.png
END:VEVENT
BEGIN:VEVENT
DTSTART;TZID=America/New_York:20230214T120000
DTEND;TZID=America/New_York:20230214T130000
DTSTAMP:20260429T182605Z
CREATED:20230209T175501Z
LAST-MODIFIED:20230505T045157Z
UID:10000047-1676376000-1676379600@datascienceconsortium.org
SUMMARY:DataBytes: Five ways that data visualizations can mislead (and how to fix them)
DESCRIPTION:Visualizations allow people to readily analyze and communicate data. However\, many common visualization designs produce engaging imagery but lead to false conclusions. By understanding what people see when they look at a visualization\, we can design visualizations that support more accurate data analysis and avoid unnecessary biases. \nJoin UNC Computer Science Assistant Professor Danielle Szafir on Tuesday\, Feb. 14\, from 12-1 p.m. ET\, as she walks us through best practices in data visualization and analysis. We’ll hold a Q&A session with Dr. Szafir at the end of the meeting. \nView a recording of the event here. \nIf you would like to review content from the previous webinars in the series\, please click here to watch the recordings.
URL:https://datascienceconsortium.org/event/databytes-five-ways-that-data-visualizations-can-mislead-and-how-to-fix-them/
CATEGORIES:DataBytes,Professional Development
ATTACH;FMTTYPE=image/png:https://datascienceconsortium.org/wp-content/uploads/NCDS_Flyer_2023_02_DataBytes.png
END:VEVENT
END:VCALENDAR