6th Data and Artificial Intelligence Symposium (DAISY) Causal and Generative AI for Decision Making in Healthcare, Public Health and Policy
June 30, 2026, at the ACM Conference on Bioinformatics, Computational Biology and Health Informatics, held at the University of Calabria in Rende, Italy
ABOUT DAISY
Abstract
Generative AI is rapidly entering high‑stakes decision contexts across healthcare and public health, yet most systems are optimized to predict or generate text — not to produce counterfactually valid, uncertainty‑aware, and accountable recommendations for interventions. The “6th Data and Artificial Intelligence Symposium (DAISY): Causal & Generative AI for Decision Making in Healthcare, Public Health, and Policy” brings together researchers in artificial intelligence, machine learning, causal inference, biomedical informatics, biostatistics, epidemiology, psychometrics, and health policy to advance methods for causally constrained generation, counterfactual evaluation, and principled uncertainty and safety in policy design. Building on five prior editions, the 2026 workshop emphasizes foundational methodology that translates to reliable, equitable and auditable decision support. This half‑day, hybrid workshop features a keynote, a moderated panel on causal reliability of decision‑policy design, and lightning talks selected from peer‑reviewed submissions. Discussion will span practical strategies to encode causal structure in large language models and diffusion models (e.g., directed acyclic graph embedding, invariance constraints), evaluation under unobserved counterfactuals (e.g., off‑policy estimators, policy regret, decision calibration), handling epistemic/aleatoric uncertainty and safety cases, governance and auditability (intervention‑aware guardrails, traceability), transportability across sites and populations, and pathways for clinical and public‑health integration (trial designs, monitoring, rollback). DAISY aims to catalyze a community to develop causally reliable generative decision support models for real‑world health and policy impact.
Motivation and rationale
Generative artificial intelligence (AI), including large language models (LLMs), diffusion models, and program-guided generators, is rapidly entering high-stakes decision contexts across healthcare, public health, and policy. Yet today's systems are typically trained to predict or generate, not to produce counterfactually valid, uncertainty-aware, and accountable recommendations for interventions. This gap risks unreliable or inequitable outcomes when models are deployed in practice.
Focus
DAISY will convene AI, machine learning (ML), causal inference, biomedical informatics, biostatistics, epidemiology, psychometrics, and health policy researchers to highlight AI approaches for causally constrained generation, counterfactual evaluation, and principled uncertainty and safety.
Past DAISY experiences
The workshop extends five prior DAISY editions. DAISY is an established international, interdisciplinary AI forum created by the University of Florida, whose main theme evolves every year based on top AI trends and challenges. Over past editions, DAISY has covered personalized AI, bias in AI, and AI governance across medicine, public health, and business perspectives. Participants from all over the world (academics, public officials, and professionals) gathered, presented, and exchanged perspectives. For example, in 2023, DAISY focused on AI governance, with the goal of developing interdisciplinary theories to help guide for-profit and non-profit organizations, with sponsorship from the U.S. National Science Foundation. In 2024, the main theme was prescriptive AI: going beyond predictive AI toward mechanistic discoveries from large healthcare data (e.g., for drug repurposing and behavioral change), gaining attention through NVIDIA support; this was the largest edition to date, surpassing 100 in-person attendees. In 2025, DAISY showcased the intersection of academia and industry in advancing AI through joint research, commercialization, and workforce development, in collaboration with Merck.
A global collaboration
DAISY will be held as a workshop for participants of the 17th ACM Conference on Bioinformatics, Computational Biology, and Health Informatics (ACM BCB), the flagship conference of the ACM Special Interest Group on Bioinformatics, Computational Biology, and Biomedical Informatics (SIGBio). DAISY is squarely aligned with ACM BCB's Clinical & Healthcare Informatics categories (e.g., Public Health and Population Health Informatics) and AI & Data Science categories (e.g., AI and Machine Learning in Medicine and Healthcare, and Large Language Models (LLMs) for Healthcare Applications), and addresses the conference goal of showcasing methods that transform data into actionable knowledge for biomedical discovery and healthcare delivery. Importantly, a prior edition of DAISY focused on AI governance was featured by ACM SIGBio in 2023.
Workshop leadership
Organizing committee
Mattia Prosperi, Ph.D., FAMIA, FACMI
Professor and Associate Dean for AI and Innovation; EPI Chief Research Information Officer
Department: College of Public Health and Health Professions Dean's Office
Phone: (352) 273-5860
Email: m.prosperi@ufl.edu

Noah Hammarlund
Phone: (352) 273-6073
Email: noah.hammarlund@ufl.edu

Yi Guo, PhD, FAMIA
Associate Professor & Associate Chair for Data Science; Division Chief, Biomedical Informatics and Data Science
Department: MD-HOBI-GENERAL
Phone: (352) 294-5969
Email: yiguo@ufl.edu
Program committee
Schedule and topics
Keynote speaker

Takis Benos, Ph.D.
William Bushnell Presidential Chaired Professor
Department of Epidemiology
University of Florida College of Public Health and Health Professions
Takis Benos has a background in applied mathematics and molecular biology. He received postdoctoral training in genomics with Michael Ashburner (EMBL-EBI, Cambridge, U.K.) and computational biology with Gary Stormo (Washington University in St. Louis, MO) before he became faculty at the University of Pittsburgh in 2002. In June 2022, he was recruited by the Department of Epidemiology at the University of Florida.
Dr. Benos’ group develops machine learning algorithms to perform integrative analyses of multi-modal data, including many types of -omics, clinical and medical imaging data. He has long experience in applying causal graph learning methods to molecular, tissue and patient data to identify factors affecting disease progression and mortality and model disease trajectories and subtypes. He is particularly interested in chronic diseases (mainly, lung, heart, kidney), cancer and aging. He has >150 peer-reviewed publications in journals such as Nature, Science, PNAS, Nature Communications and PLoS Medicine. He has been continuously funded by NSF and NIH since 2003.
Dr. Benos is also devoted to training young students. He has co-founded two Ph.D. programs and served as their Director or Associate Director. He has mentored 23 graduate students, 12 postdocs, and >40 undergraduates, many of whom are now faculty at universities or research institutions or lead research groups in pharmaceutical companies.
Agenda
DAISY will be in person, over half a day, with hybrid support to broaden participation. The workshop will feature a keynote talk, a panel discussion and lightning talks from contributing participants, structured as follows:
- Opening by the organizers (5m)
- Keynote speech (45m) with Q&A (10m)
- Coffee break (30m)
- Panel discussion (30m) [PANELISTS TBA]
- Lightning talks (3x15m) [TBA]
The panel discussion will center on the causal reliability of decision-policy design. The session will be moderated and will include a set of prearranged questions, with time set aside for audience questions.
Illustrative themes and questions include:
- Theme 1: Foundations and validity
- What does “causal reliability” mean for policy-generating models in practice (e.g., vaccination outreach, care gap closure), and how would you evidence it to a non-technical decision maker?
- Which causal structures are most practical to impose on LLMs/diffusion models today (structural causal model or directed acyclic graph priors, invariance constraints, instrumental-variable signals), and where do they fail in real data?
- Theme 2: Evaluation under counterfactuals
- For first deployments, when would you choose A/B testing, a stepped-wedge design, or an adaptive platform trial for policy-generating systems in hospitals or health departments?
- What is the minimum viable benchmark for causal generative AI (datasets, tasks, metrics), and under what diagnostics is synthetic data acceptable for evaluation?
- Theme 3: Clinical and public health integration
- Where should causal generative AI live in the workflow (advisory copilot, policy generator with human approval, or auto-execution with overrides), and who is accountable for the final decision?
- What constitutes a pre-deployment “safety case” (counterfactual stress tests, guardrails tied to SCM constraints, fail-safes), and how should uncertainty be communicated to clinicians and policymakers?
- If the learned policy underperforms post-launch, what is the plan for monitoring, rollback, and updating without creating operational whiplash or equity harms?
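To make Theme 2's off-policy evaluation questions concrete, below is a minimal inverse-propensity-scoring (IPS) sketch, a standard building block for estimating the value of a candidate policy from logged decisions without deploying it. This is an illustrative example only; the function name `ips_value`, the toy patient contexts, and the reward values are hypothetical and not part of the workshop materials.

```python
def ips_value(logged, target_policy):
    """Inverse-propensity-scoring (IPS) estimate of a target policy's value
    from logged (context, action, reward, logging_propensity) tuples."""
    total = 0.0
    for context, action, reward, propensity in logged:
        # Reweight each logged reward by how much more (or less) likely the
        # target policy is to take the logged action than the logger was.
        total += target_policy(context, action) / propensity * reward
    return total / len(logged)

# Toy log: a uniform-random logging policy (propensity 0.5 per action)
# chose between action 0 (no outreach) and action 1 (outreach).
logged = [
    ("patient_a", 1, 1.0, 0.5),
    ("patient_b", 0, 0.0, 0.5),
    ("patient_c", 1, 1.0, 0.5),
]

# Candidate policy to evaluate: always choose outreach (action 1).
def always_outreach(context, action):
    return 1.0 if action == 1 else 0.0

print(ips_value(logged, always_outreach))  # (2.0 + 0.0 + 2.0) / 3 ≈ 1.33
```

In practice the estimator's variance (driven by small propensities) and violations of positivity are exactly the diagnostics panelists would weigh before trusting such an estimate for a first deployment.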
The lightning talks will be selected from authors’ submissions to DAISY.
Call for contributions
DAISY will accept paper/poster submissions compliant with the ACM BCB guidelines, using the ACM Master Article Template. Submissions will be peer-reviewed by the DAISY program committee and may be selected for an oral “lightning talk” during the workshop or for poster presentation. Upon request by the authors, the DAISY committee can forward an accepted contribution to the main ACM BCB committee for consideration for inclusion in the main conference proceedings (to be published in the ACM Digital Library).
Abstract topics include:
- Causally constrained generation for interventions (e.g., structural causal model- or directed acyclic graph‑guided large language models, intervention‑aware decoding).
- Counterfactual evaluation and off‑policy methods (policy regret, decision calibration) for policy‑generating models.
- Uncertainty quantification and safety (epistemic/aleatoric uncertainty, risk bounds, causal stress testing).
- Governance, auditability and guardrails at the model/policy level (traceability, policy cards, accountability).
- Transportability and robustness across sites/populations (domain shift, invariances, causal transport).
- Fairness and equity for counterfactual recommendations (bias in graphs, group‑specific regret, harm bounds).
- Multimodal and longitudinal data integration (electronic health records, claims, social determinants of health, imaging, wearables) for causal generative AI.
- Benchmarks, synthetic data, and simulation testbeds for causal generative AI evaluation (design, diagnostics, validity).
- Human‑in‑the‑loop deployment and evidence generation (A/B, stepped‑wedge, adaptive platforms; monitoring/rollback).
- Applications in practice: clinical decision support, care pathways, drug repurposing, population health interventions and resource allocation.
SUBMIT A WORKSHOP PAPER OR POSTER ABSTRACT
DAISY 2026: Call for Workshop Papers
DAISY invites original submissions that have not been published and are not currently under review elsewhere. Authors can submit: (1) regular papers of 8-10 pages, including references, formatted in the double-column ACM SIG conference format; or (2) one-page abstracts describing their posters. Manuscripts should comply with the requirements delineated on the ACM BCB website (including use of the ACM Master Article Template). Accepted submissions will not be automatically published in the ACM BCB proceedings or ACM BCB featured journals; authors who wish to pass their submission on to the main conference committee for consideration of publication should contact the DAISY committee promptly upon acceptance.
Attendance and Registration
Participation in DAISY 2026 requires registration for the main conference, ACM BCB 2026. For accepted papers/abstracts, at least one author must register for the conference.
Additional information
Important dates
- Call for contributions opens on March 5, 2026
- Deadline for submissions is April 15, 2026
- Authors’ notifications will be sent by May 15, 2026
Contact
Dr. Noah Hammarlund, noah.hammarlund@phhp.ufl.edu