Applied AI & Real-World Evaluation
Developing responsible, interpretable AI systems grounded in real-world use.
Systemic Problem
AI is increasingly deployed in health systems without clear governance, interpretability, or accountability. Most AI systems are optimized for technical performance metrics rather than for institutional responsibility or patient outcomes, and the gap between laboratory performance and real-world effectiveness remains poorly addressed.
Our Approach
We treat AI as a socio-technical system embedded in institutions, not as a standalone technology. We design for interpretability, robustness, and institutional fit from the start.
What We Build
Public-interest digital twins, federated learning infrastructures, real-world evaluation frameworks, and algorithmic accountability tools.
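To make one of these building blocks concrete: a federated learning infrastructure lets hospitals train a shared model without centralizing patient records. The sketch below shows the core aggregation step (federated averaging); all names and numbers are illustrative, not part of any deployed system.

```python
# Minimal federated averaging (FedAvg) sketch: each site trains locally,
# and only model weights -- never raw patient records -- are shared.

def federated_average(site_weights, site_sizes):
    """Weighted average of per-site model weights, weighted by cohort size."""
    total = sum(site_sizes)
    n_params = len(site_weights[0])
    averaged = []
    for i in range(n_params):
        averaged.append(
            sum(w[i] * n for w, n in zip(site_weights, site_sizes)) / total
        )
    return averaged

# Three hypothetical hospitals report locally trained weights
# for a two-parameter model.
weights = [[0.2, 1.0], [0.4, 1.2], [0.6, 0.8]]
sizes = [100, 300, 600]  # patients per site
print(federated_average(weights, sizes))
```

Weighting by cohort size keeps larger sites from being drowned out while still giving every institution a voice in the shared model; real systems add secure aggregation and differential privacy on top of this step.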
Related Initiatives
BeeMyBlood
Access to safe, compatible blood products is not only a clinical issue; it is an infrastructural and governance problem.
SwissNeuroRehab
Neurorehabilitation pathways are fragmented across institutions, cantons, and disciplines.
ASCertain
Many populations remain statistically invisible in health data and research.
HELIOS Network
Hemoglobinopathy research remains fragmented across disciplines, countries, and data systems.
WAI Think Tank
AI governance is dominated by technical and corporate framings that often exclude critical perspectives and participatory approaches.
Stakeholders
Hospitals, regulators, researchers, and technology partners.
How We Evaluate
We evaluate interpretability, robustness across populations, institutional fit, and long-term effects on care quality and equity.
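One concrete way to check robustness across populations is to compare a model's accuracy per subgroup rather than only in aggregate, since an overall score can hide large gaps between populations. This is a hypothetical sketch with invented subgroup labels and data:

```python
# Per-subgroup robustness check: aggregate accuracy can mask
# disparities between populations. Labels and data are invented.
from collections import defaultdict

def subgroup_accuracy(y_true, y_pred, groups):
    """Accuracy computed separately for each subgroup."""
    hits = defaultdict(int)
    counts = defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        counts[g] += 1
        hits[g] += int(t == p)
    return {g: hits[g] / counts[g] for g in counts}

y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 1, 0, 0, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(subgroup_accuracy(y_true, y_pred, groups))
```

Here the overall accuracy is 0.625, but the per-group view reveals the model performs at 0.75 for group A and only 0.5 for group B, exactly the kind of disparity an aggregate metric would conceal.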
Collaboration
We collaborate with actors willing to govern AI, not just deploy it.