The rapid evolution of Artificial Intelligence continues to reshape our digital and physical landscapes. However, as AI systems grow in complexity and scale, so too does the need for rigorous frameworks to ensure they remain fair, secure, and sustainable.
Against this backdrop, the TDW Trustworthy AI workshop brought together leading researchers and industry experts from the ELIAS, ELLIOT, and ENFIELD consortiums and their hosting institutes to unpack the critical challenges and emerging solutions in building AI we can genuinely rely on. Structured around three core pillars—Frugality, Fairness, and Trust—the workshop provided a comprehensive look into the future of responsible AI.
Here are the key takeaways from the speaker sessions:
Session 1: Frugality
The opening session focused on resource-efficient AI and minimising the environmental footprint of large-scale deployments.
Victor Charpenay (Associate Professor, École des Mines de Saint-Étienne / ENFIELD) kicked off the discussion by exploring the physical and systemic footprints of technology through Life Cycle Assessment (LCA). He highlighted a critical gap in current sustainability assessments: research is heavily biased towards measuring carbon emissions and evaluating “simple” AI models, largely overlooking the exponentially higher computational costs of modern generative AI. A vital concept he introduced was Technological Symbioses—instances where two technologies reinforce one another, amplifying their mutual impact. As AI integrates with sectors like concrete manufacturing, biochar synthesis, and data centre operations, understanding these symbiotic relationships is crucial for accurately assessing the higher-order environmental impacts of AI deployment.
Enzo Tartaglione (Full Professor, Télécom Paris, Institut Polytechnique de Paris / ELIAS) presented a fresh perspective on making deep neural networks more efficient. Rather than focusing on traditional pruning — removing individual weights or neurons — he argued that what actually matters for real-world speed gains is reducing the number of layers, since GPUs process layers sequentially regardless of their width. His talk introduced layer collapse as a principled compression tool: when a layer’s outputs become near-constant across inputs, that layer can simply be removed. He walked through a series of methods his group has developed to identify and induce this phenomenon, from entropy-based detection to regularisation-driven approaches, with results demonstrated across standard architectures and, more recently, large generative models.
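To make the collapse criterion concrete, here is a minimal sketch in PyTorch (an illustration under stated assumptions, not Tartaglione's published method): it flags layers whose outputs are near-constant across a batch, using per-feature output variation as a simple stand-in for the entropy-based detection described in the talk. The threshold value and the restriction to linear layers are assumptions made for the example.

```python
import torch
import torch.nn as nn

def find_collapsed_layers(model, batch, threshold=1e-4):
    """Flag layers whose outputs are near-constant across a batch.

    A near-constant output means the layer adds almost no information,
    making it a candidate for removal ("layer collapse").
    """
    stats, hooks = {}, []

    def make_hook(name):
        def hook(module, inputs, output):
            # Mean per-feature standard deviation across the batch:
            # (near) zero means every input maps to the same output.
            stats[name] = output.detach().float().std(dim=0).mean().item()
        return hook

    for name, module in model.named_modules():
        if isinstance(module, nn.Linear):  # illustrative: linear layers only
            hooks.append(module.register_forward_hook(make_hook(name)))

    with torch.no_grad():
        model(batch)
    for h in hooks:
        h.remove()

    return [name for name, s in stats.items() if s < threshold]

# Toy demo: force the middle layer into a constant map and detect it.
mlp = nn.Sequential(nn.Linear(8, 16), nn.ReLU(),
                    nn.Linear(16, 16), nn.ReLU(),
                    nn.Linear(16, 2))
with torch.no_grad():
    mlp[2].weight.zero_()  # its output is now just the bias vector
print(find_collapsed_layers(mlp, torch.randn(32, 8)))  # -> ['2', '4']
```

Note that once layer 2 collapses to its bias, every layer downstream also produces constant outputs, which is why the sketch flags both: removing a collapsed layer can cascade into further compression opportunities.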
Session 2: Fairness
The second session addressed algorithmic bias, socio-technical alignment, and how to ensure AI systems avoid discrimination in real-world applications.
Ruta Binkyte-Sadauskiene (Researcher, CISPA / ELLIOT) emphasised the urgent need to rethink fairness as AI evolves from basic prediction models to Large Language Models (LLMs) and Agentic AI.
Charlotte Laclau (Associate Professor, Télécom Paris, IP Paris / ELIAS) began by emphasising that fairness cannot be treated as a simple “plug-in” constraint that is easily transferred across different learning settings. She argued that a meaningful notion of fairness must be defined relative to three core elements: the object being predicted, the intervention point, and the data-generating system. Highlighting challenges in “Fair Link Prediction,” she noted that natural human mechanisms like homophily—the tendency to associate with similar individuals—can create systemic segregation in online networks. Consequently, mitigating these biases requires topological awareness, such as evaluating “k-hop fairness,” rather than relying on surface-level metrics. Laclau noted that the critical question for developers is no longer “Which metric is best?” but rather “Which notion matches the system and the specific harm we aim to prevent?”
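To illustrate what topological awareness might involve in practice, the sketch below computes a simple k-hop homophily score with networkx: for each node, the fraction of its k-hop neighbourhood that shares its group label. This is a hypothetical proxy for illustration, not the specific “k-hop fairness” evaluation presented in the talk.

```python
import networkx as nx

def k_hop_homophily(G, attrs, k=2):
    """Average share of same-group nodes in each node's k-hop neighbourhood.

    Scores near 1.0 indicate strong segregation at distance k; scores near
    the global group proportions indicate a well-mixed network.
    """
    scores = []
    for v in G.nodes:
        # All nodes reachable within k hops, excluding v itself.
        hood = set(nx.single_source_shortest_path_length(G, v, cutoff=k)) - {v}
        if not hood:
            continue  # isolated node: nothing to score
        same = sum(1 for u in hood if attrs[u] == attrs[v])
        scores.append(same / len(hood))
    return sum(scores) / len(scores)

# Zachary's karate club: its two factions ("clubs") serve as the groups.
G = nx.karate_club_graph()
attrs = {v: G.nodes[v]["club"] for v in G}
for k in (1, 2, 3):
    print(k, round(k_hop_homophily(G, attrs, k), 3))
```

Comparing the score across values of k shows why surface-level (1-hop) metrics can mislead: a network may look well mixed locally while remaining segregated at larger topological distances.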
Session 3: Trust
The final session centred on enhancing the robustness, security, and integrity of AI, particularly in high-stakes cloud environments and human-AI interactions.
Sebastian Heil (Senior Researcher, Chemnitz University of Technology / ENFIELD) explored the crucial dimension of Human Perception of AI Trustworthiness. Drawing on a longitudinal study of UK news media spanning 2013 to 2024, Heil illustrated that public discourse around AI is maturing. The conversation is actively shifting away from the blind celebration of technical achievements towards critical expectations of transparency and accountability. To measure how users actually build trust with AI, he detailed ongoing vignette-based survey research across critical domains. This research isolates specific system characteristics—such as the presence of active human oversight or technical fallback mechanisms—to understand exactly what drives user confidence. Quoting Kevin Kelly, Heil emphasised that trust is “earned in drops and lost in buckets,” underscoring the need for systems that demonstrably align with the core requirements of the EU’s Ethics Guidelines for Trustworthy AI.
Georgios Spathoulas (NTNU / ENFIELD) closed the technical sessions by discussing the rise of AI as a Service (AIaaS) and the resulting “black box” problem, where users cannot see or verify the models behind API-based AI systems, creating a major trust gap.
He outlined three key vulnerabilities: possible hidden model substitution by providers, lack of transparency in training data integrity, and performance drift from continuous fine-tuning.
To address these issues, Spathoulas called for a paradigm shift from blind trust to cryptographic verification. He detailed a robust blend of technical safeguards—such as cryptographic model provenance, digital watermarking, and Trusted Execution Environments (TEEs) like Intel SGX and ARM TrustZone—as well as organisational governance frameworks like transparency policies and independent auditing. Notably, he highlighted Zero-Knowledge Proofs (ZKPs) as a revolutionary way to verify computational integrity, allowing providers to prove a specific model was used without exposing proprietary parameters or sensitive client data.
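As a flavour of what cryptographic model provenance can look like, here is a simplified sketch (an illustration under assumptions, not the specific scheme presented, and omitting TEEs and ZKPs entirely): the provider signs a fingerprint of the exact weights being served, and clients verify it against the fingerprint published for the advertised model.

```python
import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

def model_fingerprint(weight_blobs):
    """Deterministic SHA-256 digest over a model's serialised weights."""
    h = hashlib.sha256()
    for blob in weight_blobs:
        h.update(blob)
    return h.digest()

# Provider side: sign the fingerprint of the model actually being served.
provider_key = Ed25519PrivateKey.generate()
served = [b"layer-0 weights ...", b"layer-1 weights ..."]  # stand-in bytes
attestation = provider_key.sign(model_fingerprint(served))

# Client side: verify the attestation against the fingerprint the provider
# publishes for its advertised model. A silent model swap changes the
# fingerprint, so verification fails.
public_key = provider_key.public_key()  # distributed out of band in practice
advertised_fingerprint = model_fingerprint(served)
try:
    public_key.verify(attestation, advertised_fingerprint)
    print("attestation valid: the advertised model served this request")
except InvalidSignature:
    print("mismatch: a different model may have been substituted")
```

The signature binds the provider to a specific set of weights; detecting substitution then reduces to comparing two hashes rather than trusting the provider's word, which is the essence of the shift from blind trust to verification.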
The Path Forward
The TDW Trustworthy AI event made one thing abundantly clear: building reliable AI is a multidisciplinary challenge requiring immediate, coordinated action. The co-organisation of this event by the ELIAS, ELLIOT, and ENFIELD projects—supported by the hosting institutes in Paris—exemplifies exactly the kind of cross-institutional, collaborative effort needed to drive this field forward. From contextualising fairness and cryptographically securing cloud models to measuring the psychological foundations of user trust and the physical footprint of AI systems, the ecosystem is recognising that integrity and frugality are no longer optional. They are the fundamental prerequisites for the future of AI in Europe and beyond.
Watch the event recording here!
This TDW marked the third in a series of thematic workshops organised by ELIAS, in collaboration with the ELLIOT and ENFIELD networks.
Check out the previous editions: Theme Development Workshops
ELLIOT – European Large Open Multi-Modal Foundation Models For Robust Generalization On Arbitrary Data Streams (GA No. 101214398) aims to develop the next generation of open Multimodal Generalist Foundation Models (MGFMs): AI systems designed to learn general knowledge and patterns from massive amounts of data of various types — from videos, images, and text to sensor signals, industrial time series, and satellite feeds — and efficiently transfer the generic knowledge learned in a generalist manner to a wide variety of downstream tasks. Unlike current foundation models, which face significant challenges in terms of generalisation capabilities and support for multimodal data, ELLIOT’s models will be capable of robust generalisation across conditions not seen during training, coping well with dynamic, noisy, and temporally evolving multimodal data streams. Real and synthetic data will be leveraged for training MGFMs and for further adapting them to specific downstream tasks in domains like media, earth observation, robot perception, mobility, computer engineering and workflow automation. European HPC infrastructure is directly included in the consortium to ensure the availability of the necessary computing resources. www.elliot-ai.eu
ENFIELD – European Lighthouse to Manifest Trustworthy and Green AI (GA No. 101120657) aims to advance adaptive, green, human-centric and trustworthy AI by establishing a European Centre of Excellence. With a consortium of 30 partners from 18 countries—covering academia, industry, SMEs and the public sector—the project targets key domains such as healthcare, energy, manufacturing and space. ENFIELD will deliver over 75 AI solutions, around 180 high-impact publications, and strategic roadmaps, supported by extensive outreach to foster responsible AI adoption across Europe. www.enfield-project.eu
