Protected: Call for Applications: Join the ELIAS Virtual Centre
In early December, EurIPS 2025 brought Europe’s AI community together in Copenhagen for a week of intense discussion, exchange, and reflection on the future of artificial intelligence. Against a backdrop of rapid technological acceleration and growing societal concern, ELIAS took part in the conference with a dual presence: engaging with the ecosystem at the Start-Up Village and supporting the organisation of the “Rethinking AI — Efficiency, Frugality, and Sustainability” workshop.
Together, these two strands captured a central tension shaping today’s AI landscape: how to foster innovation and opportunity, while also confronting the environmental, social, and cultural consequences of AI at scale.
From 3–5 December, ELIAS and the ELIAS Alliance hosted a dedicated booth at the EurIPS Start-Up Village, joined by Tristan Ricken from the Hasso Plattner Institute and Aygun Garayeva from the Fondazione Bruno Kessler. Rather than focusing on a single product, the booth emphasized presence, conversation, and connection.
The ELIAS team used the opportunity to introduce the ELIAS Startup Opportunities Platform, a practical bridge between cutting-edge research and entrepreneurial pathways. Conversations ranged from early-stage ideas and research translation to broader questions about how Europe can support responsible AI innovation.
In the fast-paced environment of the Start-Up Village, ELIAS’s presence was less about pitching and more about listening: understanding the needs of startups, the aspirations of young researchers, and the challenges of turning AI research into real-world impact.
If the Start-Up Village highlighted momentum and opportunity, the Rethinking AI workshop, held on 6 December at the University of Copenhagen, offered a space to pause — and ask harder questions.
As AI systems grow in complexity and scale, their environmental and societal impacts are impossible to ignore. The workshop, co-organized by Quentin Bouniot (TUM / Helmholtz Munich), Florence d’Alché-Buc (Télécom Paris), Enzo Tartaglione (Télécom Paris), and Zeynep Akata (TUM / Helmholtz Munich), was built around two complementary pillars:
Sustainability in AI — reducing the ecological footprint of machine learning research and deployment
AI for Sustainability — using AI to address urgent environmental and climate challenges
Speakers at the workshop included Loïc Lannelongue (Cambridge Sustainable Computing Lab), Claire Monteleoni (INRIA), Bernd Ensing and Jan-Willem van de Meent (University of Amsterdam), and Sina Samangooei (CuspAI, https://www.cusp.ai/). Over the course of the day, discussions explored the multifaceted challenge of AI and sustainability, blending technical insight with ethical reflection and practical considerations. Several key themes emerged, offering a comprehensive view of both the promise and the responsibility inherent in AI research.
Efficiency is not enough.
A recurring insight was that energy efficiency alone cannot make AI sustainable. As models, algorithms, and data centers become more sophisticated and energy-conscious, overall demand often grows faster than any individual savings. This phenomenon, known as the rebound effect, was highlighted repeatedly. Participants questioned whether building smaller, faster models genuinely reduces environmental impact, or simply redistributes it across devices, applications, and geographies. The conversation underscored a critical point: sustainability is not purely a matter of technical optimisation; it also requires cultural and behavioural change within the research community.
Rethinking computation.
Unlike many traditional sciences, AI researchers are not physically tethered to their equipment. This flexibility opens opportunities for reducing environmental impact that go beyond code and hardware. Simple, yet powerful choices — such as scheduling compute-intensive tasks during periods of low-carbon electricity or running experiments in regions with cleaner energy grids — can meaningfully cut carbon footprints. Cross-institutional collaboration, speakers noted, can further enable access to greener compute, provided such arrangements are equitable and do not replicate extractive practices, especially in low- and middle-income countries. The message was clear: rethinking when, where, and how computation occurs can deliver measurable sustainability gains without requiring entirely new algorithms.
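The carbon-aware scheduling choice described above can be made concrete with a small sketch. The function below is purely illustrative (the region names, forecast values, and job parameters are hypothetical, not data from the workshop): given hourly carbon-intensity forecasts for several compute regions, it picks the region and start hour that minimize a job's estimated emissions.

```python
# Illustrative sketch: pick the greenest window for a compute job, given
# hypothetical hourly carbon-intensity forecasts (gCO2e/kWh) per region.
from typing import Dict, List, Tuple

def greenest_window(
    forecasts: Dict[str, List[float]],  # region -> hourly intensity forecast
    job_hours: int,                     # contiguous hours the job needs
    power_kw: float,                    # average power draw of the job
) -> Tuple[str, int, float]:
    """Return (region, start_hour, estimated_kgCO2e) minimizing emissions."""
    best = None
    for region, hourly in forecasts.items():
        for start in range(len(hourly) - job_hours + 1):
            window = hourly[start : start + job_hours]
            # energy per hour (kWh) times intensity (g/kWh), summed, in kg
            kg = sum(power_kw * g for g in window) / 1000.0
            if best is None or kg < best[2]:
                best = (region, start, kg)
    if best is None:
        raise ValueError("no window of the requested length fits any forecast")
    return best

# Made-up forecasts in which one grid is much cleaner overnight
forecasts = {
    "eu-north": [120, 90, 60, 55, 70, 110],
    "eu-central": [300, 280, 260, 270, 290, 310],
}
region, start, kg = greenest_window(forecasts, job_hours=3, power_kw=5.0)
print(region, start, round(kg, 3))  # → eu-north 2 0.925
```

A real deployment would pull live intensity forecasts from a data provider rather than hard-coded lists, but the core idea, shifting the same computation in time and space, is exactly this search.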
AI for climate and environmental science.
Beyond the visible “demo-ready” applications, AI is quietly transforming climate research. Participants showcased how AI improves predictions for extreme weather events, helps downscale global climate models to actionable local forecasts, and refines long-term projections, such as sea-level rise decades into the future. These contributions may lack immediate visibility, but their implications for policy, infrastructure planning, and disaster preparedness are profound. Importantly, frugal, task-specific models frequently matched the performance of far larger systems, challenging the assumption that bigger always equates to better. Hybrid approaches — combining AI with physical models and large-scale simulators treated as data sources — were highlighted as a particularly effective strategy for navigating different temporal and spatial scales.
Community as infrastructure.
One of the workshop’s most striking insights was the central role of community. True, scalable impact in AI for sustainability rarely stems from individual papers alone; it emerges from interdisciplinary ecosystems. These ecosystems are composed of machine-learning researchers embedded in climate labs, climate scientists acquiring AI expertise, and collaborative projects that gradually evolve into enduring research centers. Workshop participants emphasised that building these networks does not always require large grants or formal programs; personal connections, mentoring relationships, and informal conversations often lay the foundation for long-term progress. The lesson was clear: community is infrastructure — without it, technical innovation alone cannot translate into lasting societal benefit.
Industry collaboration and accountability.
Applied AI research is frequently guided by the concrete performance requirements of industrial partners. While this orientation provides clarity, relevance, and a sense of accountability, it also introduces potential pitfalls if optimization goals are misaligned with broader sustainability objectives. Speakers repeatedly stressed the importance of transparency: without accurate reporting of energy consumption and environmental costs in research papers, funding proposals, and deployments, meaningful evaluation of AI’s trade-offs is impossible. Responsible AI, they argued, requires aligning technical ambition with ethical and environmental responsibility.
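The transparency the speakers called for can begin with a back-of-the-envelope estimate. The helper below is a hedged sketch (the function name and all figures are illustrative, not a standard proposed at the workshop): it converts quantities researchers can usually measure or look up, GPU-hours, average power draw, data-center PUE, and grid carbon intensity, into a footprint figure that could be reported alongside results.

```python
# Hypothetical reporting helper: estimate a training run's carbon footprint
# from commonly available quantities. All example figures are illustrative.
def training_footprint_kg(
    gpu_hours: float,         # total GPU-hours of the run
    avg_gpu_power_kw: float,  # mean power draw per GPU (kW)
    pue: float,               # data-center Power Usage Effectiveness (>= 1.0)
    grid_gco2_per_kwh: float, # local grid carbon intensity (gCO2e/kWh)
) -> float:
    energy_kwh = gpu_hours * avg_gpu_power_kw * pue  # facility-level energy
    return energy_kwh * grid_gco2_per_kwh / 1000.0   # grams -> kilograms

# e.g. 1,000 GPU-hours at 0.3 kW, PUE 1.2, on a 250 gCO2e/kWh grid
print(round(training_footprint_kg(1000, 0.3, 1.2, 250), 1))  # → 90.0
```

In practice, tools such as CodeCarbon or the Green Algorithms calculator automate and refine this kind of accounting, but even a simple estimate like this makes the environmental cost of an experiment visible in a paper or proposal.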
Towards responsible AI.
Taken together, the workshop reinforced a powerful, overarching message: AI’s potential can only be harnessed responsibly when efficiency, ethics, and community-building advance together. Reducing energy consumption is necessary but insufficient; progress depends equally on how AI is used, how research communities collaborate, and how stakeholders — from universities to industry — define and pursue sustainability goals. By confronting these tensions head-on, the workshop offered a roadmap not just for smarter AI, but for AI that serves society and the planet in lasting, measurable ways.
ELIAS’s participation at EurIPS 2025 reflected its broader mission: fostering AI innovation that is responsible, sustainable, and socially grounded.
At the Start-Up Village, this meant supporting opportunity, entrepreneurship, and dialogue. At the Rethinking AI workshop, it meant creating space for critical reflection — acknowledging tensions, trade-offs, and uncertainties rather than offering simplistic answers.
As the AI community continues to grow, these conversations are no longer optional. The challenge ahead is not just to build more powerful systems, but to decide what they are for, how they are used, and at what cost.
The inaugural AI in Science Summit 2025, held on 3–4 November in Copenhagen under the Danish Presidency of the Council of the EU, marked a defining moment for Europe’s scientific and technological landscape. Co-hosted by the European Commission and the University of Copenhagen, the Summit brought together around 1,000 researchers, innovators, policymakers, and industry leaders to shape a shared vision for Europe’s future in AI-enabled science.
At the heart of this milestone event was the launch of RAISE — the Resource for AI Science in Europe, a flagship initiative designed to pool European talent, data, compute, and funding to accelerate world-class AI research for scientific discovery. Alongside the plenary programme, ELIAS and the ELIAS Alliance led the thematic workshop “Science for AI”, exploring how foundational scientific advances fuel breakthroughs in AI research—and how Europe can build a globally competitive ecosystem grounded in openness, collaboration, and excellence.
Serge Belongie (President of the ELLIS Society; Director, Pioneer Centre for AI) highlighted the role of scientific progress in enabling transformative AI systems, underlining Europe’s unique position to lead globally through cross-border excellence networks such as ELLIS (European Laboratory for Learning and Intelligent Systems), which he described as the “backbone of European AI research.” Contributions from researchers including Zhijing Jin (MPI), Antoine Bosselut (EPFL), and Nicu Sebe (ELIAS / UniTN) showcased frontier advances in AI methods and their scientific underpinnings, from reasoning and language models to multimodality and sustainable AI.
Max Welling (CuspAI) was awarded the inaugural ELIAS Sciencepreneurship Award for his exemplary leadership at the interface of academia, deep-tech entrepreneurship, and materials discovery for sustainable innovation. The Summit also featured several exchanges between the EU Commissioner for Start-ups, Research, and Innovation, Ekaterina Zaharieva, and the ELIAS Consortium Members.
A group of world-leading European AI researchers, represented by Prof. Matthias Bethge (ELIAS Alliance) and Dr. Arnout Devos (ELLIS / ELIAS Node Zurich), met with Ekaterina Zaharieva and Henna Virkkunen, Executive Vice-President of the European Commission for Technological Sovereignty, Security, and Democracy. They discussed the need to increase the speed and scale of EU-wide innovation funding and coordination, so that Europe’s existing cutting-edge AI research excellence can be turned into innovation value for the economy and society.
Moreover, Mikkel Hippe, serial entrepreneur and head of the ELIAS Node Copenhagen, hosted a meeting between Zaharieva and Danish startups. The discussion highlighted how Europe can scale science-based startups, foster deep-tech innovation, and translate research excellence into tangible societal and economic impact.
Panels and discussions throughout the Summit emphasized public AI, the role of RAISE, and how Europe can unlock scientific discovery for societal benefit through shared infrastructures and open research. Nathan Benaich (Air Street Capital), Jessica Montgomery (ai@cam), and Tim Rocktäschel (DeepMind) explored the need for experimentation-friendly environments, accessible compute, vibrant research communities, and mechanisms to break out of current limitations in AI progress.
A powerful keynote by Yoshua Bengio (Mila) offered a sobering reminder: Europe—and the world—are not yet ready to deploy truly trustworthy, human-centric AI at scale. The message was clear: Europe must act fast, invest in foundational research, and build collaborative ecosystems to ensure AI delivers scientific, societal, and economic benefits.
The Summit showcased 79 posters and demos from top European researchers, vibrant exchanges with startups and deep-tech entrepreneurs, and even the premiere of The Best Option, a film exploring the human implications of AI augmentation. Across every session, participants celebrated collaboration, scientific excellence, and innovation with the shared goal of building a trustworthy and sustainable AI future.
With initiatives like RAISE, the networks of ELLIS and ELIAS, and strong engagement from the European Commission, Europe is positioning itself to lead globally in AI for science. The Summit made one thing clear: Europe’s AI community is ready to move fast, at scale, and together, leveraging its scientific strength to accelerate discovery, drive innovation, and create societal impact.
ELIAS Publishes Strategic Research Agenda: Building Europe’s Pathways to Responsible and Sustainable AI
Trento, Italy – October 3, 2025
The ELIAS Consortium announced the publication of its landmark report, Pathways to Responsible and Sustainable AI – Strategic Research Agenda (SRA). Developed under the EU-funded European Lighthouse of AI for Sustainability (ELIAS) initiative, the document sets out a bold, long-term roadmap for AI research grounded in environmental, societal, and ethical values, demonstrating how Europe can lead globally in AI that delivers tangible benefits for people, society, and the planet.
Artificial intelligence is increasingly seen as a cornerstone of Europe’s digital and green transitions. It offers immense potential for addressing global challenges, from climate change and resource efficiency to democratic resilience and economic inclusion, but also raises pressing questions around energy consumption, fairness, privacy, and trust. The ELIAS SRA responds by defining long-term research priorities and actionable pathways that embed sustainability into the entire AI lifecycle, from model design to deployment and governance.
As Project Coordinator Nicu Sebe emphasises, the Strategic Research Agenda is more than a research roadmap—it is a “compass for Europe to lead globally in AI that is open, responsible, and impactful—not just for today’s performance, but for the wellbeing of society and the planet in the future.” By grounding AI in sustainability, trust, and human values, the agenda ensures that European AI research delivers meaningful benefits for both people and the planet.
The SRA defines five interconnected pillars across three core dimensions of Sustainable AI: the planet, society, and the individual. These pillars are supported by two cross-cutting enablers, Fostering Scientific Excellence and Entrepreneurship & Tech Transfer, ensuring ELIAS research delivers practical, scalable, and socially responsible AI innovations.
By translating cutting-edge AI research into tangible societal benefits, through competitions, PhD programs, startups, and the ELIAS Alliance, ELIAS is already shaping a more sustainable and inclusive future. Highlighting this vision, Matthias Bethge, Co-head of the ELIAS Node Tübingen, notes: “Europe has a unique opportunity to demonstrate that AI innovation can strengthen open, pluralistic societies and resilience. Through the ELIAS Alliance, we build on academic excellence and education to inspire top talent to engage in value creation and to help build societies that are more confident, inclusive, and capable of tackling global challenges together.”
Together, the SRA and ISA provide a comprehensive roadmap, ensuring that AI research translates into tangible environmental, social, and economic benefits, while delivering policy-relevant, market-ready, and socially responsible solutions.
Alignment with European Priorities
The agenda is explicitly aligned with the European Green Deal, the Digital Decade, the forthcoming AI Act, and the UN Sustainable Development Goals. By embedding sustainability, trust, and social responsibility into the AI innovation pipeline, ELIAS strengthens Europe’s technological sovereignty and positions the region as a global leader in responsible AI.
The ELIAS Strategic Research Agenda: Pathways to Responsible and Sustainable AI is now available here.
Acknowledgements
The publication of the Strategic Research Agenda reflects the collaboration of the entire ELIAS consortium and its wider community. We thank all ELIAS partners and contributors for their dedication.
ELIAS Pathways to Responsible and Sustainable AI – Strategic Research Agenda (SRA)
The ELLIS PhD program is a key pillar of the ELLIS initiative, whose goal is to foster and educate the best talent in machine learning and related research areas by pairing outstanding students with leading academic and industrial researchers in Europe. The program also offers a variety of networking and training activities, including summer schools and workshops. Each PhD student is co-supervised by two advisors based in different European countries: one ELLIS fellow, scholar, or unit member, and a second ELLIS fellow, scholar, unit member, or ELLIS member. Students conduct an exchange of at least 6 months with the international advisor during their degree. One of the advisors may also come from industry, in which case the student collaborates closely with the industry partner and spends a minimum of 6 months conducting research at the industrial lab. In our recent interdisciplinary track, students are co-supervised by an ELLIS fellow/scholar and a tenured faculty member (if the latter is not an ELLIS fellow/scholar themselves) whose main expertise lies outside machine learning/AI (for instance, biology, law, or the social sciences and humanities). For more information, the specific requirements for each track can be found below or here.
AutoML • Bayesian & Probabilistic Learning • Bioinformatics • Causality • Computational Neuroscience • Computer Graphics • Computer Vision • Deep Learning • Earth & Climate Sciences • Health • Human Behavior, Psychology & Emotion • Human Computer Interaction • Human Robot Interaction • Information Retrieval • Interactive & Online Learning • Interpretability & Fairness • Law & Ethics • Machine Learning Algorithms • Machine Learning Theory • ML & Sustainability • ML in Chemistry & Material Sciences • ML in Finance • ML in Science & Engineering • ML Systems • Multi-agent Systems & Game Theory • Natural Language Processing • Optimization & Meta Learning • Privacy • Quantum & Physics-based ML • Reinforcement Learning & Control • Robotics • Robust & Trustworthy ML • Safety • Security, Synthesis & Verification • Symbolic Machine Learning • Unsupervised Learning • Tech transfer & Entrepreneurship • Social Sciences and Humanities
ELIAS offers an exceptional opportunity to engage in cutting-edge research across various impactful fields. These include Robust and Trustworthy Machine Learning, ensuring the development of secure and resilient AI systems; Machine Learning in Chemistry and Material Sciences, applying AI to uncover new solutions; Interpretability and Fairness, focusing on transparency and equity in AI models; ML and Sustainability, leveraging AI to address global environmental challenges; Earth and Climate Sciences, advancing understanding of climate change; and Law and Ethics, exploring the legal and moral dimensions of AI. This is a unique chance to contribute to critical interdisciplinary research at the forefront of technology.
Interested candidates should apply online through the ELLIS application portal by October 31, 2025, 23:59 AoE. Applicants first need to register on the portal. After registering, applicants will receive their login details for the portal and can submit their application via apply.ellis.eu. Please read our FAQs and webpage before applying, as well as the details below. Only complete applications will be considered.
- October 1, 2025: Application portal opens
- October 31, 2025, 23:59 (AoE): application deadline (firm)
- November/December 2025: review stage
- January/February 2026: interview stage
- Late February/March 2026: decisions
- Program start: there is no common start for the PhD (depends on the advisor/institution)
ELLIS values diversity and seeks to increase the number of women in areas where they are underrepresented. We therefore explicitly encourage women to apply. We are also committed to recruiting more people living with disabilities and strongly encourage them to apply.
Admission to the program is competitive. In a typical round, fewer than 5% of all registered applicants, and between 5% and 10% of eligible applicants, are accepted. Based on previous rounds, we expect about 150 advisors in the ELLIS network to participate in the upcoming round.
Advisor and time-sharing requirements
ELLIS PhDs are co-supervised by one ELLIS fellow/scholar/unit member and one ELLIS fellow/scholar/member, both based in Europe. During the selection process, the main focus will be on finding a match with the primary advisor. A list of available advisors with open positions will be published on the application portal. Only advisors who participate in the current call (i.e., those listed on the portal in the “Advisor List”) are eligible to recruit ELLIS PhDs.
Finding a co-advisor can be done at a later stage, up to 5 months after acceptance to the program. The primary advisor and student decide on a co-advisor together.
Exchange and time-sharing:
As part of the application procedure, you will be able to indicate preferences for specific research areas and potential supervisors who are participating in this year’s current recruiting round. The list of ELLIS advisors who are recruiting and their research areas will be available on the application portal. For questions about eligibility, please see our FAQs. Note that in the current call, ELLIS faculty are looking to recruit new students; if you’re already doing a PhD with an ELLIS advisor and are interested in the ELLIS PhD program, please read this FAQ.
Important note:
Some of the listed advisors on the portal will mention in their profile that their institution requires you to apply in parallel through their official channels (referred to as “Parallel application necessary? Yes.”).
Some institutions do not accept graduate students throughout the year, but instead have strict deadlines for applying to their graduate programs (for instance, December or earlier) that overlap with the call’s timeline. In this case, you should not wait until the ELLIS selection procedure has ended, but apply in parallel through the advisor’s institution. If you do not, or miss the deadline, you might have to wait until the next year to enrol as a PhD student, despite having already received an offer. In the worst case, this might even jeopardise your offer, as the advisor might not be in a position to wait or might lose the funding for this particular PhD position if the deadline is not met.
If unsure, visit the website of the institution or contact your preferred advisor. It’s safest to apply to both programs in parallel to avoid any delays in the starting date of your contract.
The application consists of three parts:
(1) Application form. In the application form, you provide your personal details, indicate your preferences for research fields and advisors, and list relevant degrees and experience.
(2) Documents. You will need to upload the following documents (as PDFs):
(3) References. You will also be asked to provide the contact details of at least two referees (maximum three) who have agreed to submit a reference. You should contact your referees personally before you send a formal request via the portal. Contacting your referees directly beforehand ensures that they are willing to write a strong recommendation and will have time to do so before the specified deadline. Recommendation letters must be requested via the portal before the application deadline on October 31(*). After you’ve requested a reference, the referee will be contacted via the system to submit their recommendation by November 25.
Choose your referees carefully and think about who is best qualified to write you a strong recommendation. Referees must be able to assess your academic performance and research abilities. At least two of the referees should be professionally established at the level of independent investigator, principal scientist, group leader, lecturer or above. A maximum of one reference may come from a postdoc. We recommend that you include the principal investigator who supervised your thesis work. Professional references are accepted, as long as the referee can comment on your academic/research abilities (e.g. referees from industrial research labs). References from PhD students or class mates are not accepted.
(*) Applicants must submit their reference requests via the portal BEFORE the general application deadline on October 31, 2025. After receiving the request, referees have time until November 25, 2025 to submit their recommendation. Note that it will NOT be possible for applicants to request a reference or to send a reminder via the portal after the application deadline. Also, all the other components of the application (application form, documents) need to be completed before October 31, in order to be considered. Applications that are missing the necessary number of reference letters by November 25, 23:59 AoE, are incomplete and are no longer considered in the selection process.
phd@ellis.eu – PhD Coordination Office
The ELLIS PhD Program has received funding from the European Union’s Horizon 2020 research and innovation programme under ELISE Grant Agreement No. 951847 (2020 – 2024), and is continued with funding from the Horizon Europe research and innovation programme under ELIAS grant agreement number 101120237 (2023 – 2027). The program is also expanded by the EU-funded project ELSA under grant agreement number 101070617 (2022 – 2025).