Building Europe’s Pathways to Responsible and Sustainable AI

Press Release

ELIAS Publishes Strategic Research Agenda: Building Europe’s Pathways to Responsible and Sustainable AI

Trento, Italy – October 3, 2025

The ELIAS Consortium announced the publication of its landmark report, Pathways to Responsible and Sustainable AI – Strategic Research Agenda (SRA). Developed under the EU-funded European Lighthouse of AI for Sustainability (ELIAS) initiative, the document sets out a bold, long-term roadmap for AI research grounded in environmental, societal, and ethical values, demonstrating how Europe can lead globally in AI that delivers tangible benefits for people, society, and the planet.

Artificial intelligence is increasingly seen as a cornerstone of Europe’s digital and green transitions. It offers immense potential for addressing global challenges, from climate change and resource efficiency to democratic resilience and economic inclusion, but also raises pressing questions around energy consumption, fairness, privacy, and trust. The ELIAS SRA responds by defining long-term research priorities and actionable pathways that embed sustainability into the entire AI lifecycle, from model design to deployment and governance.

As Project Coordinator Nicu Sebe emphasises, the Strategic Research Agenda is more than a research roadmap—it is a “compass for Europe to lead globally in AI that is open, responsible, and impactful—not just for today’s performance, but for the wellbeing of society and the planet in the future.” By grounding AI in sustainability, trust, and human values, the agenda ensures that European AI research delivers meaningful benefits for both people and the planet.

Five Pillars of the ELIAS SRA

The SRA defines five interconnected pillars across three core dimensions of Sustainable AI: the planet, society, and the individual. These pillars are supported by two cross-cutting enablers, Fostering Scientific Excellence and Entrepreneurship & Tech Transfer, ensuring ELIAS research delivers practical, scalable, and socially responsible AI innovations.

  • AI for a Sustainable Planet – creating hybrid AI models that integrate scientific knowledge to accelerate clean energy, sustainable materials, and climate resilience, while reducing AI’s own carbon footprint.
  • AI for a Sustainable Society – developing systems to safeguard democracy, counter disinformation, support inclusive prosperity, and foster fair coordination of shared resources.
  • Trustworthy AI for Individuals – ensuring AI systems are fair, transparent, privacy-preserving, and attentive to human cognition and diversity of needs.
  • Fostering Scientific Excellence – expanding PhD programs, fellowships, and cross-border collaborations to strengthen Europe’s AI research community and train the next generation of talent.
  • Entrepreneurship & Tech Transfer (Sciencepreneurship) – bridging the gap between research and real-world application through open calls, accelerators, internships, and sciencepreneurship initiatives.

By translating cutting-edge AI research into tangible societal benefits—through competitions, PhD programs, startups, and the ELIAS Alliance—ELIAS is already shaping a more sustainable and inclusive future. Highlighting this vision, Matthias Bethge, Co-head of the ELIAS Node Tübingen, notes: “Europe has a unique opportunity to demonstrate that AI innovation can strengthen open, pluralistic societies and resilience. Through the ELIAS Alliance, we build on academic excellence and education to inspire top talent to engage in value creation and to help build societies that are more confident, inclusive, and capable of tackling global challenges together.”

Together, the SRA and ISA provide a comprehensive roadmap, ensuring that AI research translates into tangible environmental, social, and economic benefits, while delivering policy-relevant, market-ready, and socially responsible solutions.

Alignment with European Priorities

The agenda is explicitly aligned with the European Green Deal, the Digital Decade, the forthcoming AI Act, and the UN Sustainable Development Goals. By embedding sustainability, trust, and social responsibility into the AI innovation pipeline, ELIAS strengthens Europe’s technological sovereignty and positions the region as a global leader in responsible AI.

The ELIAS Strategic Research Agenda: Pathways to Responsible and Sustainable AI is now available here.

Acknowledgements

The publication of the Strategic Research Agenda reflects the collaboration of the entire ELIAS consortium and its wider community. We thank all ELIAS partners and contributors for their dedication.


Contact

Aygun Garayeva, PR Manager, ELIAS

Nicu Sebe, Coordinator, ELIAS

elias-coordination@unitn.it

ELLIS PhD Program: Call for applications 2025


The ELLIS PhD program is a key pillar of the ELLIS initiative, whose goal is to foster and educate the best talent in machine learning and related research areas by pairing outstanding students with leading academic and industrial researchers in Europe. The program also offers a variety of networking and training activities, including summer schools and workshops. Each PhD student is co-supervised by one ELLIS fellow/scholar or unit member and one ELLIS fellow/scholar, unit member, or ELLIS member based in a different European country. Students conduct an exchange of at least 6 months with the international advisor during their degree. One of the advisors may also come from industry, in which case the student will collaborate closely with the industry partner and spend a minimum of 6 months conducting research at the industrial lab. In our recent interdisciplinary track, students are co-supervised by an ELLIS fellow/scholar and a tenured faculty member (if they are not an ELLIS fellow/scholar themselves) whose main expertise lies outside machine learning/AI (for instance, biology, law, or the social sciences and humanities). For more information, the specific requirements for each track can be found below or here.

Research Areas

AutoML • Bayesian & Probabilistic Learning • Bioinformatics • Causality • Computational Neuroscience • Computer Graphics • Computer Vision • Deep Learning • Earth & Climate Sciences • Health • Human Behavior, Psychology & Emotion • Human Computer Interaction • Human Robot Interaction • Information Retrieval • Interactive & Online Learning • Interpretability & Fairness • Law & Ethics • Machine Learning Algorithms • Machine Learning Theory • ML & Sustainability • ML in Chemistry & Material Sciences • ML in Finance • ML in Science & Engineering • ML Systems • Multi-agent Systems & Game Theory • Natural Language Processing • Optimization & Meta Learning • Privacy • Quantum & Physics-based ML • Reinforcement Learning & Control • Robotics • Robust & Trustworthy ML • Safety • Security, Synthesis & Verification • Symbolic Machine Learning • Unsupervised Learning • Tech transfer & Entrepreneurship • Social Sciences and Humanities

ELIAS offers an exceptional opportunity to engage in cutting-edge research across various impactful fields. These include Robust and Trustworthy Machine Learning, ensuring the development of secure and resilient AI systems; Machine Learning in Chemistry and Material Sciences, applying AI to uncover new solutions; Interpretability and Fairness, focusing on transparency and equity in AI models; ML and Sustainability, leveraging AI to address global environmental challenges; Earth and Climate Sciences, advancing understanding of climate change; and Law and Ethics, exploring the legal and moral dimensions of AI. This is a unique chance to contribute to critical interdisciplinary research at the forefront of technology.

How to apply

Interested candidates should apply online through the ELLIS application portal by October 31, 2025, 23:59 AoE. Applicants first need to register on the portal. After registering, applicants will receive their login details for the portal and can submit their application via apply.ellis.eu. Please read our FAQs and webpage before applying, as well as the details below. Only complete applications will be considered.

Important dates:
  • October 1, 2025: Application portal opens
  • October 31, 2025, 23:59 (AoE): Application deadline (firm)
  • November/December 2025: Review stage
  • January/February 2026: Interview stage
  • Late February/March 2026: Decisions
  • Program start: no common start date (depends on the advisor/institution)

Diversity & Inclusion

ELLIS values diversity and seeks to increase the number of women in areas where they are underrepresented. We therefore explicitly encourage women to apply. We are also committed to recruiting more people living with disabilities and strongly encourage them to apply.

Admission to the program is competitive. In a typical round, fewer than 5% of all registered applicants, and between 5% and 10% of eligible applicants, are accepted. Based on previous rounds, we expect about 150 advisors in the ELLIS network to participate in the upcoming round.

Additional information

Advisor and time-sharing requirements

ELLIS PhDs are co-supervised by one ELLIS fellow/scholar/unit member and one ELLIS fellow/scholar/member, both based in Europe. During the selection process, the main focus will be on finding a match with the primary advisor. A list of available advisors with open positions will be published on the application portal. Only advisors participating in the current call (i.e. those listed in the portal’s “Advisor List”) are eligible to recruit ELLIS PhDs.

  • One of the advisors can be a CIFAR Fellow (based in the US) or a Canadian AI Chair holder (based in Canada), as long as the other advisor is an ELLIS Fellow/Scholar based in Europe; in that case, the degree may come from the non-European institution.

Finding a co-advisor can be done at a later stage, up to 5 months after acceptance to the program. The primary advisor and student decide on a co-advisor together.

Exchange and time-sharing:

  • Academic Track: During their appointment, the PhD student must visit the secondary advisor in a different European* country for min. 6 months (partitioning of the time is flexible). *Exception: In the academic track, the co-supervisor can be based outside of Europe, but must in that case be an ELLIS fellow or scholar.
  • Industry Track: The candidate will spend a minimum of 50% of their time at the academic partner institution and a minimum of 6 months with the industry partner. This can be accumulative (e.g. 2 days per week) or consecutive. The industry partner, industrial research lab, and industry advisor must all be based in Europe (regardless of HQ location), but can otherwise be in the same country/city as the academic partner.
  • Interdisciplinary Track: The PhD student will spend a minimum of 50% of their time with their primary advisor and a minimum of 6 months** with the secondary advisor (expert in a field unrelated to machine learning/AI). This can be accumulative (e.g. 2 days per week) or consecutive. The secondary advisor must be based in Europe, but can otherwise be in the same country/city/university as the academic partner. **Exception: If both advisors are located in the same university (different departments), then no 6-month visit is required, but the PhD candidate should meet regularly with their second advisor. Additionally, in that case, the student is then encouraged to spend at least one month abroad during their thesis.
  • The PhD degree must come from a European institution.
Application process

As part of the application procedure, you will be able to indicate preferences for specific research areas and potential supervisors who are participating in this year’s recruiting round. The list of ELLIS advisors who are recruiting and their research areas will be available on the application portal. For questions about eligibility, please see our FAQs. Note that in the current call, ELLIS faculty are looking to recruit new students; if you’re already doing a PhD with an ELLIS advisor and are interested in the ELLIS PhD program, please read this FAQ.

Important note:

Some of the listed advisors on the portal will mention in their profile that their institution requires you to apply in parallel through their official channels (referred to as “Parallel application necessary? Yes.”).

Some institutions do not accept graduate students throughout the year, but have strict deadlines for applying to their graduate programs (for instance, December or earlier) which overlap with the call’s timeline. In this case, you should not wait until the ELLIS selection procedure has ended, but apply in parallel through the advisor’s institution. If you do not, or if you miss the deadline, you might have to wait until the following year to enrol as a PhD student, despite having already received an offer. In the worst case, this might even jeopardise your offer, as the advisor may not be able to wait, or may lose the funding for the position if the deadline is not met.

If unsure, visit the website of the institution or contact your preferred advisor. It’s safest to apply to both programs in parallel to avoid any delays in the starting date of your contract.

The application consists of three parts:

(1) Application form. In the application form, you provide your personal details, indicate your preferences for research fields and advisors and list relevant degrees and experience.

(2) Documents. You will need to upload the following documents (as PDFs):

  • A two-page motivational letter in which you (1) explain why you want to earn a PhD within the ELLIS network and (2) include a research statement describing past research projects and interests as well as the future direction of your research. (Optional: In addition to (1) and (2), you may also mention specific advisors you are interested in working with.)
  • Your current CV that details your educational background, work experience, full citations of any publications you may have, any research presentations you have given, and any awards you have received.
  • Unofficial transcripts of all of your university studies (BSc, MSc), as well as a translation into English.
  • Optionally, additional documents such as a thesis, published paper, or project portfolio, (or parts thereof) as a single PDF (<20 MB).

(3) References. You will also be asked to provide the contact details of min. 2 referees (max. 3) who have agreed to submit a reference. You should contact your referees personally before you send a formal request via the portal. Contacting your referees directly beforehand ensures that they are willing to write a strong recommendation and will have the time to do so before the specified deadline. Recommendation letters must be requested via the portal before the application deadline on October 31(*). After you’ve requested a reference, the referee will be contacted via the system to submit their recommendation by November 25.

Choose your referees carefully and think about who is best qualified to write you a strong recommendation. Referees must be able to assess your academic performance and research abilities. At least two of the referees should be professionally established at the level of independent investigator, principal scientist, group leader, lecturer or above. A maximum of one reference may come from a postdoc. We recommend that you include the principal investigator who supervised your thesis work. Professional references are accepted, as long as the referee can comment on your academic/research abilities (e.g. referees from industrial research labs). References from PhD students or class mates are not accepted.

(*) Applicants must submit their reference requests via the portal BEFORE the general application deadline on October 31, 2025. After receiving the request, referees have time until November 25, 2025 to submit their recommendation. Note that it will NOT be possible for applicants to request a reference or to send a reminder via the portal after the application deadline. Also, all the other components of the application (application form, documents) need to be completed before October 31, in order to be considered. Applications that are missing the necessary number of reference letters by November 25, 23:59 AoE, are incomplete and are no longer considered in the selection process.

Contact

phd@ellis.eu – PhD Coordination Office

 

The ELLIS PhD Program has received funding from the European Union’s Horizon 2020 research and innovation programme under ELISE Grant Agreement No. 951847 (2020 – 2024), and is continued with funding from the Horizon Europe research and innovation programme under ELIAS grant agreement number 101120237 (2023 – 2027). The program is also expanded by the EU-funded project ELSA under grant agreement number 101070617 (2022 – 2025).

AI-Based Modeling for Energy-Efficient Buildings Challenge

AI-Based Modeling for Energy-Efficient Buildings Challenge

Challenge Details

About the Challenge

The challenge focuses on developing AI-based load prediction for chilled water plants in large buildings to:

  • Use machine learning to accurately predict HVAC (Heating, Ventilation and Air Conditioning) system loads, in particular cooling demand
  • Bridge the gap between academic AI and real-world deployment
  • Promote transparency, generalisability, and feasibility of solutions for real building operations

💡 The load prediction model can serve as a basis for future building-control optimisation methods. Following this load-prediction competition, we are planning a second competition on improving the current building control strategy, building on the present task. Stay tuned!
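
To make the prediction task concrete, here is a minimal sketch of a load-prediction baseline: an ordinary-least-squares fit of cooling load against outdoor temperature. The numbers are invented for illustration; this is not the official challenge setup, which uses the actual chilled-water plant dataset provided on Kaggle.

```python
def fit_ols(xs, ys):
    """Return (intercept, slope) of the least-squares line y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# Hypothetical historical readings: (outdoor temperature °C, cooling load kW)
history = [(22, 180), (25, 240), (28, 310), (31, 390), (34, 470)]
temps = [t for t, _ in history]
loads = [l for _, l in history]

a, b = fit_ols(temps, loads)
forecast = a + b * 30  # predicted cooling load at 30 °C
```

Competitive entries would of course go well beyond a single feature, bringing in occupancy, humidity, time of day, and lagged loads, but the input/output shape of the task is the same.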

Key Dates
  • Registration starts: September 1, 2025
  • Submission deadline: November 16, 2025
Dataset & Resources
Expected Outcomes
Participants will:
  • Develop load prediction models
  • Gain visibility through public sharing of top solutions and a final workshop
  • Have opportunities for publication and recognition within the ELIAS project network
How to Participate

Where: Kaggle

Who: Open to individuals and teams from academia, startups, and industry

Steps:

  1. Register on Kaggle and accept the rules
  2. Access the datasets and baseline tools
  3. Submit your results in the required format
Partners and Support
  • ELIAS project consortium members
  • Bosch Corporate Research and Global Real Estate units
  • Universities and Research Partners
  • Kaggle (as challenge platform)
FAQ
    • Q: Who is eligible to participate?
      • A: Anyone, including students, professionals, and researchers, can register on Kaggle and join.
    • Q: Can teams join?
      • A: Yes, team participation is encouraged.
    • Q: What tools can be used?
      • A: Any tools/languages that support reproducible machine learning.
    • Q: Is the competition free?
      • A: Yes, participation is free of charge.
Exploring the Future of Foundation Models: Key Insights from the TDW on Foundation Models

On 10 July 2025, the ELIAS, ELLIOT, and ELSA projects co-organised a Theme Development Workshop (TDW) focused on Foundation Models in Thessaloniki, Greece. The event was hosted by the Information Technologies Institute of the Centre for Research and Technology Hellas (CERTH). The hybrid-format workshop brought together over 90 in-person participants and more than 120 online attendees, including AI researchers, industry professionals, and students. The four-hour event featured three thematic sessions, each spotlighting cutting-edge research, real-world applications, and critical discussions around the development and deployment of foundation models.

While the global narrative around Foundation Models—powerful, resource-intensive AI systems such as GPT-4 and DALL·E—has largely centred on technology giants, this workshop highlighted Europe’s growing momentum in developing open, community-driven alternatives, with notable advances in training, applications, and security.

A European Perspective on Foundation Models

The event commenced with remarks from Elisa Ricci (University of Trento & Fondazione Bruno Kessler), who outlined the workshop’s objectives: to explore the technical, societal, and application-related aspects of Foundation Models—large-scale AI systems that generalise across tasks and domains. Ricci emphasised the importance of European collaboration, public supercomputing infrastructure, open science, and inclusive design in shaping the future of these models.

European grassroots efforts are thriving. Projects such as LAION and EleutherAI demonstrate that large-scale, open-source datasets and models are not only possible but already successful. As highlighted in the workshop, tools such as OpenCLIP (an open alternative to OpenAI’s CLIP) are widely downloaded and used globally, demonstrating the impact of strong ideas and collaboration.

Session 1: Training Foundation Models

Moderated by Elisa Ricci, this session explored scaling laws, efficiency, and data strategies in training large AI models.

Jenia Jitsev (Jülich Supercomputing Centre & LAION) delivered a keynote on scaling laws and generalisation in open foundation models. He emphasised the importance of reproducible scaling laws for predicting performance, comparing learning procedures, and systematically searching for learning with stronger generalisation and transferability, highlighting work on OpenCLIP, Re-LAION, openMaMMUT, and OpenThinker. A key message was that, with access to vast public supercomputing resources, strong ideas, and a collaborative, transparent open-source spirit, academics can go surprisingly far and approach the so-called hyperscaler closed labs in industry – a compelling case for further increasing support for open, academic AI efforts.
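
As a rough illustration of what fitting a scaling law involves, the sketch below fits a power law, loss = a · N^(−b), to hypothetical (scale, loss) pairs by least squares in log-log space. The data points are invented for demonstration and are not drawn from the talk; real scaling-law studies fit many training runs across model and data scales.

```python
import math

# Hypothetical (parameter count, validation loss) pairs for illustration.
points = [(1e6, 4.2), (1e7, 3.1), (1e8, 2.3), (1e9, 1.7)]

def fit_power_law(points):
    """Fit loss = a * N**(-b) via linear least squares in log-log space."""
    xs = [math.log(n) for n, _ in points]
    ys = [math.log(loss) for _, loss in points]
    n = len(points)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    b = -slope                     # power-law exponent
    a = math.exp(my - slope * mx)  # prefactor
    return a, b

a, b = fit_power_law(points)
# Extrapolate the expected loss at a scale one order of magnitude larger
predicted = a * (1e10) ** (-b)
```

The same log-log regression underlies predictions of how much compute or data a larger run would need to reach a target loss, which is why reproducibility of the fitted exponents matters so much for planning open training efforts.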

Frank Hutter (Prior Labs & ELLIS Institute Tübingen) introduced recent innovations in Tabular Foundation Models, a domain often overlooked in mainstream deep learning. He showcased TabPFN and TabPFN v2, which outperform traditional machine learning approaches such as gradient-boosted trees in sectors like finance and healthcare. Hutter demonstrated how synthetic data generation can enable powerful pretraining, proving that domain-specific foundation models have broad potential beyond NLP and vision.

Cees Snoek (University of Amsterdam) presented NeoBabel, a multilingual foundation model for image generation that natively understands six languages: English, Chinese, Dutch, French, Hindi, and Persian. NeoBabel tackles a major challenge: multilingual multimodal data is scarce. The team enhanced an English-only dataset using LLMs for translation and detailed recaptioning, thereby bootstrapping a multilingual dataset from scratch. The model is fully open-source, offering not only checkpoints but also a curated dataset and an extensible toolkit for reproducible research.

📄 NeoBabel paper: arXiv link

🔗 View Slides

Session 2: Ethical & Safe Foundation Models

Chaired by Lorenzo Baraldi (University of Modena and Reggio Emilia), the second session explored the security and societal risks associated with foundation models.

Mario Fritz (CISPA Helmholtz Center for Information Security) delivered a comprehensive talk on the security and safety landscape, covering issues such as prompt injection attacks, trust in code-generation models, and the dynamics of agent-based negotiation and collaboration. He explored how foundation models can amplify both risks and opportunities, stressing the need for transparent alignment strategies and automated red teaming.

Session 3: Applications of Foundation Models

Moderated by Dimosthenis Karatzas (Computer Vision Centre – CVC-CERCA & Autonomous University of Barcelona), this session showcased real-world applications of foundation models in robotics, video understanding, and industrial domains.

 

Marc Pollefeys (Microsoft & ETH Zurich) provided a comprehensive overview of Spatial AI foundation models in 3D environments. His talk ranged from robot manipulation to world models for autonomous driving, highlighting progress on the recent GEM models and their application in real-world robotics and perception.

Matthieu Cord (Sorbonne University & Valeo) presented innovations in generative video pretraining through the VaVIM–VaVAM models. By reducing token counts and shifting from discrete to continuous tokens, the team achieved significant gains in both training speed and performance, pushing the boundaries of video foundation models for automotive and control systems.

Dario Garcia-Gasulla (Barcelona Supercomputing Centre) concluded the session with an in-depth look at training and evaluating LLMs using European HPC resources. He addressed post-training techniques, emphasising the importance of robust evaluation benchmarks, especially in regulated domains such as healthcare, chip design, and secure code generation. Garcia-Gasulla also highlighted the increasing role of European AI Factories and Gigafactories in providing accessible compute and sovereign infrastructure for open AI development.

Building a Collaborative Future

Interactive Q&A segments followed each talk, with discussions centred on the themes of each session. The event concluded with a wrap-up session led by Ricci, Karatzas, and Baraldi, who emphasised the importance of cross-sectoral collaboration to ensure that foundation models are developed in ways that are safe, inclusive, and aligned with European values.

Key takeaways from participants included:

  • Support for open, multilingual, and multimodal models
  • Investment in European computing infrastructure to level the playing field
  • Stronger integration of ethics, regulation, and societal perspectives in AI development
  • The value of synthetic data and innovative training techniques to democratise access

The workshop fostered vibrant discussion and provided valuable networking opportunities, culminating in a light lunch where participants continued to exchange ideas informally.

Watch the event recording here!

Looking Ahead

This TDW marked the second in a series of thematic workshops organised by ELIAS, in collaboration with the ELLIOT and ELSA networks. It sent a clear message that Europe is not merely observing the foundation model revolution—it is actively shaping it.

Despite ongoing challenges in data availability and computational resources, Europe’s commitment to open models, responsible design, and regional relevance is yielding tangible results. Initiatives like LAION, NeoBabel, OpenCLIP, and the AI Factories illustrate that a distributed, democratic AI future is not only possible—it is already underway.

This workshop was jointly organised by the ELIAS, ELLIOT & ELSA projects.

European Large Open Multi-Modal Foundation Models For Robust Generalization On Arbitrary Data Streams – ELLIOT (GA No. 101214398) aims to enhance general-purpose AI by developing large-scale, open multimodal foundation models with strong spatio-temporal understanding. Led by top European academic and industrial labs from the ELLIS and LAION communities, the project targets underrepresented time-relevant modalities such as industrial time series, remote sensing, and health data. Both real and synthetic data will be used, sourced from consortium partners and European Data Spaces, with synthetic data generated using current and novel generative AI methods. European HPC resources are integrated to support large-scale model training.

European Lighthouse on Secure and Safe AI – ELSA (GA No. 101070617) is a growing network of excellence that spearheads efforts in foundational safe and secure AI methodology research. ELSA’s founding members include European experts in all aspects of safe and secure AI, with particular focus on technical robustness, privacy preserving techniques and human agency and oversight. In addition, ELSA brings on board research and industry experts in six different sectors that are key application areas of safe and secure AI. ELSA builds on and extends the internationally recognised and excellently positioned ELLIS (European Laboratory for Learning and Intelligent Systems) network of excellence. 

AI Launchpad Batch #2

Centralised Application Process

Say goodbye to the hassle of applying to multiple incubators. With AI Launchpad, you can apply through our central portal and gain access to a network of premier AI startup ecosystems.

A Network of AI Experts and Entrepreneurs

Leverage a pan-European network of AI experts and entrepreneurs. Access centralised learning from ten hubs, bridging the gap between science and entrepreneurship.

Immersion and Expansion

Participate in either the Spring or Fall season by joining a curated 1–2 week visit to a leading European AI hub. The program is tailored to your market goals and maturity, connecting you with investors, corporate partners, and key ecosystem leaders. To maximise impact, you’ll also take part in targeted sessions before and after your visit.

Batch #2

Are you eligible?

Teams of 2-4 entrepreneurial students/researchers.

Completing higher education at a European University.

Aiming to develop AI technologies into marketable products/services.

Goal: Launch a VC-fundable start-up with European/global impact.

Our Programme

Visiting Hub Program

This two-week immersive program is designed specifically for AI startups looking to navigate and strategise their entry into the European market. The initiative offers participants a deep dive into the unique challenges and opportunities of the European landscape, facilitating tailored growth and expansion strategies.

Market Exploration
Network Building

Understand Market Dynamics

Analyse the current trends and economic conditions in the German and broader European markets.

Engage with Thought Leaders

Participate in discussions with leading thinkers in the AI and tech industries.

Identify Sector Opportunities

Pinpoint sectors with high growth potential and demand for your AI solutions.

Form Strategic Alliances

Explore opportunities for forming alliances with established businesses that can offer complementary strengths.

Assess Competitive Landscape

Evaluate local competition to strategize positioning of your AI startup.

Cultivate Relationships with Local Entrepreneurs

Forge connections with local startups and entrepreneurs for cross-cultural insights and collaboration.

Recognise Regulatory Challenges

Learn about specific regulatory requirements and hurdles in Europe that could impact market entry.

Initiate Investor Dialogues

Meet potential investors to discuss funding possibilities and gain financial insights.

 

The local accelerators

Participants can apply to one of our 10 local hubs to be part of a 1-2 week visiting hub program, allowing them to fully immerse themselves in a vibrant ecosystem. This period is crucial for deep engagement with the community and iterative refinement of innovative projects.

Choose between one of ten local accelerator hubs, each offering a unique and tailored programme for your start-up:

INiTS | Vienna, Austria
DTU Skylab | Lyngby, Denmark
Campus Founders | Heilbronn, Germany
ETH AI Center & Talent Kick | Zurich, Switzerland
KTH Innovation | Stockholm, Sweden
XPLORE Venture Creator | Munich, Germany
HPI ELIAS Node Potsdam | Potsdam, Germany
CVC ELIAS Node Barcelona | Barcelona, Spain
Yes!Delft | Delft, Netherlands
Cyber Valley ELIAS Node Tübingen | Tübingen, Germany

How does it work?

Nomination

Startups are referred by local facilitators. No open calls, just quality.

Onboarding

Join 2–3 online workshops to align goals, prepare assets, and connect early.

Immersion

Spend 2 weeks in a new European hub, engage in curated meetings, events, and VC introductions.

What to Submit

  1. Your Startup Overview — Share a clear and compelling pitch deck or one-pager that highlights your vision, product, and market opportunity.
  2. Team Introduction Video (1 minute) — Introduce the people behind the startup. Keep it simple, genuine, and authentic. Record your video, upload it to Loom or YouTube, and include the link in your application.
  3. Nomination Form — Once you have your materials ready, complete the application form and submit your nomination.

Deadline: August 31, 2025 at 23:59 CET

We can’t wait to discover your story — and learn how your AI venture is shaping the future.

More info at: www.ai-launchpad.eu | For enquiries regarding nominations, programmes, or collaborations, please contact: