Europe's AI leadership: key priorities

Artificial intelligence (AI) is poised to transform every aspect of society and industry: how we work, transact, and consume.

A buzzword dominating today’s headlines and the conversations of technology experts and laypeople alike, AI is driving unprecedented progress in science and promises a much-needed boost in labour productivity. But it also raises new ethical concerns and unique risks to financial stability.

In this blog we suggest five key priorities for Europe on its path to AI leadership.

We draw in particular on the recent conference hosted by the European Stability Mechanism (ESM) on the impact of artificial intelligence on economies, capital markets, and organisations, which gathered diverse insights from public and private sector practitioners, experts, regulators, and academics.

1. AI may impact financial stability

Current estimates of AI's impact on productivity and economic growth vary widely, but there is general agreement that its benefits will be unevenly distributed across countries, industry sectors, and societies. The AI value chain—from raw materials for semiconductors to data availability, computing power, and investment capacity—is concentrated mainly in the United States (US) and China, posing risks of dependency and instability for Europe. Unequal technological progress in AI could widen the digital divide and drive economic fragmentation.

AI also introduces new financial stability risks. As autonomous AI decision-making grows across all sectors, crises could unfold more quickly and on a larger scale. Traditional methods of managing financial stress may not be effective in an AI-driven economy. Additionally, because each financial crisis is unique, AI cannot reliably learn from past data, risking "hallucinations" and incorrect outcomes when crucial data is missing from its training datasets.

To reduce these newly emerging AI-related risks, Europe must proactively rethink its crisis management mechanisms and ensure preparedness by both the public and private sectors.

2. Responsible AI is an imperative

To harness AI's benefits and ensure its ethical and responsible use, global regulation and coordination are essential, while still allowing room for innovation. The hard questions about AI risks must be addressed today, not tomorrow.

The definition of responsible AI usage varies across jurisdictions. Europe aims to regulate AI by defining acceptable practices aligned with European values through the European Union (EU) AI Act, which adopts a risk-based approach: prohibiting harmful AI usage and distinguishing between high-, limited-, and minimal-risk applications. The objective of this approach is to safeguard individuals and society from the potential harms of AI while harnessing the benefits of responsible, trustworthy, and accountable AI.[1]

AI can be seen as both a new technology manageable with existing regulations[2] and a powerful enabler requiring further regulation. However, excessive regulation can stifle innovation, so a balance is necessary. An EU-wide understanding of how data and AI applications are used should be built before additional regulations are adopted.

Global standard-setters like the Organisation for Economic Co-operation and Development (OECD), the Basel Committee, and the International Organisation of Securities Commissions (IOSCO) should play a key role in ensuring a level playing field and finding the optimal trade-off between regulation and innovation.

3. Interdisciplinary education and workforce reskilling are top priorities

Europe must continue to invest in research and education to keep pace with AI developments. Interdisciplinarity should be at the forefront of European universities’ curricula if Europe is to remain a strong pipeline of AI talent.[3]

Science, technology, engineering, and math (STEM) education should translate into STEM jobs in Europe, to avoid a brain drain to other parts of the world. Currently, Europe holds only 2% of global AI patents.[4]

But change is not coming in the next decade; it is already upon us today. It is therefore not enough to educate the next generation; the current workforce also needs to be upskilled: AI will not replace radiologists, but radiologists who use AI will replace those who don't. Organisations need to focus on training and on building multidisciplinary teams to enable the effective deployment and use of AI. Executive- and board-level management should also be trained to understand the potential and risks of AI to enable their organisations to truly reap its benefits.

4. Europe’s startups and scale-ups need access to well-developed European capital markets

Europe’s large companies are not far behind their US counterparts in AI adoption. European AI research capabilities also compare well, with emerging contenders in AI services, applications, and models. However, Europe has few leading AI companies due to insufficient private investment.[5] Venture capital investment in AI in the US is around seven times larger than in Europe, and in China it is about double.[6]

To stay competitive, Europe must create an environment where AI companies can thrive with access to well-developed capital markets supporting large funding rounds. Europe should become the preferred place for European AI startups to raise capital, scale up, and pursue public listing on European stock exchanges.

The lack of an integrated market for capital hinders adequate investment in AI startups and scale-ups. In addition, European investors need to adopt a long-term view of AI ventures and consider higher valuations reflecting the expected returns on AI over a longer horizon. Otherwise, efforts in AI-related education and public support for research and innovation will not be sufficient to keep AI entrepreneurs in Europe.

5. Solving the trilemma of European sovereignty protection, ESG, and technological advancement

The absence of European AI champions makes it challenging for the European public sector to balance AI-powered modernisation with technological and data sovereignty.

Environmental, social, and governance (ESG) considerations add another layer of complexity to the equation. AI's high energy consumption, exemplified by tech companies' pursuit of nuclear energy, can be mitigated by efficient data centres and smaller, focused AI models. AI also poses risks to human rights through its possible use for surveillance, and threats to democracy through fake news and deepfakes.

Relying solely on European AI solutions may not meet all of the European public sector's specific needs. Thus, a risk-based approach is necessary to balance sovereignty, ESG, and modernisation while benefiting from the new opportunities that AI applications offer.

Conclusion

Europe faces significant challenges and opportunities as it navigates the evolving landscape of AI. A savings and investments union and robust investment frameworks are crucial to foster innovation and retain talent within its borders. By creating an attractive environment for AI startups and scale-ups, Europe can position itself as a global leader in AI technology, balancing the imperatives of sovereignty, ESG considerations, and technological advancement.

Furthermore, as AI usage continues to accelerate, it is imperative for Europe to rethink its crisis management strategies and ensure the resilience of its public and private sectors. The insights gained from high-level debates, such as those organised by the ESM, offer valuable input for policies and practices that align with Europe's goals of social responsibility and financial stability. By addressing these priorities strategically, Europe can harness the transformative power of AI while mitigating the associated risks, paving the way for a sustainable and innovative future.

Competitiveness in the global economy will increasingly depend on the application of frontier technologies such as AI. With the safeguards of the EU AI Act now in place, it is crucial for Europe to embrace AI to boost its productivity and fuel higher growth.

Acknowledgements

The ESM thanks the speakers and the audience of the conference.

The authors of this blog would like to thank the ESM Managing Director Pierre Gramegna, and the following colleagues for their input to the conference and to this blog: Kalin Anev Janse, Rolf Strauch, João Gião, Nicola Giammarioli, David Blazquez Vila, George Matlock, and Raquel Calero.

Footnotes

[2] Several regulations already address risks to the financial sector and can be applied to the use of AI, e.g. the Digital Operational Resilience Act, the Markets in Financial Instruments Regulation, and the Market Abuse Regulation.
[3] “Europe’s concentration of dedicated AI practitioners relative to its overall talent pool is 30% higher than in the US. This workforce is also highly educated, with 70% holding a master’s or PhD.” Source: State of European Tech 2023, by Atomico.
[4] The European Union and the United Kingdom were granted a cumulative 1,170 AI patents over the 2010–2022 period, out of a global total of 62,260; see the AI Index Report 2024 – Artificial Intelligence Index.
[5] As of 30 June 2024, the US counted 142 AI unicorns (i.e. scale-ups valued at over USD 1 billion), whereas Europe counted only 31. Source: State of AI Q2’24 Report, CB Insights Research.
[6] See data on VC investments in AI by country provided by the OECD AI Policy Observatory: live data from OECD.AI.

About the ESM blog: The blog is a forum for the views of the European Stability Mechanism (ESM) staff and officials on economic, financial and policy issues of the day. The views expressed are those of the author(s) and do not necessarily represent the views of the ESM and its Board of Governors, Board of Directors or the Management Board.
