AI made in Europe

A smiling woman in a dark blue blazer and coral-coloured shirt stands in a bright conference room; in the background, three young adults work together at a table with laptops.

Europe is facing challenges in the competitive world of artificial intelligence. Professor Gitta Kutyniok, a mathematician and computer scientist and Director of the Konrad Zuse School of Excellence in Reliable AI (relAI), explains why Europe should focus on specialised, efficient and transparent AI – and what role open-source models should play in this context.

Professor Kutyniok, the recent launch of China’s DeepSeek language model came as a wake-up call for the AI sector. It proves that powerful AI models can also be trained with few resources. Which lessons should European AI research learn from this?

Europe should concentrate on efficient, specialised and sustainable AI models. A central focus should be on high-quality training data because this provides the foundation for powerful and transparent AI. Furthermore, open-source approaches will be crucial to ensure technological independence and boost collaboration within Europe. 

Shouldn’t we now be trying to keep pace with the US and China, the top players?

For the moment at least, Europe is unable to keep pace with the biggest models from the US and China. However, in the wake of the DeepSeek launch, we should in any case be thinking quite fundamentally about whether continuing to invest massively in broad-based applications is a good strategy, or whether it perhaps makes more sense to focus on niche areas. This is the path I envisage for Europe. We should place the emphasis on smart AI research: high-quality training data, efficient models, specialised software and hardware, and open-source approaches. This will allow us to make better use of our resources and increase our technological independence. And we should build further on another of our advantages: our pool of excellent researchers. The Konrad Zuse Schools of Excellence in Artificial Intelligence are one very good example. Their declared objective is to bring the world’s best AI talents to Germany – and this is already working wonderfully.

The US and China hardly regulate AI at all so as to drive innovations forward more quickly. Has Europe created obstacles for itself with the AI Act?

It is certainly valid to claim that we were perhaps somewhat too hasty in restricting a development before anyone could predict where it was really headed. Regulation should definitely be designed so as not to impede innovation. At the same time, however, this offers huge opportunities. If it turns out that artificial intelligence can only be deployed really efficiently if it can also be trusted, then this could be a valuable asset for us. And this is precisely where our strength lies. Maybe we can even expand our “made in Germany” label, which has been so successful for such a long time, to read “AI made in Europe”.

The development of AI in the US and China reflects geopolitical tensions. How should Europe respond?

Europe should come to the fore as a strong third actor and not allow itself to be left behind. What is more, Europe could assume an important role as a mediator between the US, China and other regions when it comes to fostering global dialogue and balanced AI development. Technological sovereignty is a key prerequisite for this, however. As regulation will also become increasingly significant worldwide in the future, Europe could serve as a role model here.

Open-source models offer many advantages, but surely there are risks, too?

Yes, one risk is misuse because open-source models are accessible to anyone. Another challenge is to adapt such models to the EU’s AI Act, as complying with regulations is complex and requires clearly defined quality and security control mechanisms. Steps must be taken to ensure that open-source models meet European regulations and that European companies do not find themselves at a competitive disadvantage. One possible solution might be to certify open-source AI with a view to creating transparency while at the same time boosting innovative capacity.

Italy has blocked DeepSeek due to data security concerns. Which challenges are emerging for European models?

One problem of non-European AI models is the lack of transparency regarding their training data and how it is used. It is often unclear whether European user data is being transferred abroad, which would violate the GDPR. One solution would be an international data privacy agreement for AI. In addition, Europe needs to develop its own models to avoid dependence on third-party providers. 

DeepSeek can run locally on laptops and therefore offers more data protection. Is that a gamechanger for universities and the education sector?

Absolutely. AI models that run locally allow personalised teaching and lower costs, and can be better adapted to European languages. This is an enormous opportunity for universities and research institutions in particular because open-source models offer transparency and flexibility. One of the reasons, for example, why the European firm Mistral is so successful is that it provides one of the best models currently available to be run locally on users’ computers – and many of these users are customers from the US.

Interview: Juliana Dümler, Klaus Lüber (8 April 2025)


 


DAAD - Deutscher Akademischer Austauschdienst - German Academic Exchange Service