Rise of AI: "For 70 years we've been working towards this – now AI is finally here"

For the eighth time, Rise of AI took place in Berlin, tackling Quantum Computing, Sustainability, the EU AI Act and whether Germany can be a leader in AI.


Prof. Dr. Jürgen Schmidhuber (center) at the Rise of AI Conference 2023

(Image: Thomas Tiefseetaucher)

Reading time: 20 min.
By Silke Hahn

(This article is also available in German.)

The Rise of AI Conference took place shortly before the vote on the amendments to the drafted European AI Act: 230 guests on site in Berlin and around 1300 remote participants came together at the hybrid event to exchange views on the state of AI research in Germany and on the challenges and opportunities AI presents for the local economy, society, technology, the environment, art and politics. This time Heise joined remotely and asked some of the participants how they assess the role of AI in Germany and what moves them in the current AI discussion. Some of the answers are included in this report as a snapshot. Those who missed the event can now find the recordings of the talks on YouTube.

Prof. Dr. Jürgen Schmidhuber delivering the Closing Keynote at the Rise of AI Conference 2023

(Image: Thomas Tiefseetaucher)

The conference once again offered a compact insight into the German AI landscape and featured an impressive line-up of speakers. Jürgen Schmidhuber, one of the founders of modern AI research, was on hand to present the latest AI developments. Feiyu Xu, head of AI research at SAP, showed possible applications in industry, and the entrepreneur and computer linguist Tina Klüwer (expert in the Bundestag's enquiry commission on AI) discussed the development of AI ecosystems. Hans Uszkoreit from the DFKI gave an introduction to foundation models, Sabina Jeschke, physicist and former chief technology officer of Deutsche Bahn, introduced the audience to quantum computing, and Kai Zenner (Digital Policy Advisor to the EU Parliament) from Brussels discussed regulation, to name a few highlights.

Other talks revolved around AI security, AI in a military context, psychological risk and fear management, cybersecurity, sustainability through and by AI, responsible AI, trustworthiness, technical sovereignty, robotics and embodiment research, as well as the influence of AI on creativity and future human-machine relationships, addressed for example by the technology ethicist Joanna Bryson.

Zenner took part in a panel discussion focusing on innovation and the AI Act: "Can Germany be a Technology Leader in AI?" Zenner heads the office of MEP Axel Voss, which is heavily involved in the amendments to the AI Act. On the draft AI Regulation, he took a clear position: "The Commission's starting point is utterly wrong. To create horizontal frameworks and apply them to all sectors and use cases, that simply cannot work."

Panel Discussion: "Can Germany be a Technology Leader in AI?"

(Image: Screenshot)

Thus, he said, AI has become a buzzword that indiscriminately lumps together expert systems that have been around for 70 years, modern foundation models and deep learning systems. It is unclear how European companies are supposed to comply with the AI Act – start-ups are left alone with the implementation burden and overwhelmed, and the regulator did not even take founders and small companies into account in the draft, which classifies generative AI in the highest risk category. According to Zenner, he and his boss Axel Voss are fighting in Brussels for a balance between risk focus and openness to technology (the day after the Rise of AI, amendments to the draft were adopted in Brussels). Currently, fear and a downright anti-technology mood prevail in the EU Parliament, with plenty of emotion and ideology involved. Zenner said that the current discussion lacked a view of how AI could improve our lives.

Jörg Bienert, President of the German AI Association, took a similar line in the discussion. Reports of progress, some of it groundbreaking, are piling up every day. The pace sometimes makes him dizzy, and it is not easy to keep up with everything that is new. His main concern is technological sovereignty. Germany has good prerequisites for a leading position in AI development: excellent scientists (something that became clear several times during the conference), a lot of data, and successful start-ups and projects. As examples, he mentioned Stable Diffusion and Aleph Alpha, whose co-founder was on the panel. To keep up, however, Germany needs to think bigger about AI. It needs investments in hardware and infrastructure for training large models and a cooperative network in which all participants work together.

Like Zenner, he also stressed the need to focus more on the opportunities in areas such as climate protection, the shortage of skilled workers and healthcare. Possible risks of AI could be limited in a thoughtful and flexible way, for example through an obligation to label AI-generated content. Regarding the AI regulation, Bienert expressed scepticism: based on the current drafts, this type of regulation would "massively hinder" the development and operation of AI applications. The original approach had made sense because it divided specific AI applications into risk classes. The introduction of the blanket term General Purpose AI (GPAI) under the French Council Presidency makes him unhappy.

Panel Discussion with Jörg Bienert, Jonas Andrulis, Kai Zenner, facilitated by Amira Gutmann-Trieb

(Image: Thomas Tiefseetaucher)

It is important to realize what this means: the term is accompanied by comprehensive compliance, transparency and risk management requirements that make no distinction between areas of use. "This would be equivalent to imposing the same quality criteria on a screw manufacturer for the potential use of its screws in all areas, regardless of whether they are intended for IKEA shelves or for an aircraft wing," criticizes Bienert. Start-ups and open-source initiatives would be massively disadvantaged if the AI Act were to arrive in such a sweeping form. "This would be the end of the AI ecosystem in Europe as we currently know it," he suggested.

Jonas Andrulis from Heidelberg represented the entrepreneurial point of view on the panel. The co-founder and CEO of the AI research company Aleph Alpha criticized the focus on compliance as a hindrance to new ideas. The biggest competitors for start-ups are corporations such as Microsoft, which aim to monopolize AI technology with their capital power. Smaller companies, on the other hand, can only hold their own through partnerships and a strong ecosystem; the strategy must be collaboration. Value creation in Europe is important, and Andrulis emphasized the strength of some European companies that his team has been able to win over as partners (editor's note: possibly an allusion to the future cooperation with SAP that has been rumoured in the media). The question is whether Europe is now moving fast enough. According to Andrulis, there is consensus that regulation is important, but in the current situation it could also be "dangerous". Europe is "not in pole position for AI", and additional obstacles could eventually lead to economic dependencies.


According to him, Germany is currently "ahead of the game" in Europe when it comes to large language models, but now needs a secure ecosystem. All too many AI experts and managers are currently heavily involved in regulatory issues, which leaves less time for creative work and for "setting up their own companies and teams for the new era". Moralizing and stoking fear do not seem helpful to him in finding a common path. It is true that the same happened with every great innovation, such as electricity, the newspaper or the car. Nevertheless, it makes open and holistic cooperation more difficult. Like Bienert, he emphasizes the high speed of change, which requires "a strong construct for ideas and interests to come together".

Prof. Dr. Hans Uszkoreit, German Research Center for Artificial Intelligence – Presentation "How Universal are Foundation Models?"

(Image: Mirko Ross)

Other contributions turned to meta-topics. Foundation models, such as those currently behind ChatGPT, Bing, Bard and other generative AI systems, are "probability machines, not fact machines", Hans Uszkoreit from the German Research Centre for Artificial Intelligence (DFKI) specified in his keynote ("How Universal are Foundation Models?"). According to Uszkoreit, it is not the algorithms that are new but the paradigm of "machine teaching". The training data is central because the essential decisions are already made in pre-training.
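
To make the "probability machine" framing concrete, here is a minimal Python sketch – an illustration added for this report, not part of Uszkoreit's talk – assuming the Hugging Face transformers library and the public GPT-2 checkpoint as a stand-in for any foundation model. The model does not return a verified fact but a probability distribution over possible next tokens:

```python
# Minimal sketch (illustration only): a foundation model yields next-token
# probabilities, not facts. Assumes transformers and the public GPT-2 model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of Germany is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (batch, sequence_length, vocab_size)

# Softmax over the vocabulary gives the probability of each possible next token.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top_probs, top_ids = torch.topk(next_token_probs, k=5)
for p, i in zip(top_probs, top_ids):
    print(f"{tokenizer.decode(int(i))!r}: {p.item():.3f}")
```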

Uszkoreit visualized Germany's and Europe's contribution to international foundation models. The green dots (European models such as BLOOM) and the single light green dot (Germany, the Aleph Alpha model family) were sparsely scattered among numerous models from the USA and China. Looking at the authors of relevant scientific publications, however, Europeans are strongly represented. It is not too late, but the industry needs support – there is no equivalent to Google in Europe. Uszkoreit supports, among other things, the LEAM initiative for Large European AI Models and public funding of supercomputing facilities. His talk culminated in a call to action: "Act now!"

Small Language Models without Supercomputers: Leif-Nissen Lundbæk

Dr Leif-Nissen Lundbæk

(Image: Xayn)

Small language models are an alternative to large language models. At the conference, Leif-Nissen Lundbæk, a computer scientist and mathematician with a doctorate, presented what he described as a particularly energy-saving approach: highly compressed small language models that are capable of semantic search and contextual understanding at a human-like level (presentation: "Responsible AI on the Road to Carbon Neutrality"). Together with Andreas Grün, the technical director for digital media at ZDF, he had conducted a validation experiment in the broadcaster's media library. The broadcaster wants to improve its media library by personalizing video recommendations in a privacy-friendly and energy-efficient way. According to Lundbæk and Grün, the small language model Xaynia reduced energy consumption to two per cent of the previous level. In their own tests, the model turned out to be significantly more energy-efficient than other transformer models used for search (arXiv paper: "Extreme compression of sentence-transformer ranker models").
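
As an illustration of how semantic search with a compact model works in principle – a hedged sketch, not Xaynia itself, assuming the open sentence-transformers library and the small all-MiniLM-L6-v2 checkpoint plus hypothetical catalogue entries – a media-library query can be ranked against content descriptions via embedding similarity:

```python
# Hedged sketch of semantic search with a compact sentence-transformer model
# (roughly 22M parameters). Xaynia itself is not publicly available here.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

# Hypothetical media-library entries and a user query.
catalogue = [
    "Documentary on the history of the Berlin Wall",
    "Cooking show: vegetarian recipes for beginners",
    "Science magazine on quantum computers and AI",
]
query = "programmes about modern physics and computing"

doc_embeddings = model.encode(catalogue, convert_to_tensor=True)
query_embedding = model.encode(query, convert_to_tensor=True)

# Cosine similarity ranks catalogue entries by semantic relevance to the query.
scores = util.cos_sim(query_embedding, doc_embeddings)[0]
for text, score in sorted(zip(catalogue, scores.tolist()), key=lambda pair: -pair[1]):
    print(f"{score:.2f}  {text}")
```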

Problem solvers think ahead

It was Lundbæk's first time at the Rise of AI, and he enjoyed the atmosphere very much: "A lot of highly qualified people who want to solve problems practically and are already thinking three steps ahead into the future. That's precisely what we need right now – especially here in Germany!" Like other conference participants, he emphasized the potential in Europe. Basically, there is no lack of clever minds or ideas. "But at the same time, we have a long way to go because we need more support from politics." Germany, in particular, should focus on AI development. According to him, a virtue could be made of the necessities faced in Europe (such as stricter regulations and less access to Big Data).

He was impressed by the Rise of AI because it is the first AI conference he knows of with a strong focus on AI and sustainability. Lundbæk was positively struck by the numerous presentations and discussion opportunities on the topic, and he is pleased that it seems to have "finally arrived in the mainstream". Like other panellists, Lundbæk stresses that it is not helpful when the German and European public discusses AI only in a lamenting tone – Europe is too slow, overregulated and hostile to innovation, the complaint goes.

Small Language Models make a virtue of necessity

Instead, it is possible to work constructively and cleverly with the realities. Of course, there are limits in Europe because the regulations mean that less data is freely available than large US companies have at their fingertips. Large amounts of data are fundamental for the creation of large language models. Lundbæk, on the other hand, specializes in small language models, which he says are energy and cost-efficient. The applications he and his team have developed provide personalized semantic search and recommendations for knowledge bases, media, e-commerce and travel platforms, for example.

Dr Leif-Nissen Lundbæk
is co-founder and CEO of the AI company Xayn. Lundbæk studied economics, mathematics and software engineering in Berlin, Heidelberg and Oxford. He received his PhD from Imperial College London, and Forbes magazine included him in its "30 under 30" list.

The lecture by physicist and quantum computing expert Sabina Jeschke was promising: According to the former chief technology officer of Deutsche Bahn, quantum computing is on the verge of general availability for industry. In four chapters she dispelled myths surrounding quantum computing, described its impact, application scenarios and the role of quantum computing for sustainable, energy-saving high-performance computing.

Prof. Dr. Sabina Jeschke, Presentation about Quantum Computing

(Image: Screenshot)

The audience was astonished to learn that low-temperature systems in industrial dimensions could be expected as early as 2025 and that quantum computing at room temperature should be technically possible in 2028. Companies could prepare for this with quantum-inspired algorithms, Jeschke advised, as the technology would be available everywhere in about two years. Addressing objections, she conceded that it will still be expensive at first – like any newly introduced technology – but prices will become affordable in the foreseeable future. The energy efficiency and acceleration of computing processes will be enormous and will find applications everywhere; as an example, she mentioned real-time calculations for Deutsche Bahn timetables. According to Jeschke, cooled quantum computing systems should require one tenth of the energy needed for computing processes today, and systems at room temperature only one per cent. Jeschke quoted from Nature and other scientific publications. More on this in the lecture: "The Future Unlocked: AI and Quantum Computing".
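
What a "quantum-inspired" preparation might look like in practice is sketched below – a generic illustration, not an example from Jeschke's talk: many scheduling and timetable problems can be written as a QUBO (quadratic unconstrained binary optimization), the native input format of quantum annealers, and solved classically today, for instance with simulated annealing, so that the same formulation can later be handed to quantum hardware:

```python
# Generic quantum-inspired sketch: a toy problem expressed as a QUBO and
# minimized with classical simulated annealing (illustration only).
import numpy as np

rng = np.random.default_rng(0)
n = 12                                  # number of binary decision variables
Q = rng.normal(size=(n, n))             # toy QUBO matrix (stand-in for real costs)
Q = (Q + Q.T) / 2                       # symmetrize

def energy(x):
    return float(x @ Q @ x)             # objective to minimize

x = rng.integers(0, 2, size=n)          # random initial assignment
best, best_energy = x.copy(), energy(x)

steps = 5000
for step in range(steps):
    temperature = max(1.0 * (1 - step / steps), 1e-3)   # cooling schedule
    candidate = x.copy()
    candidate[rng.integers(n)] ^= 1     # flip one bit
    delta = energy(candidate) - energy(x)
    if delta < 0 or rng.random() < np.exp(-delta / temperature):
        x = candidate
        if energy(x) < best_energy:
            best, best_energy = x.copy(), energy(x)

print("best assignment:", best, "objective:", round(best_energy, 3))
```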

According to Jeschke, all doors are still open to Germany in the relatively young technology of quantum computing – something that cannot be said unreservedly of AI development in many areas: "The USA and China are far ahead of us in this respect," she judged frankly. ChatGPT and its competitors are still at the beginning, but they will radically change the world, she says. It is not just about the automation of certain processes. Rather, generative AI systems will develop into intelligent individual assistants that provide personal support in learning processes and training – independently of one's own social position. In this respect, AI bots stand in a line of development with the printing press, the internet and Google: they consistently continue the "democratization of knowledge".

Jeschke went further: it is strange that we indignantly cry "censorship" when China blocks Wikipedia, whereas Italy's blocking of ChatGPT was no better. "Where is the social outcry against this treatment of guaranteed fundamental rights in Europe?" asked Jeschke. At present, there are obviously many myths and half-truths in circulation. The myth that the "Quantum Age" is still a long way off persists. "That is wrong – and falls into the category of 'excuses for doing nothing'," Jeschke explained. She emphasized the core thesis of her talk: "For such a disruptive technology, there is not a long time for extensive preparation!"

Panel Discussion: "How to Create and Grow AI Ecosystems" with Dr Tina Klüwer (Director of K.I.E.Z. and an expert on the Enquiry Commission Artificial Intelligence), Thomas Neubert (Transatlantic AI Exchange), Alexandra Beckstein (QAI Ventures), host: Dr Johannes Otterbach

(Image: Thomas Tiefseetaucher)

Exclusively for the audience in Berlin, there were thematic tables at which developers and experts from Fraunhofer and the Bundeswehr Cyber Innovation Hub discussed socially relevant issues with those present, such as the influence of AI systems on the democratic order, disinformation and deepfakes, AI in military use (defence), the AI Act, and power shifts through generative programming systems trained on open-source code. Two workshops explored topics in greater depth: a lawyer advised on compliance with the upcoming AI regulation, and an IBM employee provided information on technical progress towards emissions neutrality.

AI expert Tina Klüwer summed up a fundamental disparity in public perception: "I'm often asked about the risks of AI right now, but almost no one asks about the potentials of using AI." Like Jörg Bienert and other conference participants, she calls for more discussion of the opportunities of AI in Europe and for an awareness that we already use AI applications in everyday life, often without any risks arising. In addition to this effort to bring objectivity to the discussion at Rise of AI, one talk was specifically dedicated to coping with the fear that controversial AI scenarios trigger: Katharina von Knop, PhD, philosopher, spoke about dealing with fears ("Reduce Fear of AI with Neuropsychology").

Beyond foundation models in international comparison and the technical possibilities of quantum computing, the discussions covered AI security versus innovation, state security and the safeguarding of critical infrastructure through (or despite) AI, cybersecurity (lecture by Mirko Ross: "Hacking AI"), responsible AI and the question of whether AI should be trusted at all (lecture: "No-one should Trust AI"). On the technical side, the audience learned more about energy-efficient computing methods (talks including "Artificial Intelligence and Energy" and "Quantifying Sustainability of an AI Project"). An artist in attendance offered hope, and Jürgen Schmidhuber closed the circle with his closing keynote.

The conference was divided into meta-topics on the main stage and expert contributions to applied AI on an Applied AI Stage. There, the topics included bias, the training material for models, sustainability, industrial applications, robotics and embodiment research, the hacking of AI models, AI in intelligence services and the military, and voice-based sentiment analysis. Organizer Fabian Westerheide opened and closed the conference with an overview of the AI landscape of the past twelve months, in which the latest generation of AI achieved technical breakthroughs and, since ChatGPT, large language models have received widespread attention.

"Hacking AI": Short interview with Mirko Ross (aśvin)

Mirko Ross at the Rise of AI

(Image: Thomas Tiefseetaucher)

Heise: Mirko Ross, what did you present, and what was your impression of the conference?

Mirko Ross: "My topic was Hacking AI. It is about the question of how AI systems can be manipulated by attackers, for example by infiltrating "poisoned data" (data poisoning) or installing backdoors in models. With the increasing acceptance and use of AI systems, we will also see an increasing number of attacks. The AI industry is in a frenzy right now, and lack of security will give a nasty hangover in due time. The response to this topic is good: robust and trustworthy AI has arrived as a topic among all players"

AI Act: Is the regulatory wall of protection crushing young companies?

Heise: Where do we stand in AI development in Germany, what do we need most now?

Ross: "The speed of development has increased rapidly, in Germany and Europe we have a large ecosystem of start-ups and industrial AI users. However, the euphoria is clouded by regulatory uncertainty. The draft of the European AI Act is casting its shadow on start-ups and young companies in the AI sector in particular and is clouding the mood. There are estimates that almost half of these young AI companies could be classified as high risk with their AI applications. Europe's regulatory wall of protection through the AI Act then threatens to crush these companies with regulatory compliance burdens. This is why the AI Act is one of the dominant themes at the Rise of AI. What we need in the AI industry is measured regulation that allows the tender economic seedling of the European AI industry to flourish and thrive."

Small is beautiful: AI and sustainability

Heise: What makes you most thoughtful at the moment?

Ross: "Sustainability in AI is a serious issue. The energy consumption by AI systems is huge and will increase hugely. However, we don't need big models with lots of computing power for all areas of AI. This way is literally a harmful waste of resources. Special small AI models for niche applications can be trained and operated in a much more energy-efficient and resource-saving way than the 'Large-Monster Models' (LLM)."

Heise: Looking at the ongoing AI discussions, the media and the public, as well as experts, are vacillating between doomsday and elation. Does that get on your nerves?

Ross: As Stefanie Schramm, Head of Community at Rise of AI, said so well: "What bothers me most is that in 2023 we still have to discuss the meaningfulness of AI."

Mirko Ross
born in 1972, cybersecurity specialist and start-up entrepreneur, ensures with asvin.io that hidden risks become visible. AI is a demanding technology, but it can be hacked just like any other. That is why special attention must be paid to it. As an expert and initiator, Ross builds on the network he has developed over decades in industry, civil society and politics. His goal is to turn cyber security from a problem into an opportunity. It is important to him that "Cybersecurity Made in Europe" becomes a seal of quality for manufacturers and consumers worldwide. Because, according to Ross, trust must be the basis of all action. Insights into Ross' way of thinking and working can be found on his LinkedIn profile.

The Rise of AI conference originated from a private discussion group of ten people who, since 2014, have been thinking intensively about artificial intelligence issues such as singularity, i.e. the point when artificial neural networks will have human-like capabilities and be partially superior to humans. Given the accelerating progress in AI research and mass phenomena such as ChatGPT, such questions are no longer seen as remote. Companies, politicians and the public are concerned about AI developments, for example regarding imminent disruptions and opportunities in the labour market through the use of generative AI systems, which on the one hand increase productivity and on the other automate intellectual work and could thus compete with existing jobs.

Rise of AI 2022 – View into the audience in front of the Meta Stage

(Image: Thomas Tiefseetaucher)

The EU, for example, announced this week that it would reduce its pool of translators and make greater use of machine translation. Initial studies examine which occupational groups are most exposed, i.e. potentially threatened, by the widespread use of generative AI: in addition to translators, these include mathematicians, programmers and journalists, as well as all professions that collect, condense and summarize knowledge.

On the other hand, it is predicted that, similar to earlier technological breakthroughs, new job profiles are likely to emerge and existing ones will not disappear completely, though work processes will change and accelerate. The speakers at this year's Rise of AI took a well-founded stand on such topics in their presentations.

The organizers: Veronika and Fabian Westerheide

(Image: Thomas Tiefseetaucher)

Since the first edition in 2015, the Rise of AI has grown from an insider tip into an "AI class reunion" and, according to conference guests, is now considered an annual fixture for the German AI industry. This year marked the eighth edition of the event. In parallel, the "Month of M-AI" (May) is taking place across Germany with around 80 events by hundreds of AI players – Veronika Westerheide, who organizes the Rise of AI with her husband Fabian, already came up with the idea in 2019.

Both are constantly discovering new AI networks, companies and programmes, to which they give more visibility and networking opportunities through their events. Their own ecosystem has also "grown enormously" through the German AI Month framework programme, as Fabian Westerheide told heise Developer. According to participants, the Westerheides put "a lot of heart and soul" into the event, which is why prominent regulars such as digital politician Mario Brandenburg (of the liberal FDP), who was unable to attend this time, did not simply stay away without a word but apologized personally with an entertaining video message.

"Insights beyond the Hype": Conference Review

Marc A. Linstädter

Marc A. Linstädter, Head of Marketing and Innovation at GEFA Bank, has been attending the Rise of AI regularly for several years. He considers AI to be the key technology of our century - and the conference offers him the opportunity to "meet leading minds in the AI ecosystem, discuss with them and gain insights beyond the superficial hype".

Three themes shaped this year's edition of Rise of AI:

  • Debate on the AI Act
  • AI and Sustainability
  • European AI ecosystem

Not too late despite prophecies of doom

In the discussion about the impact of the AI Act, Linstädter said, the concern about overregulation was palpable at the conference – as was, at the other end of the scale, the question of competitive advantages through trustworthy systems "Made in Europe" – so that a balanced picture emerged. The high energy demand of AI systems and ways to reduce energy consumption were also discussed; in particular, the combination of AI and quantum computers could in future help solve the "optimization problems on the way to a sustainable society". Linstädter perceived palpable optimism and an awareness of the urgency of a broad-based European AI ecosystem: "At the latest since the ChatGPT caesura, AI has arrived in the mainstream", and the momentum must now be used. Many of the speakers were optimistic that "despite all the prophecies of doom, it is not too late".

AI – The Good News: out of the expert niche

Overall, Linstädter registered relief that, thanks to ChatGPT & Co., the topic of AI is finally getting the attention it deserves after many years in the expert niche - "even if the speeds of realization and concrete action are often different", as he noted with a critical eye regarding politics. Linstädter's basic impression of the Rise of AI was that Germany and Europe are better off in AI development than is often portrayed - but of course not leading. Dr. Björn Bringmann from the consulting firm Deloitte showed this particularly clearly (presentation: "AI in Europe - The Good News"). What Germany needs, in Linstädter's eyes, is a broad understanding that investments in AI must increase significantly in order not to miss the boat.

Impulse: Finding the way between hype and doomsday mood

"The public debate on the AI topic is currently far too strongly characterized by extreme positions, i.e. by hypes or end-time scenarios. For many people, this leads to either inflated expectations, sceptical rejection or fear. It is true that AI will massively change our lives and our society, and this is already happening. However, we must actively shape this process of change. It is neither helpful to play down the potential of AI (keyword: "stochastic parrot") nor to fuel fears of a takeover of our world by machines or to attempt to slow down the further development of this technology.

On the contrary, what we need is a broad social and political debate about the integration of this technology and how to deal with its effects. The topic is often and also understandably overshadowed by other current issues in world affairs, but it deserves attention and constant focus – not least because this technology will potentially help us solve the current biggest threat to our civilization: the climate crisis." – Marc A. Linstädter

The next Rise of AI is scheduled to take place on May 15, 2024: again in Berlin in front of around 200 on-site guests, enhanced by unlimited free remote participation in all lectures. The main question for the ninth meeting is how the landscape of generative AI will change from here on – in particular what impact the EU AI Act, which will have already come into force by then, will have in practice on the companies that have to implement it. This closes the loop, as the topic of AI regulation was widely discussed at the Rise of AI 2023.

The Main Topics of the Rise of AI 2023 are available on YouTube – around 16 hours of presentations and panel discussions invite in-depth engagement with current questions on AI. Those who were present at the event can revisit the content at their leisure, and those who were perhaps only able to follow sporadically in the midst of a hectic professional life now have the opportunity to catch up on the thought-provoking impulses at their own pace. The recordings of the lectures can be viewed individually or as a playlist. A selection of photos and further information can be found on the website.

(sih)