"Regulation is not the enemy of innovation"

Round table discussion moderated by Dieter Aigner, managing director at Raiffeisen KAG, with the following experts:

  • Günther Schmitt, Fund Manager and Head of Developed Markets Equities at Raiffeisen KAG, Vienna

  • Sandra Wachter, Professor of Technology and Regulation, Oxford University

  • Michael Wiesmüller, Head of the Department for Digital and Key Technologies for Industrial Innovation, Federal Ministry of Climate Action, Environment, Energy, Mobility, Innovation and Technology, Vienna

  • Bernd Zimmermann, Go-To-Market Lead for Modern Work & Surface for Austria, Microsoft, Vienna

Dieter Aigner: After tough negotiations, a new law on artificial intelligence, the AI Act, was finally agreed upon at the EU level last December. Ms Wachter, you were actively involved in this legislation. How satisfied are you with the outcome? Could you explain briefly what the Act basically achieves?

Sandra Wachter: I am really happy that we have this law at all. Even though some of the criticism is justified, the AI Act in its current form is a thousand times better than having no law. It is very, very important that we now have regulation that we can work with. Another clearly positive point is that the law regulates not only predictive artificial intelligence, i.e. the use of machine learning to recognise patterns in past events and make predictions about future ones, but also generative AI, which learns from human language, art, and other complex material in order to accomplish new tasks.

What would you criticise?

Sandra Wachter: My criticism applies mainly to the design. For generative AI, a two-tier approach was defined, and developers of so-called generative AI models have to stick to it. If a model poses systemic risks, the requirements are stricter. Fundamentally speaking, there is no problem with this two-tier approach; the problem is the classification, because the stricter requirements only apply above a certain FLOP threshold. FLOPs are floating-point operations; here the measure is the cumulative amount of computation used to train a model, which reflects how much energy and how many resources went into training. The threshold is so high that it probably captures only the very highest-performance models, perhaps just GPT-4 or Gemini, which means the stricter rules apply only to these systems. From my perspective, this is not a suitable metric for measuring risk, because one is really measuring environmental load. The things that we need to worry about – such as bias, discrimination, misinformation, lack of transparency, and data abuse – cannot be pinned down this way and also occur in models with lower performance levels. However, it is possible that this design can be corrected with a delegated act. Turning to predictive AI, I would have supported a complete ban on certain applications of emotion recognition and face recognition (for example in the criminal justice system) and on predictive policing.
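
To put the classification mechanism in concrete terms: the Act presumes systemic risk once a model's cumulative training compute exceeds 10^25 floating-point operations. The sketch below runs that arithmetic with the common C ≈ 6·N·D heuristic for estimating training compute (N parameters, D training tokens); the model figures are hypothetical, not actual vendor numbers.

```python
# Back-of-the-envelope check against the AI Act's compute threshold.
# The Act presumes "systemic risk" for a general-purpose model whose
# cumulative training compute exceeds 10^25 floating-point operations.
# Training compute is estimated with the common heuristic C ~ 6 * N * D,
# where N is the parameter count and D the number of training tokens.
# All model figures below are hypothetical.

AI_ACT_THRESHOLD_FLOP = 1e25  # cumulative training compute, in FLOPs

def training_compute_flop(n_params: float, n_tokens: float) -> float:
    """Approximate total floating-point operations used in training."""
    return 6.0 * n_params * n_tokens

models = [
    ("mid-sized open model (hypothetical)", 7e9, 2e12),   # 7B params, 2T tokens
    ("frontier model (hypothetical)", 1.0e12, 15e12),     # 1T params, 15T tokens
]

for name, params, tokens in models:
    compute = training_compute_flop(params, tokens)
    side = "above" if compute >= AI_ACT_THRESHOLD_FLOP else "below"
    print(f"{name}: ~{compute:.1e} FLOPs -> {side} the 1e25 threshold")
```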

The EU is ahead of the pack with this AI regulation. How important is that?

Sandra Wachter: Very important. It’s really great that we were able to put results on the table first! Now we are turning to the global level.

You don’t see any disadvantages for Europe as a business location?

Sandra Wachter: If one perceives regulation as an enemy of innovation, then yes, you can see it that way. But if you look at the AI Act, you see the opposite of this. The regulation is there to protect fundamental and human rights, to make AI transparent, to prevent sexism and racism as much as possible, and to bolster cybersecurity.

Dr. Sandra Wachter, Professor of Technology and Regulation, Oxford University

I don’t think we end up in a worse position because of this. And regardless of these aspects, I think it is important to keep in mind where the power is. The power is not where the development takes place, the power is where the purchasing takes place. Europe is the largest market in the world. It’s a good thing that we are making progress here and setting the tone.

Mr Wiesmüller, what do you think? Is regulation the enemy of innovation?

Michael Wiesmüller: I can’t say this often enough: we simply have to stop pitting innovation and regulation against each other. It’s extremely detrimental to the conversation. Innovation and regulation have to be seen in relation to each other. With this in mind, Chapter 5, “Measures in Support of Innovation”, is particularly important for our team at the Ministry: measures that allow regulation to be conducive to innovation. This chapter is an attempt to do away with this dangerous juxtaposition of innovation and regulation. We can create processes in which innovators learn something from the regulators, and vice versa – processes that regulate better and produce innovation that better meets human needs. That was one of the most important chapters for us, and on the whole it came out very well. Of course, one can always do better, and this is not the bible for the next 100 years. Maybe the Act will have to be amended, but as a starting point it is very important and, in my opinion, adequately good.

Does technology need regulation in general?

Michael Wiesmüller: I think what we are seeing with AI regulation is a principle that has always been around, in the sense that not everything that is technically possible is economically a good idea or socially correct. Technological developments must be viewed through a critical lens. The categories that we are discussing here, such as discrimination and impacts on democracy, sound abstract.

Back in the 1950s, we had a wonderful material here in Europe, a material with fantastic properties. It was light and easy to produce. It could be used almost everywhere, and it was: in air filters, toothbrushes, buildings, and automobiles. It was called asbestos. It took us a long time to recognise the toxicological effects this material had on human health. It wasn’t until the 1980s and 1990s that we started to regulate its use.

Now, of course, I don’t want to draw a direct parallel between AI and asbestos, but we have to understand that technologies can have a toxic effect on us, on our children, our society, and our democracies, and that we have the ability to actively shape technologies. I also agree with Ms Wachter that imperfect regulation is much better than no regulation at all.

Mag. Michael Wiesmüller, Head of the Department for Digital and Key Technologies for Industrial Innovation, Federal Ministry of Climate Action, Environment, Energy, Mobility, Innovation and Technology, Vienna

The economic power that Europe has can be harnessed as regulatory power, and in turn we also need this power at the geopolitical level, when it comes to fashioning global governance of AI.

Microsoft is one of the biggest users of artificial intelligence and started working with it well before a broad public discussion of the topic even emerged. It would probably be too much to list all the areas in which AI is deployed, but could you give us a few good examples of how you are harnessing artificial intelligence?

Bernd Zimmermann: Responsibility and trust are our top priorities, and proper framework conditions are necessary for trust to develop. As for examples, there are some fantastic applications in the field of medicine. I am always amazed by the new opportunities that AI opens up for us. For instance, medical practitioners can exchange diagnoses via Teams meetings and access expert knowledge in real time. X-ray images can be fed into models to obtain diagnoses much faster and more easily, and AI can classify tumours as benign or malignant far more quickly as well. Another example is education. Although the Ministry of Education wants to proceed slowly with the use of AI, students are already working with it to some degree. And as a father of two children, one of whom has dyslexia, I know that AI can make great contributions in education. For children who are learning to read and write, AI can analyse weaknesses much faster – for example a child’s pronunciation when reading, mixed-up syllables, and the like. With this quick input, teachers can focus much sooner on these individual shortcomings and use their time in a more targeted manner. AI has the potential to radically change education, at both the basic and advanced levels. And last but not least, artificial intelligence can also be used for programming in the field of IT, where there is currently a huge shortage of skilled workers.

Dipl.-Ing. (FH) Bernd Zimmermann, Go-To-Market Lead for Modern Work & Surface for Austria, Microsoft, Vienna

In the automotive industry, for instance, AI can take on several steps of the process, and the remaining 20–30 per cent can then be finalised by people – provided that there is a suitable legal framework in place and there is trust in the technology.

Is it even possible to protect against abuse, discrimination, and crime?

Sandra Wachter: If you have a system that is based on historical data, then it is technically impossible for it to have no bias. A dog that has been biting the mail carrier for years is not going to stop doing so because it realises on its own that this is not OK. AI is likewise unable to think critically about whether there was perhaps a mistake in the past; it executes what it has been taught. When you are talking about high-risk areas such as the job market, education, lending, and criminal justice, where existential decisions are made about people, it is important to assume that this bias exists and to do something to counter it, for example by adjusting the algorithm so that the discrimination no longer occurs. By not ignoring this aspect, it is possible to turn a mistake into an opportunity and to make much better, fairer, and more transparent decisions than were possible in the past.
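
To illustrate what “assuming the bias and countering it” can look like in practice, here is a minimal sketch that measures one simple fairness metric, the demographic parity difference, on a set of automated decisions. The decisions and the groups “A” and “B” are invented for the example; real audits use richer metrics and real outcome data.

```python
# Minimal bias audit: compare approval rates across two groups and
# report the demographic parity difference. All data here is made up.

def selection_rate(decisions, groups, target):
    """Share of positive decisions (1 = approved) within one group."""
    in_group = [d for d, g in zip(decisions, groups) if g == target]
    return sum(in_group) / len(in_group)

# 1 = loan approved, 0 = rejected; groups "A" and "B" are hypothetical.
decisions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0, 0, 0]
groups    = ["A", "A", "A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]

rate_a = selection_rate(decisions, groups, "A")
rate_b = selection_rate(decisions, groups, "B")
print(f"approval rate A: {rate_a:.2f}, B: {rate_b:.2f}")
print(f"demographic parity difference: {abs(rate_a - rate_b):.2f}")
# A large gap is a signal to intervene, e.g. by reweighting training
# data or adding a fairness constraint to the model.
```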

Bernd Zimmermann: It was a milestone for us that a legal framework for artificial intelligence was found so quickly. There is still work to be done in the field of social media, where we are only now starting to see the effects on society. It’s important to be aware that these kinds of technologies can be rolled out very quickly. With this in mind, it’s also important that companies’ management teams define a clear strategy for how AI should be handled and what responsibility they bear towards society. Furthermore, it is necessary to define where AI can be used sensibly and where it should be consciously avoided.

What role does AI play for investors?

Günther Schmitt: AI has advanced to become a key topic in the investment world. If one looks at prices on the stock markets, it’s clear that the companies that have recently done well are those that are heavily active in AI and investing a lot in it. Right now, the reporting season for US companies is under way, and reports were just released by three of the most important companies: Meta, Amazon, and Apple. All of them reported extremely high growth rates and said that they were investing a lot of money in AI. However, these companies are also struggling with the misuse of data, and from a sustainability point of view that makes investing in them problematic.

Günther Schmitt, Raiffeisen KAG

We maintain standards in this regard, and accordingly we are not interested in investing in some of these companies. That said, we are in contact with them and, together with other European investors, we are trying to get them to resolve these problems.

And aside from the companies you mentioned?

Günther Schmitt: We see lots of opportunities for using AI and are fundamentally very positive about it. Mr Zimmermann already mentioned several examples; medicine is one of them. In the pharmaceuticals industry, AI can make drug development much cheaper and faster, where it used to take many years, even decades. Along with what has already been said, one criticism of AI is its high energy consumption, but I think this can be resolved in the future with the use of renewable energy. As for the question about jobs, I don’t take such a negative view. Some jobs will be lost, but other occupations will emerge in turn. The biggest problem really is the misuse of data, and we need to have a much more intensive discussion with companies on this subject. The more we do that, the harder they will work to resolve this problem.

What are your thoughts on this, Mr Wiesmüller? Will AI make jobs disappear?

Michael Wiesmüller: This is an incredibly exciting question that has interested us for the last ten years; back then, the topic was automation and robotics. I think one has to distinguish between jobs and tasks – there is a big difference. Certain sets of tasks will disappear as a result of AI. However, many occupations consist of a bundle of different tasks: some of these will be taken over by AI, others may change significantly, and new ones may appear. Speed will be a factor. Throughout the 20th century, labour markets were generally able to absorb the waves of automation; while some career groups shrank, new ones emerged again and again. The main argument of some people – let’s call them ‘techno-pessimists’ – is that this development is proceeding so quickly that the labour market will not be able to absorb it. In my view, tasks that can sensibly be automated should be automated. It does not seem to make much sense to me to ban AI systems from certain areas just to preserve jobs; there have to be better reasons. I don’t think AI will be a machine that destroys a lot of jobs. It will disrupt the labour market, restructure it, and require new profiles, but I don’t think it will lead to masses of unemployed people. (See: Brave new work)

Can AI actually replace people? There’s also the question of soft skills, such as empathy, etc.

Bernd Zimmermann: Artificial intelligence is nowhere near being able to replace all of the abilities and skills that real human beings have. AI is neither empathetic nor creative, nor can it build networks – and skills of this kind are going to become increasingly important. However, there are many areas where automation is simply the better solution and where AI will free up capacities for creative and conceptual work or innovation. So I see it in a very positive light. That said, education will remain important in the workplace in general; it’s crucial for better opportunities on the labour market, more demanding jobs, better pay, and much more.

Sandra Wachter: Regulation will be very important with regard to the labour market in particular. Personally, I’d like to see work that I wouldn’t want to do myself simply be automated. But will I get the same salary then? Are my job and my income secure in that case? Because AI is generally used to reduce costs, and companies tend to specialise in cutting jobs, not adding them – we have seen this in the tech sector in particular. IBM, Amazon, Meta, and X (formerly Twitter) have slashed huge numbers of jobs to cut costs. And then there’s another question: will the new jobs created in the future really be good ones, in the sense of being well paid and of good quality? Or will I have to sit around all day looking over the shoulder of an AI system to see whether the algorithm is making a mistake? In this regard, innovation is progressing more quickly than new jobs can develop. Every technology has rendered some jobs obsolete, but right now jobs are being replaced simultaneously in a number of different areas, such as medicine, justice, journalism, and agriculture. That is something we have never seen before. And the jobs that are being created – influencers, prompt engineers, platform workers – will probably have a shorter lifespan than traditional jobs, or at least offer significantly weaker labour protection than traditional employment. We have to think about the worst-case scenario now, so that we end up with jobs that are desirable, secure, and well paid.

For decades now, major investment banks have been trying to implement quant models for actively managed funds, in order to achieve better performance. Have they been successful?

Günther Schmitt: No, experience has shown that this doesn’t work. At the moment, some companies are again making an effort to create AI funds, but so far these have not been really successful either, even though these models and supercomputers can process data in the background a million times faster than we humans can. Clearly, the mechanism of how stock exchanges function has not yet been decoded.

Speed is a big watchword on the stock markets. Information, including misinformation, can trigger massive losses in a matter of seconds...

Günther Schmitt: Yes, it’s absolutely true that there can be huge problems in this regard. We’re already seeing the occasional “flash crash”: cases in which AI triggers stop-loss orders, leading to massive selling, and within seconds billions worth of assets are wiped out. The companies affected sometimes suffer price declines of 10 to 20 per cent. There are also a number of legal questions that still need to be clarified. There are some efforts to address this via regulation, but so far no adequate answers have been found.
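
As a stylised illustration of the cascade mechanism Mr Schmitt describes – one price shock trips resting stop-loss orders, whose forced sales push the price down far enough to trip the next ones – here is a toy simulation. All parameters are invented and not calibrated to any real market.

```python
# Toy simulation of a stop-loss cascade: one shock trips resting
# stop-loss orders, each forced sale pushes the price lower, and that
# in turn can trip the next orders. All numbers are invented.

price = 100.0
impact = 1.0  # price drop per unit of forced selling (stylised)

# resting stop-loss orders: (trigger level, order size), highest first
stops = [(99.0, 1.0), (98.5, 1.0), (97.5, 1.5), (96.0, 2.0)]

price -= 1.2  # initial shock, e.g. a burst of (mis)information
print(f"shock: price falls to {price:.2f}")

triggered = True
while triggered:
    triggered = False
    for level, size in list(stops):
        if price <= level:
            stops.remove((level, size))   # each order fires only once
            price -= impact * size        # forced sale pushes price down
            print(f"stop at {level:.2f} fires, price now {price:.2f}")
            triggered = True
```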

This year features important elections, in particular in the USA. In the future, will AI determine who is going to be the President of the United States of America?

Bernd Zimmermann: I think all of us are aware of this issue and still feel the effects of the last election. I can state confidently that the major technology companies are well prepared to prevent precisely this kind of outcome. Can it be ruled out completely? No. But I think we have come one step closer to being able to do so.

This content is only intended for institutional investors.
