A commentary by Wolfgang Kurz, Managing Director of indevis
Artificial intelligence (AI) is undoubtedly a groundbreaking invention, and one that frightens many people. But fear is rarely a good advisor. It is better to confront the risks early on – and they are different from what may seem obvious at first glance.
Fear of being controlled by something that cannot really be grasped and that surpasses human intelligence is all too understandable. Will a superintelligence soon seize world domination and decide over life and death? At the current stage of development, that is rather unlikely. Far more realistic are quite different dangers, which are still barely considered in the discussion about AI: the malicious manipulation of AI algorithms and of the data AI learns from.
One thing is certain: AI will be an essential aid in tackling the really big problems facing humanity. Experts assume that challenges such as feeding the world's population or decarbonisation can only be solved with the help of AI. But the technology can also be manipulated – through the data it is fed and the way it is trained. It is precisely here that cybercriminals find numerous starting points for influencing the decisions of an AI, because AI makes purely algorithmic judgements without ethically reflecting on their consequences.
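To make the first of these attack vectors concrete, here is a minimal sketch of training-data poisoning. It uses toy synthetic data and a deliberately simple model, and is not drawn from any real incident: an attacker who can tamper with the training set flips labels in one class, and the model trained on the poisoned data silently changes its decisions.

```python
# Sketch of label-flipping data poisoning against a toy linear classifier.
# Toy data and a deliberately simple model, for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# Two clearly separable classes of synthetic 2-D points.
X = np.vstack([rng.normal(-2.0, 1.0, (100, 2)), rng.normal(2.0, 1.0, (100, 2))])
y = np.array([0] * 100 + [1] * 100)

def train_logreg(X, y, lr=0.1, epochs=500):
    """Plain logistic regression fitted by gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * X.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return np.mean(((X @ w + b) > 0).astype(int) == y)

# A model trained on clean data separates the classes almost perfectly.
w, b = train_logreg(X, y)
print(f"clean model accuracy:    {accuracy(w, b, X, y):.2f}")

# Poisoning: the attacker relabels 60 of the 100 class-1 examples as class 0.
y_poisoned = y.copy()
y_poisoned[rng.choice(np.arange(100, 200), size=60, replace=False)] = 0

# The poisoned model now misclassifies genuine class-1 inputs.
w_p, b_p = train_logreg(X, y_poisoned)
print(f"poisoned model accuracy: {accuracy(w_p, b_p, X, y):.2f}")
```

The point of the sketch is that nothing about the training procedure itself is attacked: the algorithm runs exactly as intended, yet its judgements are corrupted because the data it learned from was.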
Step 1: Develop awareness
A manipulated AI algorithm steers a dangerous goods transporter into a school; a deceived air traffic control AI causes planes to crash. Criminal ideas and catastrophic scenarios – but entirely realistic ones, against which, as of today, we are largely unprotected. A major danger lies in ignoring or trivialising this real threat: the defencelessness of AI. It is only a matter of time before AI actively shapes our lives, and until then we must confront its weaknesses – above all its manipulability – and develop ways to protect ourselves.
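Scenarios like these rest on the second attack vector: deceiving an already-deployed AI through its inputs. The following toy sketch (hypothetical weights and numbers, purely for illustration) shows the core idea behind so-called evasion or adversarial-example attacks – a small, deliberately crafted perturbation flips a trained model's decision at inference time.

```python
# Sketch of an evasion attack on a trained model at inference time.
# Hypothetical weights and inputs, for illustration only.
import numpy as np

# Assume a trained linear decision function f(x) = w.x + b.
w = np.array([1.5, -0.8])
b = 0.2

def predict(x):
    """Classify x as 1 if the decision score is positive, else 0."""
    return int(w @ x + b > 0)

x = np.array([0.2, 0.9])               # legitimate input, classified as 0
print("original prediction: ", predict(x))

# FGSM-style step: shift x along the sign of the score's gradient (which,
# for a linear model, is simply w). A modest perturbation is enough to
# push the input across the decision boundary.
epsilon = 0.5
x_adv = x + epsilon * np.sign(w)
print("perturbed prediction:", predict(x_adv))   # now classified as 1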
But humans are creatures of habit. Confronted with something new, they struggle to cope and prefer to sit problems out or postpone them to the next calendar week. That is why we are storing spent fuel rods, which will remain radioactive for millions of years, in a disused salt dome for the time being – even though the problem of nuclear waste disposal has been known from the very beginning.
We should ask ourselves: Will commerce or the good of humanity take priority in the future? Can training data be traced? Who developed an AI, and why? Who will continue to develop it? And most importantly: how is it protected, and who protects it?
Step 2: Establish regulation
Weather phenomena, climate change, global nutrition – AI has the potential to make a decisive contribution to solving these challenges. But there is a lack of independent bodies to control it, and the market itself is unsuited to the task: those who strive for the maximum performance of an algorithm will not rein it in at the same time. Sports car designers do not think about throttling their vehicles but about maximum top speed, even if that comes at a cost elsewhere.
And politics? It is overwhelmed by the situation. That is not meant as an accusation; in a democracy, these things can take time. Nevertheless, our politicians should wake up now and intervene in cybersecurity matters, especially with regard to AI – because it is not five minutes to midnight, it is already well past midnight. A supervisory authority, preferably a governmental one, is urgently needed, and only political decisions can set this process in motion. With the so-called AI Act, the EU has already put forward a first draft law for the regulation of artificial intelligence. But it only addresses specific high-risk applications of AI – surveillance by means of facial recognition, for example, which has long been commonplace in China, will not be allowed in Europe. What the AI Act does not take into account, however, are the dangers posed by manipulation.
Conclusion: AI can be manipulated and must be protected
AI will certainly not dominate humanity in the near future, but a threat remains. The focus on superintelligences should not distract us from something much closer at hand: abuse by hackers. This should be a central subject of public debate about artificial intelligence.
If AI is soon to become part of our everyday lives, we will eventually have to trust it – and with it a tool that, through manipulation, can be used to create alternative realities that people may willingly accept. How readily the human psyche serves as an accomplice in this process is demonstrated impressively every day on social media.
People are reluctant to talk about such concrete risks and prefer to fantasise about world domination scenarios. Yet even the systems available today are sufficient for attacks on our free democracy, and it would be naïve to believe that no one is pursuing their misuse.