Safer Internet Day: Learning Systems – The Platform for Artificial Intelligence

February 13, 2023

Artificial intelligence (AI) has made great strides in recent years. AI systems are already in use on the internet today, and they are also involved in decision-making in sensitive areas such as medicine and autonomous driving. AI can support us in our everyday lives, but if it is maliciously manipulated, it can cause great damage. Prof. Dr Ahmad-Reza Sadeghi explains the IT security challenges associated with the use of AI and how AI systems and the underlying data can be protected against attacks. He heads the System Security Lab at Darmstadt University of Technology and is a member of the Learning Systems Platform.

Mr Sadeghi, what new challenges do AI systems pose for IT security?

AI-based systems and algorithms are fragile from a security perspective because they are highly data-dependent. They can be manipulated easily and, above all, covertly. The more advanced the systems become, the more advanced the attacks become as well. The biggest risk lies in how we use these systems: if AI systems one day really do automate large parts of our everyday life and make decisions for us, our dependence on them will be much greater, and potential attackers will be able to cause much greater damage. Another challenge is that established IT security mechanisms cannot simply be transferred to AI systems. In addition, security measures should not limit the performance of the models.
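
To make the idea of a covert manipulation more concrete, here is a minimal sketch of one well-known attack technique, the Fast Gradient Sign Method (FGSM), written in Python with PyTorch. The model, the epsilon value and the input range are illustrative assumptions, not anything specific to Prof. Sadeghi's work; the point is only that a tiny, barely visible change to an input can flip a classifier's decision.

```python
import torch
import torch.nn.functional as F

def fgsm_perturb(model, x, label, eps=0.03):
    """FGSM sketch: nudge the input in the direction that most increases the loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), label)   # model is a hypothetical image classifier
    loss.backward()
    # A small, often imperceptible per-pixel change can change the prediction.
    x_adv = (x + eps * x.grad.sign()).clamp(0.0, 1.0)
    return x_adv.detach()
```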

How can we protect AI systems and their data from attacks?

We have to find new ways to secure AI algorithms. In my research, I have also worked extensively on applied cryptography, i.e. computing with encrypted data. Purely cryptographic solutions are not yet scalable, especially for huge AI models with, in some cases, billions of parameters. Algorithmic improvements and hardware-based solutions for AI security are therefore also being researched. Another interesting field is the use of AI for security itself, i.e. algorithms that protect systems against attacks.
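
As a rough illustration of what "computing with data no single party sees in the clear" can mean, here is a toy additive secret-sharing example in plain Python. This is one building block used in secure multi-party computation, not Prof. Sadeghi's specific protocol, and the parameters are purely illustrative.

```python
import secrets

PRIME = 2**61 - 1  # field modulus; toy-sized for illustration

def share(value, n_parties=3):
    """Split a value into n random shares that sum to the value mod PRIME."""
    shares = [secrets.randbelow(PRIME) for _ in range(n_parties - 1)]
    shares.append((value - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    return sum(shares) % PRIME

# Each party only ever sees random-looking shares, yet adding shares locally
# and recombining the results yields the sum of the original secrets.
a_shares, b_shares = share(42), share(100)
sum_shares = [(a + b) % PRIME for a, b in zip(a_shares, b_shares)]
assert reconstruct(sum_shares) == 142
```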

In terms of data protection, distributed machine learning is an important option. Here, each end device retrieves the current model and trains it locally with its own data set. Potentially personal data does not have to be sent to a central server. Among other things, this increases privacy, for example in a medical context: hospitals do not share medical data with each other, but can still jointly train the same systems on their own data via distributed machine learning. However, there are also more points of attack when data and AI models are distributed across many systems. Individual computers could be taken over by malicious software, or people within an institution could collaborate with the attackers. If that happens, the overall model can be manipulated.
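
A minimal sketch of this idea, federated averaging over a simple linear model, is shown below in Python with NumPy. The three simulated "hospitals", the model and all parameters are hypothetical; the sketch only illustrates that the server aggregates locally trained parameters while the raw data stays on the clients.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client trains the shared linear model on its own private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # least-squares gradient
        w -= lr * grad
    return w

# Three simulated clients, each with data that never leaves the device.
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=50)
    clients.append((X, y))

# Federated averaging: the server only ever sees model parameters, not data.
# Note that a single compromised client could return manipulated weights
# and skew the averaged model, which is the attack surface mentioned above.
global_w = np.zeros(2)
for _ in range(20):
    local_ws = [local_update(global_w, X, y) for X, y in clients]
    global_w = np.mean(local_ws, axis=0)

print(global_w)  # approaches true_w without pooling the raw data centrally
```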

What has to happen to ensure security in the AI age?

We need to define the concept of security in the AI context more broadly than before. AI decisions enjoy a high reputation and are often seen as neutral and unbiased. In reality, however, they often merely reflect the data used to train the AI systems, and thus also human behaviour, habits and prejudices. This shows that more attention needs to be paid to social factors in the development of AI systems. The impact of AI systems on our society also needs to be examined more closely. While AI applications for the financial market, in medicine or in the legal field are obviously recognisable as critical applications that need to be comprehensively analysed and reviewed, the consequences of other AI applications, such as the recommendation algorithms of Facebook, Twitter and Google, are easily overlooked. These create the echo chambers that are changing our societies. I am not worried about Terminators, but about the insidious impact of social media on democratic countries and their electoral systems. AI holds many opportunities for business and society, but we will only be able to realise its full potential if we develop and use the technology in a secure, privacy-protecting and ethically responsible way.
