Study warns: the use of artificial intelligence and humanoid robots is advancing faster than expected – with far-reaching consequences for data protection, security and governance
Artificial intelligence (AI) is no longer a topic for the future – it will become standard in offices and production facilities in the coming years. A new study by the Bonn Business Academy (BWA) and the Diplomatic Council (DC) shows how profound this change will be. But as AI penetrates the economy and the world of work at an accelerating pace, the risks grow with it: cyber attacks, data misuse and unclear responsibilities threaten to become the biggest challenges of the coming decade.
AI is becoming part of everyday office life – security must grow with it
By 2027 at the latest, AI will be ubiquitous in offices – comparable to the use of Microsoft Office today. Systems that write texts, prepare decisions or control personnel processes will become routine.
But the more deeply AI is integrated into operational processes, the more sensitive the data it processes becomes. According to the study, the majority of executives surveyed are convinced that the data protection and security guidelines of many companies are currently not prepared for AI applications.
‘We are experiencing a technological acceleration that security architectures are not yet able to cope with,’ warns Harald Müller, managing director of BWA and head of the study. ‘AI is being incorporated everywhere, but governance structures and control mechanisms are lagging behind.’
Production facilities in transition – AI as a potential target
From 2030 onwards, artificial intelligence is also set to play a key role in industrial manufacturing – from quality testing to process control. This means that the number of potential entry points for cyber attacks is growing rapidly.
Networked production systems, adaptive robotics and cloud-based controls make the industrial attack surface larger than ever before. A manipulated algorithm or compromised training model can cause production downtime, data leakage or even physical damage.
Security managers are thus faced with a new task: AI systems must be not only protected, but also verifiable and traceable. Transparent decision-making logic, audit trails and continuous monitoring are becoming central elements of secure AI integration.
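As a purely illustrative sketch (not part of the study), an audit trail for AI decisions can be as simple as an append-only log that records, for each decision, which model version acted on which input and with what result. The function and field names below are assumptions chosen for the example, not requirements from the study or any regulation.

```python
# Illustrative sketch: an append-only audit log for AI decisions.
# All names (log_ai_decision, field names, file name) are hypothetical.
import hashlib
import json
from datetime import datetime, timezone

def log_ai_decision(model_id: str, model_version: str,
                    input_data: dict, decision: str, audit_file: str) -> str:
    """Append one auditable record per AI decision and return its hash."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_id": model_id,
        "model_version": model_version,
        # Store a hash of the input rather than raw (possibly personal) data.
        "input_hash": hashlib.sha256(
            json.dumps(input_data, sort_keys=True).encode()
        ).hexdigest(),
        "decision": decision,
    }
    line = json.dumps(record, sort_keys=True)
    record_hash = hashlib.sha256(line.encode()).hexdigest()
    with open(audit_file, "a", encoding="utf-8") as f:
        f.write(line + "\n")
    return record_hash

# Example: record a quality-control verdict made by a vision model.
log_ai_decision("qc-vision", "2.4.1",
                {"part_id": "A-1042", "defect_score": 0.93},
                "reject", "audit_log.jsonl")
```

Even such a simple log makes decisions reconstructable after the fact: who (which model version) decided what, when, and on which input, without retaining the raw personal or production data itself.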
Humanoid robots – physical risks from digital intelligence
The next stage of development is particularly disruptive: humanoid robots that act and learn independently thanks to embedded AI. Around a third of the experts surveyed in the study expect such systems to be widely used by 2040 – in manufacturing, logistics and the service sector.
This will merge digital and physical risks: faulty algorithms, manipulated sensors or takeover by malware could trigger direct physical hazards for people and equipment.
Safe operation therefore requires comprehensive safety-by-design strategies – from secure communication protocols and redundant emergency shutdowns to AI-specific safety standards, such as those currently being developed by the EU and ISO.
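To make the idea of redundant, fail-safe interlocks concrete, here is a minimal hypothetical sketch, not drawn from the study or from any specific standard: a robot command is executed only if every independent safety channel reports a safe state. All class and method names are illustrative assumptions.

```python
# Illustrative sketch of a redundant software interlock ("safety by design"):
# a command runs only if all independent safety channels agree it is safe.
class SafetyMonitor:
    """One independent safety channel, e.g. a hardware e-stop or watchdog."""
    def __init__(self, name: str):
        self.name = name
        self.emergency_stop = False

    def is_safe(self) -> bool:
        # In a real system this would read sensor or watchdog signals.
        return not self.emergency_stop

class RobotArm:
    def __init__(self, monitors):
        self.monitors = monitors  # redundant, independent safety channels

    def execute(self, command: str) -> bool:
        # Fail safe: refuse to move unless *every* monitor reports safe.
        if all(m.is_safe() for m in self.monitors):
            print(f"executing: {command}")
            return True
        print("blocked: emergency stop active")
        return False

arm = RobotArm([SafetyMonitor("hardware_estop"), SafetyMonitor("perception_watchdog")])
arm.execute("move_to(pick_station)")
```

The point of the sketch is the fail-safe default: if any channel is uncertain or triggered, the system stops rather than acts, which is the software counterpart of a hard-wired emergency shutdown.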
Governance and regulation as central guardrails
The study clearly shows that technological progress is outpacing political and regulatory control. Many companies have neither clear guidelines on the use of AI nor defined responsibilities in the event that systems malfunction.
In view of the EU AI Regulation (AI Act) and upcoming NIS2 extensions, companies will in future have to document how their AI models make decisions, what data they process and how they minimise security and data protection risks.
‘AI must not become a black box system in operation,’ emphasises Müller. ‘Companies must already create structures to audit, certify and, if necessary, shut down AI applications.’
Responsibility between IT, management and employees
Another finding of the study is that both employers and employees recognise that control over AI systems cannot be solely a technical task. It affects the entire organisation – from the IT department to human resources management.
While automation and robotics make work processes more efficient, new ethical and social questions arise:
- Who is responsible when AI makes wrong decisions?
- How is employee data protected when AI analyses or evaluates it?
- What rights do employees have when they work with or under AI systems?
Companies would be well advised to answer these questions before deploying large-scale AI applications – and to integrate them into operational agreements and IT security strategies.
AI needs security before it becomes part of everyday life
The study by the BWA and the Diplomatic Council makes it clear that artificial intelligence will gain a foothold in the economy and the world of work faster than many expected. This increases the responsibility to consistently consider security and control mechanisms. Anyone introducing AI systems in sensitive areas must treat data protection, cyber security, traceability and governance as equally important factors. This is the only way to prevent the next wave of technological innovation from becoming the biggest security breach in industrial history.
Footnote:
The assessments presented in this article are based on the current study by the Bonn Business Academy (BWA) and the Diplomatic Council (DC) on the impact of artificial intelligence and robotics on the world of work. The editorial evaluation was carried out independently by the editorial team.