AI sets the pace: The most important IT trends for 2026

December 19, 2025

The dynamic nature of the IT world is creating an environment in which opportunities and risks are more closely intertwined than ever before. New opportunities are emerging, from significantly increased productivity to completely reimagined business models, alongside serious challenges for security, governance and ethics. In 2026, the use of artificial intelligence will reach a new stage of maturity, in which AI is no longer viewed as a separate technology but as a fundamental layer of modern IT architectures. Eduardo Crespo, Vice President EMEA at PagerDuty, sees this development as the starting point for the most important predictions that will shape the IT year 2026:

Europe will once again become the engine of growth

Even though growth forecasts for 2026 are moderate, Europe has every chance of becoming an engine of growth again and countering the dominance of the USA. If inflation continues to stabilise and, as predicted, levels off at around 2% over the next two years, European companies will follow through on previously postponed investments in new technologies such as GenAI and AI agents, giving the digital transformation in Europe additional momentum. Europe can provide fresh impetus for growth in 2026 and beyond, and assume a leading economic role, if it succeeds in creating a political framework for greater competitiveness, increasing investment and minimising external risks such as trade conflicts.

Companies are revising their HR strategies

The use of AI is fundamentally changing what is expected of employees, upending at a breathtaking pace a world of work we have known for decades. In 2026, however, it will become apparent that some companies acted too hastily and laid off valuable employees prematurely. They will bring back some of those employees and possibly hire new candidates with specific AI skills in order to realise and expand their ambitious goals for AI-supported systems. Continuous training and development of employees will play a key role in this process.

DORA violations – from monitoring to enforcement

The Digital Operational Resilience Act (DORA), which has been in force throughout the EU since January 2025, sets out clear rules for financial sector players on the management of IT risks – including those posed by third-party providers – the reporting of incidents, the performance of security tests and the exchange of information on protection against cyber risks.

DORA is designed to ensure that breaches of digital operational resilience requirements are consistently penalised, in order to safeguard the stability of the financial sector and prevent reputational damage. The regulation provides for a strict system of sanctions and heavy fines; in Germany, BaFin is responsible for supervision and enforcement. German and European companies must therefore ensure that their operational resilience not only meets the required standard but also proves responsive in practice. Since DORA came into force, supervisors have initially focused on monitoring, auditing and issuing warnings. This is likely to change in the coming year: in 2026, in cases of repeated or serious violations, the authorities will increasingly apply the full sanctions framework, including fines.

Ethics and regulation shape the next phase of AI adoption

The more connected companies become and the more deeply AI-supported systems are integrated into everyday work, the clearer the need becomes to guide their further development along ethical lines. In 2026, European governments and companies will therefore place greater emphasis on the ethical use of AI in order to secure the trust of their customers. Even as AI models act with increasing autonomy, it must always be clear which human authority is responsible for decisions and how misconduct or bias can be corrected.

Companies will ensure that employees have sufficient AI expertise to manage risks responsibly: transparency, fairness, data protection and protection against discrimination must be guaranteed, because AI systems can reinforce existing biases or process highly sensitive data. Regulations such as the EU AI Act set minimum standards here, but the rules will continue to evolve to keep pace with emerging challenges.

The path to a fully AI-driven enterprise

One forecast reaches beyond 2026 and concerns how far the use of AI tools in enterprises will go. AI agents, which are working ever more closely with humans and improving daily as a result, will in the near future orchestrate all central operational business processes, from product development and robot control to customer interactions. AI will play an increasingly important role not only operationally but also strategically, for example in developing innovations or shaping the customer experience. Nevertheless, trust in AI will remain essential for successful, AI-driven companies in the years to come.

Eduardo Crespo is VP EMEA at PagerDuty. He has more than 20 years of experience in investment banking, strategy consulting and innovative cloud and software solutions. Most recently, he was part of the management team at Medallia, a leading SaaS company in the field of experience management, which he supported from its Series C financing through to its IPO on the NYSE.
