Security Forecast 2025: AI agents will revolutionise technical processes and human roles

January 16, 2025

By Rahul Yadav, Chief Technical Officer, Milestone Systems

As a technology leader who has been working at the intersection of artificial intelligence (AI) and video surveillance for years, I have witnessed numerous transformative changes in our industry. However, none of these changes comes close to what awaits us in 2025. We are on the cusp of fundamental disruptions that will change not only how we view security technologies, but also how we interact with AI across all industries. The marriage of advanced AI capabilities with practical applications is creating unprecedented opportunities for innovation and efficiency.

The age of Agentic AI

The most significant transformation we face is being called the ‘age of Agentic AI’. Unlike traditional AI systems that perform predefined steps, AI agents are autonomous systems that understand context, make decisions and can act on their own. These agents – similar to, but far more advanced than, today’s chatbots – use generative, learning-based approaches instead of static programming. By 2025, we will see these agents in various products and services, from video analytics to automated security response.

Think of AI agents as digital colleagues that can take on complex tasks without constant human guidance. They can respond to prompts or act autonomously when they recognise relevant situations. Most importantly, they learn from their actions and adapt to new scenarios, much like human operators. In security applications, this means systems that can automatically identify potential threats, coordinate responses and even predict incidents before they occur.
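The perceive–decide–act–learn loop described above can be sketched in a few lines. This is a toy illustration under stated assumptions: the class name, threat scores and feedback labels are hypothetical, not taken from any real product, and the "learning" is a crude threshold adjustment standing in for a trained model.

```python
from dataclasses import dataclass, field

@dataclass
class SecurityAgent:
    """Toy agent: observes events, decides, acts, and adapts a threshold."""
    alert_threshold: float = 0.8
    history: list = field(default_factory=list)

    def perceive(self, event: dict) -> float:
        # In a real system this score would come from a learned threat model.
        return event.get("threat_score", 0.0)

    def decide(self, score: float) -> str:
        return "dispatch_response" if score >= self.alert_threshold else "log_only"

    def act(self, event: dict) -> str:
        action = self.decide(self.perceive(event))
        self.history.append((event, action))
        self.adapt(event)
        return action

    def adapt(self, event: dict) -> None:
        # Crude stand-in for learning: raise the bar after operator-reported
        # false alarms, lower it after confirmed incidents.
        feedback = event.get("operator_feedback")
        if feedback == "false_alarm":
            self.alert_threshold = min(0.95, self.alert_threshold + 0.05)
        elif feedback == "confirmed":
            self.alert_threshold = max(0.5, self.alert_threshold - 0.05)

agent = SecurityAgent()
print(agent.act({"threat_score": 0.9, "operator_feedback": "confirmed"}))  # dispatch_response
```

The point of the sketch is the shape of the loop, not the rules: each action feeds back into the agent's future decisions, which is what separates an agent from a static, pre-programmed analytic.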

The real power of these agents lies in their reasoning and adaptability. Unlike traditional applications that must be explicitly programmed for each scenario, these systems understand context and make nuanced decisions. This capability will transform everything from access control to emergency response, creating smarter and more responsive security environments.

Beyond reasoning: the age of acting AI

We are witnessing a critical shift in the evolution of artificial intelligence, from systems that merely analyse to systems that thoughtfully act. While IQ measures cognitive ability and EQ measures emotional awareness, a new capability is emerging: the ability to act intelligently and autonomously, the Action Quotient (AQ). Consider Tesla's self-driving cars, which not only analyse road conditions but can also navigate smoothly through complex traffic scenarios in real time.


This transition to action intelligence is particularly relevant for security operations. Traditional surveillance systems alert operators to potential problems, with each response requiring human intervention. In contrast, sophisticated AQ systems can assess situations, initiate appropriate responses and adapt their actions to changing conditions. This capability will revolutionise our approach to security management, making systems more proactive and less reliant on constant human supervision.

The implications go well beyond simple automation. These systems will be able to orchestrate complex responses across multiple sub-systems, from access control to emergency communications, enabling more comprehensive and effective security solutions. The key is that these actions are not just pre-programmed responses, but intelligent decisions based on real-time analysis and learned patterns.
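The orchestration idea above can be sketched as a simple dispatcher that fans one incident out across several subsystems. All names here (the subsystem hooks, the playbook keys, the zone labels) are hypothetical illustrations; a real deployment would call vendor APIs, and an agentic system would select and adapt the steps at runtime rather than read them from a static table.

```python
from typing import Callable, Dict, List

# Hypothetical subsystem hooks; real systems would call vendor APIs here.
def lock_doors(zone: str) -> str:
    return f"access-control: zone {zone} locked"

def notify_operators(zone: str) -> str:
    return f"comms: operators alerted for zone {zone}"

def reposition_cameras(zone: str) -> str:
    return f"vms: PTZ cameras trained on zone {zone}"

# Static playbooks stand in for the runtime decisions an AQ system would make.
PLAYBOOKS: Dict[str, List[Callable[[str], str]]] = {
    "intrusion": [lock_doors, notify_operators, reposition_cameras],
    "loitering": [reposition_cameras, notify_operators],
}

def orchestrate(incident_type: str, zone: str) -> List[str]:
    """Fan one incident out across every relevant subsystem."""
    return [step(zone) for step in PLAYBOOKS.get(incident_type, [])]

for line in orchestrate("intrusion", "B2"):
    print(line)
```

Even in this static form, the design choice is visible: the response is composed per incident from independent subsystems, rather than hard-wired into any single one of them.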

The human factor

Despite these technological advances, human roles are not disappearing – they are evolving. As Microsoft CEO Satya Nadella aptly noted, ‘It’s not AI that will replace you, but someone using AI.’ Success in 2025 and beyond will depend on how effectively we learn to work with these AI systems to augment our abilities, rather than entirely replace them.

Consider how programming has evolved: today, even young students can create sophisticated programs using AI-assisted tools. This democratisation of technology doesn’t eliminate the need for human expertise; rather, it elevates our role from routine tasks to higher-level decision making and oversight. Security professionals will need to develop new skills that focus on managing and directing AI systems, rather than performing routine control tasks.

The key to success will be learning to work with AI as a partner, not just a tool. Because the better we work together, the smarter and faster we all become. Humans excel at understanding context, making nuanced judgements and dealing with unexpected situations – skills that will become all the more valuable as routine tasks are automated.

The development of AI models

The landscape of AI is becoming increasingly sophisticated and specialised. We are seeing the emergence of three important model types: small language models (SLMs) for specific applications, vision language models (VLMs) designed for video processing, and large multimodal models (LMMs) that can process multiple data types simultaneously.
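One way to make the three-way taxonomy concrete is a routing function that picks a model class from the input modalities. This is purely illustrative, assuming a simple rule of thumb (text-only tasks to an SLM, vision-plus-language to a VLM, everything richer to an LMM); real systems would also weigh latency, cost and accuracy.

```python
def pick_model(modalities: frozenset) -> str:
    """Route a request to a model class by its input modalities (illustrative only)."""
    if modalities == frozenset({"text"}):
        return "SLM"  # small language model: lightweight, narrow text task
    if modalities <= frozenset({"text", "image", "video"}):
        return "VLM"  # vision language model: video/image understanding with language
    return "LMM"      # large multimodal model: text, vision, audio, sensor data, ...

print(pick_model(frozenset({"video", "text"})))  # VLM
```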

This development represents a shift from traditional analytics to more comprehensive, learning-based systems. These models do not merely follow pre-programmed rules; they learn from each incident and improve their responses over time. This is particularly important for smart city applications, where systems need to process and understand multiple data types simultaneously.

At the same time, the underlying computing infrastructure is changing. We are moving away from traditional CPU-based processing towards GPU-focused architectures, which is fundamentally influencing how we approach system design and programming. While large technology companies are investing hundreds of millions in training large base models, security companies can use these foundations to develop specialised applications with more modest hardware investments.

A medium-sized security operation can now build effective AI capabilities with an investment of $200,000 to $300,000 in GPU infrastructure – a fraction of the cost required just a few years ago. This democratisation of AI capabilities means that even smaller security organisations can begin implementing sophisticated AI-driven solutions.

Responsible innovation

By 2025, responsible technology development will become a key competitive advantage. However, this doesn’t mean that innovation will be stifled by over-caution. The key is finding the right balance, taking calculated risks while maintaining ethical standards and user trust.

Just as consumers choose trusted brands for their smartphones and personal devices, businesses will increasingly choose security technology partners based on their track record of responsible innovation and ethical AI use. Think about it: Would you trust a self-driving car made by a company with a dubious reputation? The same principle applies to security technologies – ethics and trust are not just ‘nice-to-haves’, but ‘deal-breakers’. This requires the development of clear frameworks for AI governance while maintaining the flexibility to adapt to new technologies and use cases.

Excellent data creates excellent AI

When looking at emerging technologies, it is important to emphasise one essential aspect: excellent AI requires excellent data. Companies that have invested in data quality are already reaping the benefits of that investment, while those without a solid data infrastructure risk falling behind. In 2025, the focus on data quality will become even more important as synthetic data and accelerated computing push the boundaries of what is possible with AI.

The convergence of these trends in 2025 promises to usher in a new era of AI capabilities, where success depends not only on adopting the latest technologies but also on building a solid foundation of data quality and governance. Better data will always give you a competitive advantage in the market.

The future of video management

The video management landscape is undergoing a major transformation. Traditional video management systems (VMS) are evolving from passive recording and playback tools into intelligent platforms capable of automating complex workflows and security measures. This transformation will fundamentally change the way organisations approach security.

Security centres, which previously required large teams of operators, will become efficient, AI-powered environments where human expertise is focused on high-level decision-making and complex situations. Routine tasks such as incident management and reporting will be largely automated, with AI agents taking over initial assessments and responses.

This development does not mean full automation, but rather a more efficient partnership between human operators and AI systems. The key is to find the right balance, with technology taking over routine tasks while human personnel focus on situations that require judgement, empathy and complex decision-making. This shift requires new approaches to training and workforce development, as security professionals adjust to new roles that emphasise system management and strategic oversight.

The security industry is at a pivotal point. The technologies we develop today will not only transform how we approach security challenges, but also how we think about the relationship between human operators and AI systems. Embracing these changes while maintaining our commitment to responsible innovation will enable us to create security solutions that are more effective, more intelligent, and more responsive to the complex challenges of tomorrow.
