The Future of Physical Security in 2026

March 1, 2026

Trust, responsible AI, and open platforms as strategic signposts for an industry in transition

When Andrew Elvish, Vice President of Global Marketing at Genetec, presents the latest industry report, he chooses not to use dramatic language, but rather to provide a precise assessment: the Genetec State of the Industry Report 2026, he says, offers very clear insights into where the industry is heading. Behind this sober assessment lies a comprehensive data basis: more than 7,300 respondents from the integrator channel, the consulting environment and the global end-user base.

The key message: physical security is not facing a one-off innovation, but a structural shift. Technology is becoming more strategic, decisions more long-term – and trust measurable.

Artificial intelligence: priority with a question mark

The dynamic development surrounding AI is particularly striking. Andrew points out that the topic has gained significantly in importance within just one year: it has risen from fifth place last year to second place, overtaking video surveillance as a priority.

This shift is remarkable because video surveillance is traditionally considered a core technology. The fact that AI has overtaken it in terms of prioritisation signals a strategic reorientation. However, Andrew deliberately puts the euphoria into perspective. A crucial question arose repeatedly in the course of the study: What results do security managers want to achieve with AI?

He thereby places the discussion in a practical context: AI is not an end in itself, but a means to an end. His metaphor illustrates this: it is a hammer – the question is what needs to be built.

For decision-makers, this means that it is not the introduction of AI that determines success, but the clear definition of operational added value – such as speeding up investigations, reducing false alarms or improving decision-making processes.

Responsible AI as an operational necessity

The high prioritisation of AI is accompanied by considerable reservations. More than 70 per cent of respondents see unresolved issues that need to be addressed before full integration can take place. Andrew cites key points as examples: How were the training data sets developed? Who owns the training data? And what protection mechanisms have been implemented?

These questions are not abstract. They concern liability risks, regulatory requirements and the long-term usability of systems. Andrew sums up the concern succinctly: No one wants to find themselves in a situation where AI can no longer be used because the training data set is legally or technically invalid.

AI decisions must remain traceable, especially in safety-critical applications, and they must be explainable and verifiable in order to meet regulatory and legal requirements. This means that responsible AI is not an option, but a prerequisite for marketability.

New dimensions of threat posed by AI

Parallel to the integration of AI, threat scenarios are also changing. Andrew makes it clear that classic attack patterns – such as SQL injection – are no longer the sole reference point. Today's threats increasingly involve people manipulating AI itself through clever wording or deception.

This shifts the attack surface: prompt injection, manipulated content or semantic deception can influence operational decisions. Added to this is a structural characteristic of generative models, which Andrew classifies with a quote from his CEO: ‘It has an impressionistic, not a detailed, understanding of things.’

This is a crucial difference when it comes to physical security. Access decisions or alarm processes do not tolerate approximations, but require precision. AI must therefore be controlled, monitored and implemented in an architecturally secure manner.
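To make the shifted attack surface concrete, here is a minimal, hypothetical Python sketch of a keyword-based pre-filter for prompts sent to an AI assistant. The phrase list and function name are illustrative assumptions, not a description of Genetec's products, and string matching alone is not a real defence – it merely shows where the new attack surface sits: in natural language itself.

```python
# Minimal, illustrative pre-filter for prompts sent to an AI assistant.
# NOTE: a keyword list like this is trivially bypassed; real mitigations
# combine input/output filtering, privilege separation and human review.
SUSPICIOUS_PHRASES = [
    "ignore previous instructions",
    "disregard the above",
    "reveal your system prompt",
]

def looks_like_prompt_injection(text: str) -> bool:
    """Flag text containing known injection phrasing (case-insensitive)."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in SUSPICIOUS_PHRASES)
```

Unlike a SQL injection, nothing here is syntactically malformed: the attack payload is ordinary language, which is exactly why semantic deception is hard to filter.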

Trust as a key decision-making criterion

Perhaps the most striking finding of the study concerns not technology, but partnership: 73 per cent of respondents name trust and reliability, in the sense of the provider's long-term stability, as the most important selection criterion.

Andrew emphasises that this value is significantly ahead of classic factors such as price, performance or scalability. For the industry, this is a clear signal: security infrastructures are long-term investments. Providers must demonstrate stability, continuity and reliability – not just innovative strength.

Open platforms as a strategic foundation

In this context, Andrew positions openness as the philosophical bedrock of the company. He takes a critical view of closed systems: closed systems are fragile, risky and unnecessarily expose end users to the risk of technological obsolescence.

The technical classification is clear: proprietary ecosystems increase lock-in risks and limit freedom of choice. Open platforms, on the other hand, enable integrators and end users to combine best-of-breed components and remain flexible in the long term.

His reference to the prisoner’s dilemma strategically underscores this argument: cooperation creates the greatest added value in the long term – both for individual players and for the overall system.

Hybrid architectures as a pragmatic answer

Andrew also advocates differentiation in the cloud debate. Instead of either/or, he emphasises that there is a middle ground.

Many organisations want innovation and updatability, but at the same time they want resilience and control. His vivid description sums up the ambivalence: they want to have their cake and eat it too.

Hybrid models combine local stability with cloud-based innovation speed. For regulated markets – especially in Europe – this balance is increasingly becoming a strategic necessity.

Operational efficiency through intelligent search

Technological advances are particularly evident in the platform's intelligent search function. Andrew vividly describes the difference in investigative work: investigations used to take many hours or even days – today, they take only minutes.

Natural language search, trajectory analysis and AI-supported summaries significantly shorten processes. This makes AI not abstract, but measurably effective.

From device tree to spatial understanding

Another innovative step concerns the structural representation of security environments. ‘You can understand better what happens in the room,’ explains Andrew in connection with new spatial topologies.

Rooms are viewed as functional units in which cameras, sensors and access controls interact logically. This semantic structuring improves context analysis and decision-making processes.
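The idea of rooms as functional units can be sketched as a simple, hypothetical data structure. The class and field names below are assumptions for illustration, not the platform's actual model; the point is that devices are grouped by the space they observe rather than by a flat device tree.

```python
from dataclasses import dataclass, field

@dataclass
class Device:
    device_id: str
    kind: str          # e.g. "camera", "sensor", "access_control"

@dataclass
class Room:
    name: str
    devices: list = field(default_factory=list)

    def devices_of_kind(self, kind: str) -> list:
        # All devices of one type that logically interact in this room.
        return [d for d in self.devices if d.kind == kind]

# A room groups the devices that operate together in one space.
lobby = Room("Lobby", [
    Device("cam-01", "camera"),
    Device("door-01", "access_control"),
    Device("motion-01", "sensor"),
])
```

Querying by room rather than by individual device is what enables the context analysis described above: an alarm from `door-01` can immediately be correlated with the cameras and sensors covering the same space.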

Strategic shift: risk minimisation and competitive advantage

Andrew succinctly sums up the overall development: technology has evolved from pure risk reduction to a strategic advantage.

Security is no longer an isolated protective tool, but an integral part of organisational value creation. IT and physical security are converging, silos are dissolving, and cooperation is becoming a factor for success.

Three key efforts for the future

Three priority areas for action can be derived from Andrew’s analysis:

  • Firstly, the development of responsible, auditable AI with clear training data bases and protection mechanisms.
  • Secondly, the consistent expansion of open, hybrid platform architectures to ensure freedom of choice and resilience.
  • Thirdly, the deep integration of cybersecurity and compliance as structural components of every solution.

Concluding remarks: Strategic clarity and rhetorical precision

What sets Andrew’s presentation apart is the combination of market data, detailed technical knowledge and strategic argumentation. His quotes are not buzzwords, but anchor points of consistent logic. He combines analytical evidence with understandable images without resorting to simplifications. The metaphor of the hammer or the reference to the prisoner’s dilemma are not rhetorical gimmicks, but structuring elements of his argumentation.

His expertise is evident in the fact that he does not view innovation in isolation, but embeds it in regulatory, economic and operational contexts. Especially in an industry that is caught between the pressure to innovate and the responsibility to ensure security, this sober, strategic clarity acts as a stabilising factor.

The future of physical security will therefore not be determined by technology alone, but by the ability to use it responsibly, openly and sustainably in the long term. [DCM]

THE REPORT

Genetec’s State of the Industry Report 2026 is based on more than 7,300 respondents from the integrator channel, the consulting environment and the global end-user base, and shows a significant strategic shift in the physical security industry.

Key finding: Artificial intelligence has risen from fifth to second place in investment priorities within a year, overtaking video surveillance. At the same time, it is clear that AI is not an end in itself, but rather a means to achieve concrete results in the security workflow.

Responsible AI and transparency are the focus. Over 70 per cent of respondents see a need for clarification before full implementation – for example, with regard to training data, property rights, protection mechanisms and auditability.

Another key finding concerns the choice of supplier: 73 per cent of end users prioritise trust and long-term stability of the manufacturer. Partnership and reliability are thus becoming more important than short-term promises of innovation.

Strategically, Genetec focuses on openness and cooperation within a broad technology ecosystem. Open platform architectures are designed to avoid lock-in risks and enable maximum freedom of choice. At the same time, cybersecurity remains a core component of any security architecture. AI expands the attack surface, for example through prompt injection or manipulative content. Systems must therefore be secured against new threat vectors and AI components must be integrated in a controlled manner.

Technologically, 2026 will be dominated by operational efficiency. Intelligent search, logo and object recognition, trajectory analysis, AI-supported summaries and space-oriented security logic will reduce investigations from hours or days to minutes.

Overall, the report shows an industry in transition: from isolated risk minimisation to strategically integrated security, from closed systems to open platforms, from technological enthusiasm to responsible implementation.

A clear trend is emerging for 2026: security is not only becoming more intelligent, but also more collaborative, transparent and long-term.
