Artificial intelligence is no longer a topic of the future in the Security Operations Centre (SOC).
While early automation solutions primarily took over repetitive tasks, so-called multi-agent systems (MAS) mark a new evolutionary stage. In a recent assessment, Theus Hossmann, Chief Technology Officer at Zurich-based MXDR provider Ontinue, explains how these systems actually work, offering a rare insight into the operational logic of AI-supported incident response.
From tool to digital analyst team
Multi-agent systems consist of highly specialised AI agents that work together in a division of labour. Unlike classic automation workflows, which respond based on rules, MAS independently break down complex security incidents into structured sub-processes – similar to a human SOC team with clearly distributed roles.
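To make this division of labour concrete, here is a minimal sketch in Python; the agent roles and their one-line behaviours are illustrative assumptions, not Ontinue's actual architecture:

```python
from dataclasses import dataclass, field

@dataclass
class Incident:
    alert: str
    entities: list
    findings: dict = field(default_factory=dict)

# Hypothetical specialist roles; a real MAS would back each agent with
# its own model, tools and telemetry access.
AGENT_ROLES = {
    "triage": lambda inc: f"classified alert '{inc.alert}'",
    "enrichment": lambda inc: f"enriched {len(inc.entities)} entities with context",
    "correlation": lambda inc: "correlated with prior incidents",
    "reporting": lambda inc: "drafted report with recommendations for action",
}

def investigate_incident(incident: Incident) -> Incident:
    """Break one incident into role-specific sub-tasks, like a tiered SOC team."""
    for role, agent in AGENT_ROLES.items():
        incident.findings[role] = agent(incident)
    return incident

result = investigate_incident(
    Incident("suspicious PowerShell spawn", ["host-42", "user-jdoe"])
)
```

Each "agent" here is a stub; the point is the structured decomposition of one incident into specialised sub-processes, rather than rule-based if-then automation.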
According to Hossmann, such systems now take on Tier 2 and Tier 3 investigations, generate analysis reports and formulate recommendations for action. While human analysts often need 30 minutes to several hours to work through a complex incident, AI agents deliver reliable initial results within minutes. The operational leverage lies less in speed alone and more in the structured reproduction of proven analysis processes.
Hypothesis-driven instead of reactive
The investigation process begins with hypothesis formation. Based on initial alerts, affected entities and contextual information, the agent formulates an initial assumption about the possible incident. This approach mirrors the working method of experienced analysts: they too do not work purely on the basis of data, but develop assumptions that they test systematically.
Based on this, the system creates an investigation plan. It validates or rejects hypotheses, adds alternative scenarios and dynamically adapts the plan to new findings. The ability to take context into account – for example, previous incidents or typical attack patterns in the respective IT environment – is crucial here.
Empirical knowledge as digital memory
A central element of modern MAS is ‘memory’. The systems analyse previous processing of comparable cases and draw on documented decision-making processes of human analysts. This reproduces not only technical indicators, but also proven test steps and interpretations.
This approach reduces inconsistent assessments and increases the standardisation of complex analyses. At the same time, transparency remains a key criterion: the systems document in a traceable manner which data sources, queries or API calls were used to obtain evidence.
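A toy version of such a case memory, paired with a traceable evidence log; the similarity measure (`difflib`) and the record fields are assumptions, not the vendor's implementation:

```python
import difflib

# Documented processing of earlier, comparable cases (illustrative records).
case_memory = [
    {"summary": "phishing mail with credential harvester",
     "steps": ["check sender domain", "inspect URL reputation"]},
    {"summary": "ransomware beacon from workstation",
     "steps": ["isolate host", "review EDR process tree"]},
]

evidence_log = []  # every lookup is recorded so conclusions stay traceable

def recall(new_summary: str) -> list:
    """Return the proven test steps from the most similar past case."""
    best = max(case_memory,
               key=lambda c: difflib.SequenceMatcher(
                   None, new_summary, c["summary"]).ratio())
    evidence_log.append({"source": "case_memory",
                         "query": new_summary,
                         "matched": best["summary"]})
    return best["steps"]

steps = recall("phishing mail targeting finance team")
```

The evidence log is the transparency mechanism: every conclusion can be traced back to which stored case (or, in a real system, which data source or API call) produced it.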
Continuous reflection and adaptation
During the investigation, the agents continuously reflect on their interim results. New evidence leads to the refinement or reweighting of hypotheses. Advanced systems such as Ontinue's 'Autonomous Investigator' also integrate explicit feedback from analysts and implicit usage signals to adapt their decision-making logic in real time.
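How explicit and implicit feedback could shift such a system's priors over time, as a deliberately simplified sketch (the exponential-moving-average update and the weighting are assumptions, not Ontinue's actual method):

```python
class AdaptiveScorer:
    """Adjusts hypothesis priors from analyst feedback. Explicit feedback
    (confirm/reject) is weighted more heavily than implicit usage signals."""

    def __init__(self, explicit_weight: float = 0.3, implicit_weight: float = 0.1):
        self.weights = {"explicit": explicit_weight, "implicit": implicit_weight}
        self.priors: dict = {}

    def prior(self, hypothesis: str) -> float:
        return self.priors.get(hypothesis, 0.5)  # neutral starting belief

    def feedback(self, hypothesis: str, confirmed: bool,
                 kind: str = "explicit") -> None:
        alpha = self.weights[kind]
        target = 1.0 if confirmed else 0.0
        # Exponential moving average toward the feedback signal.
        self.priors[hypothesis] = (1 - alpha) * self.prior(hypothesis) + alpha * target

scorer = AdaptiveScorer()
scorer.feedback("credential theft", confirmed=True)  # analyst confirms
scorer.feedback("benign admin activity", confirmed=False, kind="implicit")
```

Because the priors persist per hypothesis, repeated feedback from one environment gradually specialises the scorer to that environment's typical incidents.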
This creates a learning system that adapts to individual IT environments – a crucial factor in heterogeneous corporate landscapes with widely varying risk profiles.
Automation with clear limits
According to Ontinue, up to 97 per cent of all incidents were resolved without human intervention last year. Nevertheless, Hossmann warns against overestimating the technology. ‘Even though AI provides massive relief, there are still cases where human analysts are indispensable,’ he emphasises. Complex strategic assessments, company-specific considerations or communicative decisions cannot be fully automated.
The real strength of multi-agent systems therefore lies less in completely replacing experts than in scaling scarce resources. At a time when security teams are confronted with an ever-growing number of alerts, data sources and threat scenarios, AI achieves one thing above all else: operational breathing space.
Conclusion
Multi-agent systems are fundamentally changing the way modern SOCs work. They operate in a hypothesis-driven, context-sensitive and adaptive manner, thereby technically replicating key elements of human analysis processes.
As Theus Hossmann explains, AI in cybersecurity is no longer an optional add-on, but a structural response to overload and complexity. In the future, it will be crucial to sensibly integrate automation and human expertise – not as competitors, but as an integrated defence model against an increasingly dynamic threat landscape.