By Marco Eggerling, CISO EMEA at Check Point Software Technologies
The year 2023 could go down in history as the year of artificial intelligence (AI) – or at least the year that businesses and consumers alike raved about generative AI tools, like ChatGPT. IT security solution providers are not immune to this enthusiasm. At RSA Conference 2023, one of the leading international trade conferences in IT security, the topic of AI came up in almost every presentation – and for good reason. AI has enormous potential to transform the industry.
Our security researchers have already observed the use of AI by hackers, who use it to create convincingly realistic phishing emails and to accelerate malware development. The good news is that defenders are also using AI and incorporating it into their security solutions, because AI can automatically detect and prevent cyber attacks. For example, it can stop phishing emails from ever reaching the inbox. It can also reduce the time-consuming false alarms that plague IT teams and tie up manpower that would be better spent elsewhere.
However, amid all the talk about artificial intelligence, it can be difficult to distinguish justified enthusiasm from marketing gimmicks. As with any new technology, there is a learning curve, and it varies from organisation to organisation and from user to user. Many companies are only now adding AI features; others have moved faster and already use them in their daily work. But anyone responsible for protecting an organisation from the ever-growing threat landscape cannot avoid thoroughly evaluating new technologies before they are deployed.
So what should CISOs look for when considering whether to include AI in their IT strategy? I recommend approaching AI like a candidate for a job: you need to assess its effectiveness, usability and trustworthiness. The following three questions will help guide you:
1. How is AI being used to improve IT defences?
One of the advantages of AI is its creativity and its ability to make previously unknown – but meaningful – decisions. In 2016, Google DeepMind's AI AlphaGo beat reigning Go world champion Lee Sedol. Go is an ancient and extremely complex strategy game from Asia. During the game, AlphaGo made a move that puzzled Go experts, who initially thought it was a strange mistake. But that move, which became known as move 37, was actually the turning point of the match, as Sedol could not counter it. It was a move that a human being might never have thought of.
A security solution must therefore use AI in such a way that it prevents threats that other providers cannot detect.
2. What can the AI solution really do?
Given the current popularity of AI, many companies are rushing to add AI capabilities to their products, or at least features they label as such. But in the current economic climate, CISOs need to make their operations more efficient and justify their budgets more rigorously than ever. There is no reason to pay for AI capabilities that offer no value. Third-party validation of a purported AI solution's capabilities will show whether it actually delivers or is just hot air.
3. Can AI technologies be relied upon?
AI models are only as good as the quality and quantity of the data they are trained on. According to Stanford professor James Zou, one of the best ways to improve the trustworthiness of an algorithm is to improve the data it is trained with. A good AI solution provides real-time threat updates and already has a large customer base – after all, the more customers, the more training data is available to the AI.
With the increasing speed and sophistication of cyber attacks, CISOs need every advantage they can get to protect sensitive corporate data and the workforce. AI can provide a huge one, as long as the solutions deployed are trustworthy – not products that merely ride the hype, but ones that deliver real value and benefit.