
From Detection to Foresight: The Coming Age of Predictive Cyber Intelligence

  • Writer: Manish Yadav
  • 2 days ago
  • 7 min read

Updated: 7 minutes ago


When Algorithms Became the Final Line of Defence


The architecture of the digital age is paradoxical. The very technologies that have connected billions of people, driven economies, and transformed everyday life have also exposed us to threats of a scale and sophistication that no human team, however skilled or large, can counter alone. Adversaries develop, refine, and launch hundreds of thousands of new malicious files every day, most of them now written with the same artificial intelligence tools defenders use. This is not a forecast of what is to come. It is a description of the present.

The cybersecurity market's reaction to this reality, however, has been predictably noisy. Vendors attach the term "AI-powered" to products with the reflexive enthusiasm of a marketing team anxious about relevance. The noise is deafening. The signal is typically thin. Behind the language, though, there is a meaningful and consequential difference between organisations that have bet on AI as an underlying architecture and those for which it is an elaborate feature bolted onto something more basic.

Kaspersky is firmly in the first camp, and has been since 2004. Long before AI became the technology industry's favourite buzzword, machine learning was a central element of the detection systems Kaspersky built into its structural core. That decision, made quietly and without much fanfare twenty years ago, has shaped everything the company does today.

The context matters. In 2004, cybersecurity was a game of signatures: analysts identified malicious code, catalogued its signature, and pushed updates to endpoints. It worked, as long as threats arrived in manageable volumes. The model failed when malware creation began accelerating faster than any team could track. It broke.

“The inflection point came when we understood that the amount of new malware being created exceeds any human ability to handle it,” says Jaydeep Singh, General Manager for India at Kaspersky. “We built machine learning in as part and parcel of our detection architecture, rather than an additional feature. Everything we have done since has been shaped by that early decision.”

The returns on that commitment are now measurable. Kaspersky's machine learning patents increased nineteen-fold between 2019 and 2022, a reflection not only of research volume but of the originality of the solutions produced. By 2024, targeted improvements to its ML models had increased Advanced Persistent Threat detection by 25 per cent. And by 2025, as the company's own Security Bulletin reports, its AI-based systems were handling roughly half a million distinct malicious files daily.

That last figure is worth sitting with. Five hundred thousand threats every day. It is the kind of number that makes purely human defence not merely insufficient, but structurally impossible.


Architecture Over Advertising


When evaluating AI integration, look past the marketing hype to the underlying mechanisms. In Kaspersky's case, the architecture is multi-layered. Supervised machine learning is trained on known malicious behaviour, allowing it to recognise even heavily disguised variants of previously seen attacks; unsupervised learning builds a fine-grained baseline of normal behaviour and then flags anomalies. The latter method is what enables detection of zero-day threats: attacks with no prior signature, observed for the first time.
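The baseline-and-anomaly idea can be sketched in a few lines. This is an illustration only, not Kaspersky's implementation: a toy detector that learns the mean and spread of a "normal" metric (here, a hypothetical per-minute connection count) and flags values that deviate sharply from it.

```python
from statistics import mean, stdev

def build_baseline(samples):
    """Learn a simple baseline (mean and standard deviation) from normal telemetry."""
    return mean(samples), stdev(samples)

def is_anomalous(value, baseline, threshold=3.0):
    """Flag values more than `threshold` standard deviations from the baseline mean."""
    mu, sigma = baseline
    if sigma == 0:
        return value != mu
    return abs(value - mu) / sigma > threshold

# Illustrative data: per-minute outbound connection counts from one host
normal_traffic = [12, 15, 11, 14, 13, 16, 12, 15, 14, 13]
baseline = build_baseline(normal_traffic)

print(is_anomalous(14, baseline))  # typical volume -> False
print(is_anomalous(90, baseline))  # sudden spike  -> True
```

Real systems model many features jointly and adapt the baseline over time, but the principle is the same: no signature is needed, only a learned sense of what "normal" looks like.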

Above these base layers sit deep neural networks trained to detect spam and phishing, domains where language itself is the adversary's weapon. Subtler still, the systems examine not what a file looks like, but what it does. How code executes. How network traffic flows. Which system calls are invoked. In an era when AI-generated malware can be crafted to look like legitimate software, this behavioural focus is not a design preference. It is a strategic necessity.
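One minimal way to picture a "behavioural fingerprint" is as n-grams of observed system calls: two samples that look nothing alike on disk can still produce near-identical call sequences at runtime. The sketch below is hypothetical and greatly simplified; the call names and the overlap score are illustrative, not any product's method.

```python
from collections import Counter

def call_ngrams(calls, n=2):
    """Turn an observed system-call sequence into n-gram counts (a behavioural fingerprint)."""
    return Counter(tuple(calls[i:i + n]) for i in range(len(calls) - n + 1))

def similarity(fp_a, fp_b):
    """Crude overlap score between two fingerprints: shared n-gram mass over total."""
    shared = sum((fp_a & fp_b).values())
    total = sum((fp_a | fp_b).values())
    return shared / total if total else 0.0

# Two samples that *look* different on disk but *behave* alike at runtime
known_malware = ["open", "read", "connect", "send", "delete", "open", "read"]
repacked_variant = ["open", "read", "connect", "send", "delete"]

score = similarity(call_ngrams(known_malware), call_ngrams(repacked_variant))
print(round(score, 2))  # -> 0.67: high behavioural overlap despite different binaries
```

The point of the toy example: repacking or AI-assisted obfuscation changes a file's appearance, but the behaviour it must perform to achieve its goal still leaves a recognisable trace.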

Singh puts it as plainly as possible: “Our reply is to concentrate on behaviour, not appearance. Even AI-generated threats leave behavioural fingerprints, and our systems are trained on them.” This philosophy is realised in dedicated platforms. The Kaspersky Anti Targeted Attack (KATA) platform uses ML to surface slow, multi-stage campaigns of the kind linked to advanced state actors. Machine Learning for Anomaly Detection (MLAD) extends this to industrial settings, power grids, factories, and critical infrastructure, where cyber-physical telemetry must be monitored continuously. Kaspersky has also embedded AI throughout its EDR, XDR, MDR, and SIEM products, automating alert triage so that analysts see only what deserves their attention, rather than every signal the system generates.
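Automated alert triage of the kind described above amounts to scoring and ranking. As a rough sketch, with entirely hypothetical field names and weights (not any vendor's schema):

```python
# Hypothetical alert records; field names and weights are illustrative only.
alerts = [
    {"id": 1, "severity": 2, "asset_critical": False, "correlated_events": 1},
    {"id": 2, "severity": 5, "asset_critical": True,  "correlated_events": 7},
    {"id": 3, "severity": 3, "asset_critical": True,  "correlated_events": 2},
]

def triage_score(alert):
    """Weighted score: base severity, boosted for critical assets and correlated activity."""
    score = alert["severity"]
    if alert["asset_critical"]:
        score += 3
    score += min(alert["correlated_events"], 5)  # cap so one noisy source can't dominate
    return score

# Surface the queue in priority order so analysts see what matters first
queue = sorted(alerts, key=triage_score, reverse=True)
print([a["id"] for a in queue])  # -> [2, 3, 1]
```

Production systems learn these weights from analyst feedback rather than hard-coding them, but the effect is the same: the noise sinks, and the signals rise to the top of the queue.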

According to Singh, “For vendors that have only recently integrated AI, it is a layer on top of an existing architecture. For Kaspersky, AI is the architecture. It is embedded in endpoint protection, EDR, XDR, MDR, and threat intelligence, not as a feature, but as the intelligence that makes each of those functions work.”

This argument has particular force in the Indian market. India processes billions of digital transactions, onboards hundreds of millions of new internet users, and operates critical infrastructure at scale, all while managing the requirements of the Digital Personal Data Protection (DPDP) Act. The Act's compliance obligations for organisations that process personal data introduce a trade-off security teams must balance: deploying AI-based threat detection thorough enough to be effective, without letting the training data behind it become a liability.

Kaspersky's models are trained on anonymised behavioural data, execution sequences rather than personally identifiable information. And in regulated industries such as BFSI and healthcare, where AI-driven security decisions are subject to audit and legal accountability, the company presents its findings through a visual dashboard with contextualised output that analysts can validate and explain.
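One common pattern for separating behavioural data from identity, sketched here as an assumption rather than a description of Kaspersky's pipeline, is to replace direct identifiers with salted hashes before telemetry ever reaches a training set (strictly speaking this is pseudonymisation; full anonymisation requires more):

```python
import hashlib

def anonymise_event(event, pii_fields=("user", "hostname", "ip")):
    """Replace direct identifiers with salted hashes; keep behavioural fields intact."""
    salt = "rotate-me-per-deployment"  # illustrative; real systems manage salts securely
    cleaned = {}
    for key, value in event.items():
        if key in pii_fields:
            digest = hashlib.sha256((salt + str(value)).encode()).hexdigest()
            cleaned[key] = digest[:12]
        else:
            cleaned[key] = value
    return cleaned

raw = {"user": "alice", "ip": "10.0.0.7", "syscalls": ["open", "connect"], "duration_ms": 42}
safe = anonymise_event(raw)
print("alice" in str(safe))  # -> False: the behaviour survives, the identity does not
```

The design point is that the model only ever needs the behavioural payload (execution sequences, timings), so stripping identity costs nothing in detection power while removing the compliance liability.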

“Trust is never a soft value in a market such as India,” Singh says. “It is a commercial requirement.”


The Human Equation and What Comes Next


No part of the AI-in-cybersecurity debate causes more apprehension than the question of human relevance. If machines can handle half a million threats a day, what is left for people to do?

The practical answer: everything that involves judgment.

AI excels at high-volume, high-speed tasks: triaging alerts, filtering false positives, and recognising patterns in large datasets. These are precisely the operations that have historically buried security teams in noise, consuming time that could go to substantive analysis. What AI cannot replicate is the capacity for complex investigation, contextual judgment, ethical accountability, and the kind of intuitive analysis that a high-stakes, ambiguous scenario demands.

“AI is meant to augment human judgment, not substitute for it,” Singh says. Complex investigations, situational assessment, decisions under uncertainty, and contextual analysis still demand human thought. The result is a deliberate division of labour. AI handles the first tier of work, processing telemetry at scale, surfacing the signals that matter, and supplying the analyst with context before they dive in. Human specialists then bring what no model can emulate: geopolitical interpretation, competing-hypothesis reasoning, and the judgment calls that carry downstream legal weight. As Singh puts it, “The future Security Operations Centre will not be defined by its level of autonomy, but by the degree of human-AI amplification.”

Looking three to five years ahead, two developments stand out as transformative. The first is agentic AI: systems that autonomously gather context, correlate signals across sources, and deliver actionable intelligence with minimal human intervention. The effect will be a dramatic narrowing of the window between detection and insight, the space in which attackers currently move with the greatest freedom.

The second is predictive threat intelligence: AI that does not merely respond to threats already under way, but anticipates them. According to Singh, “Today's models are mostly reactive to threats that exist. Tomorrow's will be more analytical, studying infrastructure and behavioural patterns and the past behaviour of attackers to predict what they will do next. Not correlation, but something closer to foresight.”

Generative AI complicates the picture on both sides. Adversaries are already using it to lower barriers to entry, craft convincing phishing messages, and build polymorphic malware that rewrites itself to evade detection. On the defensive side, Kaspersky is rolling out generative AI to produce structured threat intelligence summaries and to power investigation assistants that help analysts navigate complex intrusions. Crucially, these tools could democratise high-quality intelligence, putting it within reach of mid-sized organisations that currently lack dedicated threat intelligence capabilities, a gap all too visible in India's security landscape.


Architecture Is Strategy


At its core, Kaspersky's artificial intelligence story is the story of an early bet, made with conviction. The 2004 decision to build machine learning into the foundations of its detection architecture, not as an addition but as the architecture itself, declared what kind of organisation it intended to become, not merely in that moment but over the decades that followed.

The results are now hard to miss: a telemetry pipeline handling half a million threats per day, a patent portfolio that multiplied nineteen-fold in three years, a measurable rise in APT detection, and a product line that builds intelligence into every layer of the security stack rather than bolting it on top.

Still, the most important lesson is the one Singh states most plainly: “In a world where every vendor claims to be AI-powered, what matters is not whether AI is included, but how deeply it is integrated, how rigorously it has been tested, and how transparently it can explain the decisions it makes.”

The strategic answer, he says, is to build AI in as a layer of defence, not a feature. Organisations that treat AI security as a checkbox will only grow more exposed.

That divide will only sharpen as India's digital economy keeps expanding and the adversaries it attracts grow more sophisticated. The future of cybersecurity will not be decided by the players who shout loudest about AI adoption. It will be decided by those who understood it most deeply, built it most carefully, and never mistook a marketing claim for a strategic commitment.


Read the Complete Whitepaper
