ImmuniWeb releases its top 10 predictions on novel or key trends in cybersecurity, compliance and cyber law, cybercrime and cybercrime investigation, and the role of AI in these trends.
After the successful launch of its Global Internet Security Statistics Center in November, ImmuniWeb’s cybersecurity experts and engineers, who currently serve over 1,000 enterprise customers from more than 50 countries, compiled their predictions for 2025 based on firsthand practical experience accumulated over many years.
More software will contain vulnerabilities or even backdoors because of AI assistants
With the rapid proliferation of AI-enabled coding assistants for software developers, as well as the uncontrolled use of AI-powered chatbots by software engineers to debug or fix their source code or Infrastructure-as-Code (IaC), we should expect a snowballing quantity of AI-created vulnerabilities or even backdoors. Modern LLMs (Large Language Models) require huge volumes of data for training, eventually greedily ingesting poor-quality, obsolete, vulnerable or even backdoored code. While skilled prompt engineering and attentive human verification of AI-generated code may substantially reduce those risks, sophisticated data-poisoning attacks, combined with a lack of time to properly verify the output of AI tools, will amplify them.
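As a minimal, hypothetical illustration of the kind of flaw frequently seen in AI-suggested code, the snippet below contrasts an injectable, string-concatenated SQL query with its parameterized fix; all table and function names are illustrative, not taken from any real incident.

```python
import sqlite3

def find_user_unsafe(conn, name):
    # User input is concatenated directly into the SQL string -- injectable.
    return conn.execute(
        "SELECT id FROM users WHERE name = '" + name + "'"
    ).fetchall()

def find_user_safe(conn, name):
    # Parameterized query: the driver treats the input as data, not SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (name,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"            # classic injection payload
print(len(find_user_unsafe(conn, payload)))  # returns every row: 2
print(len(find_user_safe(conn, payload)))    # no such user: 0
```

Careful human review would reject the first variant immediately, but under time pressure such suggestions are often accepted verbatim.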
Human error and vulnerable third parties will be among root causes of data breaches
Progressive and oftentimes hasty migration to multicloud environments, without appropriate training of both users and IT professionals, will exacerbate the already disastrous state of cloud misconfigurations, affecting most IaaS, PaaS and SaaS cloud service providers. To make things worse, thoughtless adoption of GenAI technologies – mostly driven by fear of missing out (FOMO) – will inevitably provoke massive losses of the corporate data that corporations are trying to convert into “digital gold” by training or fine-tuning LLMs. Worse still, emerging GenAI startups will likely prioritize disruptive innovation over robust data security, becoming low-hanging fruit for pragmatic and well-organized cyber gangs that focus their meticulously planned intrusions on the weakest link.
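A minimal sketch of the kind of automated check that catches the misconfigurations described above, assuming a generic parsed storage-bucket configuration; the keys and the function are hypothetical and do not mirror any provider's real API.

```python
def audit_bucket(cfg: dict) -> list[str]:
    """Flag common storage-bucket misconfigurations in a parsed config."""
    findings = []
    if cfg.get("acl") in ("public-read", "public-read-write"):
        findings.append("bucket is world-readable")
    if not cfg.get("encryption"):
        findings.append("encryption at rest is disabled")
    if not cfg.get("block_public_policy", False):
        findings.append("public bucket policies are not blocked")
    return findings

# A risky configuration trips all three checks.
risky = {"acl": "public-read", "encryption": None}
print(audit_bucket(risky))

# A hardened configuration passes cleanly.
hardened = {"acl": "private", "encryption": "AES256", "block_public_policy": True}
print(audit_bucket(hardened))  # []
```

Running such checks in the deployment pipeline, rather than relying on post-incident review, is the difference between a finding and a breach.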
Cybersecurity professionals with AI skills will be among the most valued on the market
Implementation of corporate AI governance has quickly become a key priority in large companies and governmental entities, driven both by an understanding of emerging AI risks and by regulatory requirements. Industry professionals with advanced cybersecurity skills, as well as a good understanding of how AI works and of its inherent limits and risks, will enjoy a strong competitive advantage when hired to implement, audit and continually improve AI governance programs. Likewise, collaboration between cybersecurity experts, AI engineers, data scientists and lawyers will become more common, ensuring a holistic approach to the mitigation of AI risks.
A highly complex and diversified attack surface will negate growing cybersecurity spending
Despite globally growing spending on cybersecurity and cyber defense, the ultimate effect will likely be palliative, unable to address the root causes of the steadily growing number of data breaches. Work-from-home (WFH) employees, innumerable third and fourth parties with privileged access to confidential data or business-critical systems, ongoing migration to cloud and hybrid infrastructures, automated container orchestration in production environments, omnipresent mobile and IoT devices, and the now-emerging AI technology stack simply become unmanageable in most companies. Eventually, cybercriminals no longer need expensive zero-day exploits or time-consuming advanced persistent threat (APT) attacks: spotting a single misconfigured Internet-exposed system or cloud instance may suffice to compromise the crown jewels of many large enterprises and public entities.
Compliance fatigue will be a major trend amid mushrooming data protection laws
While lawmakers in many developed countries seem to enthusiastically compete over who will enact more laws on data protection and privacy, corporations are starting to realize that diligent compliance with every single law and regulation is becoming excessively burdensome and economically impractical. As a result of this perceptible overregulation of cyberspace, compliance fatigue – when it is easier, faster and cheaper to pay a fine than to comply with the law – will continue to expand its grim realm. Large companies especially will prefer to “pay to play”, given that an aggressive litigation strategy and smart lawyers may substantially reduce regulatory fines or even help avoid sanctions altogether.
The cybersecurity insurance industry will become more mature yet paradoxically inefficient
After over a decade of unenviable experience and acrimonious litigation with clients over the eventual coverage of virtually every kind of data breach, cybersecurity insurance has become a mature business, leveraging now-available historical data to reliably decide which incidents are insurable and to what extent. Ultimately, many clients of data-driven cyber insurance may face either prohibitively high premiums or the outright exclusion of the most relevant cybersecurity incidents from their coverage. However, when a security risk cannot be mitigated by common security controls – for instance, a legacy IT system with business-critical data that nobody is willing to touch for fear of becoming a scapegoat in case of failure – cybersecurity insurance may be the best way to at least reduce financial losses in case of an incident.
Well-thought-out ransomware attacks will dominate the cyber threat landscape
Despite the increasing number of laws and regulations requiring disclosure of security incidents or ransom payments, a non-negligible number of ransomware victims still understandably prefer to sweep incidents under the carpet to avoid long-lasting reputational damage, loss of revenue and governmental probes. Well-organized cybercrime actors perfectly grasp these subtle nuances when carefully selecting their targets, sometimes picking the most vulnerable but solvent entities, including operators of critical national infrastructure (CNI). Consequently, ransomware is becoming an unbeatably lucrative and safe business: handpicked victims will almost always pay in an untraceable manner and will rarely, if ever, report the intrusion, even when fully aware of the harsh legal consequences and other risks associated with paying a ransom.
GenAI-powered deception, fraud and disinformation campaigns will be a major threat
Freely available access to powerful LLMs enables beginners and technically unsophisticated computer crime actors to launch scalable fraud campaigns over the Internet, ranging from technically primitive human deception or sextortion with AI-generated deepfakes to more sophisticated social engineering and the bypass of biometric authentication mechanisms. Despite the abundance of solutions and security controls designed to prevent these novel vectors of decade-old attacks, most companies are slow to implement them due to other priorities or the necessity to fundamentally change their core business processes. Lastly, state-backed cyber threat actors will very likely turbocharge and amplify their disinformation campaigns on social networks, exploiting the unparalleled speed and power of LLMs to generate false but credible-looking content from millions of bots and fake accounts.
Cyber mercenaries and organized cybercrime will derive little value from GenAI
In contrast to script kiddies and nefarious purveyors of state propaganda, professional cyber mercenaries and organized cybercrime groups will derive little to no value from the GenAI hype. Despite the increasing, albeit progressively slowing, improvement of contemporary LLMs, all publicly available models are still far too basic to perform advanced vulnerability research, create stable exploits for zero-day security flaws, or develop sophisticated malware or backdoors – tasks that all still require expert knowledge and unique human skills. Likewise, GenAI is flatly useless for a plethora of other mission-critical tasks, such as cryptocurrency laundering, mission-specific acquisition of compromised infrastructure to be exploited as a proxy in cyber-attacks, recruitment and vetting of new gang members, or prudent collaboration with friendly gangs.
Investigation and prosecution of cybercrime will become even slower and more difficult
Amid unfolding geopolitical conflicts and overall instability impacting virtually every part of the world, cross-border investigations and judicial assistance in cybercrime cases are becoming an onerous and time-consuming task, sometimes making it nearly impossible to arrest and prosecute criminals in overseas jurisdictions. A ballooning number of sophisticated cyber gangs are relocating to safe-harbor (also known as non-cooperative or non-extradition) countries, where collaboration with the local government makes cybercriminals almost untouchable and, obviously, non-extraditable. Even though 2024 convincingly demonstrated that transatlantic collaboration between EU and US law enforcement agencies (LEAs) and judicial authorities bears fruit, culminating in numerous arrests of cybercrime kingpins and seizures of their assets, the current political uncertainty makes further collaboration fragile.
Use and distribution: you are welcome to use the above content for non-commercial purposes, provided you make a clear attribution to ImmuniWeb, with a backlink to this page where practical.
Press-release: https://www.immuniweb.com/blog/cybersecurity-cybercrime-cyber-law-predictions-2025.html
© 2024 ImmuniWeb