Evolving Cyber Threats in a World of AI: What Can We Expect From 2024?
2023 has been another transformative year for the cyber security industry. The advent of AI has brought significant advantages to security teams, but has equally lowered the barrier to entry for cybercrime. High-profile supply chain attacks such as that on 3CX have made supply chain security a key priority for organisations, whilst global tensions have added a further layer of complexity to the threat landscape.
So where will things go from here? Are we headed towards another year of the same, or are there new threats on the horizon? We spoke to nine security experts to get their insights into what to expect over the year ahead.
Security will remain a top priority
Sadly, the number of security threats is unlikely to fall in 2024. In fact, Christopher Rogers, Senior Technology Evangelist at Zerto, a Hewlett Packard Enterprise company, argues that “2024 will undoubtedly bring even more complex and nuanced cyberattacks.”
“As security teams continue to build new methods and implement technology to fortify against attackers, the threat actors themselves are fighting back just as hard with inventive ways to circumvent them,” he explains. “Previously, when ransomware attackers gained access to their victim’s data, they would either steal or lock it – now they do both. This double extortion tactic now means that, even if an organisation can recover its data, the attackers leak the information regardless. In conjunction with this, we are also seeing more cybercriminals specifically going after the tools that help businesses recover from attacks – making disaster recovery harder than ever.”
Andy Swift, Cyber Security Assurance Technical Director at Six Degrees, agrees that threat actors’ tactics are evolving. He points to the “big rise in attacks against users aimed at bypassing multi-factor authentication (MFA) requirements by proxying the MFA requests in real time between the user, attacker-controlled infrastructure, and the legitimate end location.”
He adds: “This obviously requires knowledge of the credentials to initiate the MFA request (usually extracted via phishing or purchased on the dark web), but once the MFA request has been approved by the legitimate user and proxied onto the legitimate location, the attacker can simply intercept and steal the session token granted after successful authentication. This attack method is usually quite involved and as such is not yet especially prominent. However, as more and more people finally implement MFA, and with numerous phishing-as-a-service frameworks surfacing on the dark web that can automate the entire process at the click of a button, the trend is only set to rise in 2024.”
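To make the mechanics concrete, the sketch below (purely illustrative, and not something described by Six Degrees) shows one simple defensive heuristic against this kind of session-token theft: comparing the client that completed the MFA challenge with the client that later presents the token. The data structure, field names and example values are all hypothetical.

```python
# Illustrative sketch only: flag a session token presented from a different client
# than the one that completed MFA - one possible signal of a proxied/stolen token.
from dataclasses import dataclass

@dataclass
class SessionRecord:
    token_id: str
    mfa_ip: str          # IP address observed when the MFA challenge was completed
    mfa_user_agent: str  # User-Agent observed when the MFA challenge was completed

def looks_like_stolen_token(record: SessionRecord, request_ip: str, request_user_agent: str) -> bool:
    """Return True if the token is being used from a client that does not match the
    one that passed MFA (a heuristic indicator, not proof of compromise)."""
    return request_ip != record.mfa_ip or request_user_agent != record.mfa_user_agent

# Example usage with made-up values (documentation IP ranges):
record = SessionRecord("abc123", "203.0.113.10", "Mozilla/5.0 (Windows NT 10.0)")
if looks_like_stolen_token(record, "198.51.100.7", "Mozilla/5.0 (Windows NT 10.0)"):
    print("Alert: session token reused from an unexpected client - possible session theft")
```

In practice, defenders layer several such signals (device binding, conditional access, token lifetime limits) rather than relying on any single check.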
In response to the ever-increasing complexity of cyber threats, Richard Gadd, Senior VP of EMEA & India at Commvault, predicts that “in 2024, C-suite members will undoubtedly increase their engagement in cyber preparedness.”
“This year, IDC research revealed that only 33% of senior executives are involved in cyber preparedness initiatives, but 61% of those leaders believe an attack could severely impact their business in the next 12 months.”
“With that in mind,” Gadd adds, “there needs to be a fresh focus on data protection and data security, building a well-grounded structure of cyber resilience. This is even more important when we consider that the Digital Operational Resilience Act (DORA) comes into effect in January 2025, giving industry leaders only 12 months to meet its security requirements. This move towards cyber preparedness must come from the top down as a matter of urgency, with C-suite members showing more dedication to building a more reliable and holistic security posture.”
In particular, “identity security will be a top priority in the security landscape as we move into 2024,” according to Gal Helemski, co-founder and CTO at PlainID. “Organisations will be looking to understand, discover and manage their identity security posture (that is, identities and their connections to digital assets) as it becomes the number one risk factor to an organisation’s overall security.”
The advent of AI
This year we’ve seen huge advancements in AI – particularly generative AI – which have the potential to bring significant benefits to the security sector.
“In 2024, AI will better inform cybersecurity risk prevention decision-making,” argues John Stringer, Chief Product Officer at Next DLP. “With AI estimated to grow more than 35% annually until 2030, businesses have swiftly adopted the technology to streamline processes across a variety of departments. We already see organisations using AI to identify high-risk data, monitor potential insider threat activity, detect unauthorised usage, and enforce policies for data handling. Over the next year, AI will power data loss prevention (DLP) and Insider Risk Management (IRM) efforts by detecting risky activity and then alerting IT teams who can analyse their movements and respond accordingly, preventing further cybersecurity issues from arising.”
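As a rough illustration of the kind of risk scoring Stringer describes (not Next DLP’s actual product logic), the sketch below assigns invented weights to hypothetical data-handling events and alerts when a user’s aggregate score crosses a made-up threshold.

```python
# Minimal, hypothetical sketch of insider-risk scoring over data-handling events.
# Event names, weights and the alert threshold are invented for illustration only.
RISK_WEIGHTS = {
    "bulk_download": 40,
    "external_share": 30,
    "usb_copy": 25,
    "after_hours_access": 15,
}
ALERT_THRESHOLD = 60

def score_user(events: list[str]) -> int:
    """Sum risk weights for a user's recent data-handling events."""
    return sum(RISK_WEIGHTS.get(event, 0) for event in events)

def triage(user: str, events: list[str]) -> None:
    """Alert the security team when a user's aggregate risk score crosses the threshold."""
    score = score_user(events)
    if score >= ALERT_THRESHOLD:
        print(f"ALERT: {user} risk score {score} - review recent activity: {events}")

triage("j.doe", ["bulk_download", "external_share"])   # score 70 -> alert
triage("a.smith", ["after_hours_access"])              # score 15 -> no alert
```

Real DLP and IRM tooling would, of course, enrich these signals with behavioural baselines and machine learning rather than fixed weights; the point here is simply the detect-score-alert flow.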
Avkash Kathiriya, Sr. VP – Research and Innovation at Cyware, agrees that AI will bring value to security teams next year: “Moving into 2024, the integration of threat intelligence with technologies such as AI and machine learning is expected to accelerate, fuelled by the continual barrage of increasingly sophisticated cyberattacks. This integration will work to enhance threat prediction and response capabilities. The trend of cross-industry collaboration in sharing internal and external threat intelligence will also become more commonplace, underlining its role in building robust and adaptable cybersecurity strategies. It will drive change within the industry, and we will see trusted community intelligence become more valuable than commodity intelligence.”
A note of caution
However, whilst AI does have the potential to bring huge benefits, Alex Rice, CTO at HackerOne, urges organisations to proceed with caution. “Over the next year, we’ll see many overly optimistic companies place too much trust in generative AI’s (GenAI) capabilities — but we can’t forget security basics,” he explains. “Nearly half of our ethical hacker community (43%) believes GenAI will cause an increase in vulnerabilities within code for organisations. It’s essential to recognise the indispensable role human oversight plays in GenAI security as this technology evolves.
“The largest threat GenAI poses to organisations is in their own rushed implementation of the technology to keep up with competition. GenAI holds immense potential to supercharge productivity, but if you forget basic security hygiene during implementation, you’re opening yourself up to significant cybersecurity risk.”
Matt Rider, Vice President of Engineering at Exabeam, believes that AI has been overhyped in 2023. “Since the term AI burst back onto the scene thanks to generative AI tools like ChatGPT, vendors have been keen to capitalise on the AI hype. As a result, we are currently seeing the term overused, misrepresented, and exaggerated. In 2024, I expect this will only continue to grow.”
He cautions that “vendors need to be very careful about how they plan to market their ‘AI’ products. As time goes on, the term will become less mystified and customers will become increasingly savvy about what actually constitutes AI and what to reasonably expect from it. Vendors need to ensure their AI technology passes muster once you scratch the surface. Indeed, whether you’re a software vendor or an end user, the same caution applies: think carefully about what you hope to achieve by incorporating AI into your product or business – and if you are a vendor, think even more carefully before labelling any product as ‘AI-powered’, or risk a very real backlash from increasingly AI-informed and AI-fatigued customers and prospects.”