As the sophistication of Artificial Intelligence (AI) tools such as ChatGPT, Copilot, Bard and others continues to grow, they present a greater risk to security defenders—and greater reward to attackers adopting AI-driven attack techniques.
As a security professional, you must defend a diverse ecosystem of operating systems (OS) built up over time to sustain legacy systems while adopting new, modern B2B and B2C hyper-scale, hyper-speed, data-rich interfaces. You look for, and rely on, the latest and greatest security products to help you fend off attackers.
However, when pitted against sophisticated AI-driven techniques, existing security products and practices are missing one critical defense element: a technology capable of defeating the next generation of machine-powered, AI-enabled adversaries who use machine learning to create new, adaptive exploits at breakneck speed and scale.
A clear pattern is emerging among the top-of-mind concerns specific to generative AI systems and their ability to breach detection and prevention technology.
InfoSec professionals are concerned that generative AI can be exploited to:
The Defender’s Perspective
Artificial intelligence (AI), along with its subsets machine learning (ML) and deep learning (DL), is integral to modern endpoint protection platform (EPP) and endpoint detection and response (EDR) products.
These technologies work by learning from vast amounts of data about known malicious and benign behaviors or code patterns. This learning enables them to create models that can predict and identify previously unseen threats.
Specifically, AI can be used to:
The use of AI is now becoming a de facto standard for reducing false positives: by identifying an incident's context and understanding the endpoint's behavior, these systems can suppress benign alerts and retroactively correct misclassified information using a wealth of previously collected telemetry data.
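To make this concrete, here is a deliberately simplified sketch of how such a detection model works: a classifier is trained on feature vectors labeled benign or malicious, then used to score a previously unseen sample. The features, values, and use of scikit-learn's RandomForestClassifier are illustrative assumptions, not a description of any specific vendor's pipeline.

```python
# Minimal illustration, not a production detector: train a classifier on
# feature vectors labeled benign/malicious, then score an unseen sample.
# The features and numbers below are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical per-sample features: [entropy, imported_api_count, memory_writes]
benign = rng.normal([4.5, 40, 2], [0.5, 10, 1], size=(500, 3))
malicious = rng.normal([7.0, 15, 9], [0.5, 5, 2], size=(500, 3))

X = np.vstack([benign, malicious])
y = np.array([0] * 500 + [1] * 500)  # 0 = benign, 1 = malicious

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Score a previously unseen sample: high entropy, few imports, many memory writes.
unseen = [[7.2, 12, 10]]
print("P(malicious) =", model.predict_proba(unseen)[0, 1])
```

Real EPP/EDR pipelines use far richer features and models, but the principle is the same: learn from labeled examples, then generalize to samples never seen before.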
The Attacker’s Perspective
As AI evolves and becomes more sophisticated, attackers will find new ways to use these technologies to their benefit and to accelerate the development of threats capable of bypassing AI-based endpoint protection solutions.
The methods by which attackers can leverage AI to compromise targets include:
We expect attackers will actively use AI to automate vulnerability scanning, generate compelling phishing messages, find weaknesses in AI-based security systems, generate new exploits, and crack passwords. As AI and machine learning evolve, organizations must stay vigilant and keep up with the latest developments in AI-based attacks to protect themselves from these threats.
Organizations using AI-based systems must question the robustness and security of their underlying datasets, training sets, and the machines that implement the learning process, and must protect those systems from unauthorized and potentially weaponized code. Weaknesses discovered in, or injected into, the models of AI-based security solutions can lead to a global bypass of their protection.
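As a rough, hypothetical illustration of why the training pipeline itself needs protection, the sketch below poisons a toy training set by relabeling one "family" of malicious samples as benign; the retrained model then fails to detect that family. The features, threshold, and data are invented for illustration and do not reflect any real product's model.

```python
# Toy illustration of training-data poisoning (all features/data hypothetical):
# an attacker relabels the malicious training samples of one "family"
# (here: entropy > 7.3) as benign, and the model stops detecting that family.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

def make_data(n=1000):
    """Synthetic [entropy, imported_api_count, memory_writes] feature vectors."""
    benign = rng.normal([4.5, 40, 2], [0.5, 10, 1], size=(n, 3))
    malicious = rng.normal([7.0, 15, 9], [0.5, 5, 2], size=(n, 3))
    return np.vstack([benign, malicious]), np.array([0] * n + [1] * n)

X_train, y_train = make_data()
X_test, y_test = make_data()

clean_model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Poison the training set: relabel the targeted family as benign.
y_poisoned = y_train.copy()
y_poisoned[(y_train == 1) & (X_train[:, 0] > 7.3)] = 0
poisoned_model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_poisoned)

# Detection rate on test samples belonging to the targeted family.
family = (y_test == 1) & (X_test[:, 0] > 7.3)
print("Clean model detects:   ", clean_model.predict(X_test[family]).mean())
print("Poisoned model detects:", poisoned_model.predict(X_test[family]).mean())
```

Real-world poisoning attacks against commercial models are far subtler, but the failure mode is the same: corrupt the data the model learns from, and the protection it provides degrades silently.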
Morphisec has previously observed sophisticated attacks by highly skilled and well-resourced threat actors, such as nation-state actors, organized crime groups, and advanced hacking groups. Advances in AI-based technologies can lower the barriers to entry for creating sophisticated threats by automating the creation of polymorphic and evasive malware.
This isn’t just a concern for the future.
Leveraging AI isn’t required to bypass today’s endpoint security solutions. Tactics and techniques to evade detection by EDRs and EPPs are well documented, particularly memory manipulation and fileless malware. According to Picus Security, evasive and in-memory techniques make up over 30% of the top techniques used in malware seen in the wild.
The other significant concern is the reactive nature of EPPs and EDRs: their detection is often post-breach, and remediation is not fully automated. According to the 2023 IBM Cost of a Data Breach Report, organizations without security AI and automation took an average of 322 days to detect and contain a breach; extensive use of security AI and automation reduced this to 214 days, still long after attackers had established persistence and potentially exfiltrated valuable information.
It Is Time to Consider a Different Paradigm
In a never-ending arms race, attackers will leverage AI to generate threats capable of bypassing AI-based protection solutions. However, for any attack to succeed, it must compromise a resource on a target system.
If the target resource doesn’t exist, or is continually being morphed (moved), the attacker’s chance of compromising the system is reduced by an order of magnitude.
Consider, for example, a highly trained, extremely intelligent sniper attempting to hit a target. If the target is hidden or continually moving, the sniper’s chances of success drop sharply, and repeated shots at the wrong locations can even expose the sniper’s own position.
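A back-of-the-envelope simulation makes the same point: an exploit that depends on a resource sitting at one fixed, well-known location succeeds every time, while randomizing that location on every execution among, say, 64 candidates (an assumed figure, purely for illustration) leaves a blind attempt with only a small chance of success.

```python
# Back-of-the-envelope illustration with hypothetical numbers: an exploit that
# targets a resource at a fixed, well-known location succeeds every time; if
# that location is randomized among N candidates per execution, a single blind
# attempt succeeds roughly 1/N of the time.
import random

N_LOCATIONS = 64      # assumed degree of randomization, purely illustrative
TRIALS = 100_000

hits = 0
for _ in range(TRIALS):
    actual = random.randrange(N_LOCATIONS)  # where the resource really is this run
    guess = 0                               # attacker aims at the historical location
    hits += (guess == actual)

print("Static target: success rate = 1.00")
print(f"Moving target: success rate = {hits / TRIALS:.4f} (expected 1/{N_LOCATIONS})")
```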
Leveraging AMTD to Stop Generative AI Attacks
Enter Automated Moving Target Defense (AMTD) systems, designed to prevent sophisticated attacks by morphing and randomizing system resources, literally moving the target.
Morphisec’s prevention-first security (powered by AMTD) uses a patented zero-trust at-execution technology to proactively block evasive attacks. As an application loads into memory, the technology morphs and conceals process structures and other system resources, deploying lightweight skeleton traps to deceive attackers. Unable to access the original resources, malicious code fails, stopping the attack and logging it with full forensic details.
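Morphisec does not publish its implementation details, but the moving-target principle itself can be sketched conceptually. In the toy model below (all names and structures are hypothetical), a resource normally found at a well-known name is relocated to a randomized name at load time, a trap is left in its place, and only the legitimate loader learns the new name; anything probing the original name trips the trap and generates a deterministic, high-fidelity alert.

```python
# Conceptual sketch of the moving-target idea only; it does not reflect
# Morphisec's proprietary implementation. A "resource" normally lives at a
# well-known name; at load time it is moved to a randomized name and a trap
# is left behind. Legitimate callers receive the new name; anything that
# relies on the well-known name hits the trap and is logged.
import secrets

class MovingTargetRuntime:
    def __init__(self):
        self.resources = {}   # randomized name -> resource
        self.traps = {}       # well-known name -> trap metadata
        self.alerts = []

    def load(self, well_known_name, resource):
        randomized = f"{well_known_name}_{secrets.token_hex(8)}"
        self.resources[randomized] = resource
        self.traps[well_known_name] = {"original": well_known_name}
        return randomized      # only the legitimate loader learns this name

    def access(self, name, caller):
        if name in self.resources:
            return self.resources[name]
        if name in self.traps:
            # Deterministic, high-fidelity signal: nothing legitimate should be here.
            self.alerts.append(f"Trap hit on '{name}' by {caller}")
        raise PermissionError(f"Access to '{name}' denied")

runtime = MovingTargetRuntime()
real_name = runtime.load("kernel32_loadlibrary", resource="<function pointer>")

print(runtime.access(real_name, caller="application"))  # legitimate path works

try:
    runtime.access("kernel32_loadlibrary", caller="shellcode")  # attacker probes the known name
except PermissionError:
    pass
print(runtime.alerts)
```

In a real system the "resources" are in-memory structures rather than dictionary keys, but the asymmetry is the same: legitimate code is redirected transparently, while code that relies on static knowledge of the runtime fails and reveals itself.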
This prevention-first approach stops attacks cold, even if they bypass existing AI-based endpoint protection tools. As a result, AMTD provides an additional layer of defense against sophisticated attacks. Because attacks are prevented outright, infosec teams gain crucial time to investigate threats while knowing their systems are safe. The deterministic nature of AMTD also means the solution generates high-fidelity alerts, helping security teams prioritize their efforts and reducing alert fatigue.
Protecting over nine million endpoints across 5,000 organizations, Morphisec’s AMTD technology prevents tens of thousands of evasive and in-memory attacks every day that bypass existing endpoint security solutions, including zero-days, ransomware, supply chain attacks, and more. The attacks Morphisec prevents are often unknown and first observed in the wild by the company’s Threat Labs team.
Examples of evasive threats recently prevented by Morphisec:
AMTD has proven its efficacy across multiple waves of endpoint protection solutions and the tactics attackers have developed to bypass them: from signature-based AVs and ML-powered next-gen AVs (NGAV) to today’s EDRs and XDRs.
AMTD is the natural evolution of endpoint protection, offering true protection against AI-driven attacks.
Per Gartner, “A layered defense consisting of AMTD obstacles and deceptions significantly elevates an organization’s security posture.” Check out our list of recent Gartner reports on AMTD to learn more about AMTD technology and how Morphisec can protect critical systems from AI-driven runtime memory-based attacks.