
Outsmarting Generative-AI Attacks: The Power of Automated Moving Target Defense

Posted by Nir Givol on July 27, 2023

As the sophistication of Artificial Intelligence (AI) tools such as ChatGPT, Copilot, Bard, and others continues to grow, they present a greater challenge to security defenders and a greater reward to attackers adopting AI-driven attack techniques.

As a security professional, you must defend a diverse ecosystem of multiple Operating Systems (OS) built over time to sustain legacy systems while adopting new and modern B2B and B2C hyper-scale, hyper-speed, data-rich interfaces. You look for, and rely on, the latest and greatest security products to help you fend off attackers.

However, when pitted against sophisticated AI-driven techniques, existing security products and practices are missing one critical defense element: a technology capable of defeating the next generation of machine-powered, AI-enabled adversaries who use machine learning to create new adaptive exploits at breakneck speed and scale.

A clear pattern is emerging in the top-of-mind concerns specific to generative-AI systems and their ability to bypass detection and prevention technologies.


InfoSec professionals are concerned that generative AI can be exploited to:

  • Increased attack surface: Create new attack vectors that are not easily detected by traditional security tools. 
  • Evasion of detection: Generate malicious code that is specifically designed to evade detection by security tools. 
  • Increased sophistication: Create increasingly sophisticated attacks or technique variants that are more difficult to defend against. 
  • Greater speed and scale: Launch attacks at scale and at a speed that is difficult for security teams to keep up with. 

 

The Defender’s Perspective 

Artificial Intelligence (AI), with its subsets of Machine Learning (ML) and Deep Learning (DL), is integral to modern endpoint protection platforms (EPP) and endpoint detection and response (EDR) products.

These technologies work by learning from vast amounts of data about known malicious and benign behaviors or code patterns. This learning enables them to create models that can predict and identify previously unseen threats. 

Specifically, AI can be used to: 

  • Detect anomalies: Identify anomalies in endpoint behavior, such as unusual file access patterns or changes to system settings. These anomalies can be indicative of malicious activity, even if the specific behavior is not known to the security product. 
  • Classify behaviors: Classify endpoint behaviors as malicious or benign. This allows security products to focus their attention on the most likely threats. 
  • Make decisions: Make decisions about whether a particular behavior or code pattern is malicious or benign. This allows security products to take action to mitigate threats, such as blocking access to files or terminating processes. 
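The anomaly-detection idea above can be sketched in a few lines. This is a deliberately minimal illustration using a z-score over hypothetical hourly file-access counts; real EPP/EDR models learn far richer baselines over many telemetry signals, and the function name and data here are invented for the example.

```python
from statistics import mean, stdev

def find_anomalies(samples, threshold=3.0):
    """Flag samples whose z-score exceeds the threshold.

    A toy stand-in for the statistical baselining an EPP/EDR performs
    on endpoint telemetry (file accesses, setting changes, etc.).
    """
    mu, sigma = mean(samples), stdev(samples)
    return [i for i, x in enumerate(samples)
            if sigma and abs(x - mu) / sigma > threshold]

# Hourly file-access counts for one endpoint (hypothetical data);
# the burst of 240 accesses stands out from the baseline of ~13.
counts = [12, 15, 11, 14, 13, 12, 16, 14, 13, 15, 240, 12]
print(find_anomalies(counts))  # → [10]
```

The same scoring idea generalizes: anything that deviates far enough from the learned baseline is escalated for classification, even if the specific behavior has never been seen before.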

The use of AI is now becoming a de facto standard for reducing false positives: by identifying an incident's context and understanding the endpoint's behavior, security products can suppress benign alerts and retroactively correct misclassified information using the wealth of previously collected telemetry data.

 

The Attacker’s Perspective  

As AI evolves and becomes more sophisticated, attackers will find new ways to use these technologies to their benefit and to accelerate the development of threats capable of bypassing AI-based endpoint protection solutions.   

Methods by which attackers can leverage AI to compromise targets include:
 

  • Automated vulnerability scanning: Attackers can use AI to automatically scan large bodies of code and systems for vulnerabilities. Using machine learning algorithms, attackers can scan the code itself, associated configurations, or even standalone services for potential vulnerabilities and prioritize their attacks accordingly. They can also train "at home," in isolated staging environments, to find ways to bypass detection. 
  • Adversarial machine learning: Adversarial machine learning is a technique that uses AI to find weaknesses in other AI systems. By exploiting the vulnerabilities in the algorithms used in AI-based security systems, attackers can bypass these defenses and gain access to sensitive data. 
  • Exploit generation: Attackers can leverage AI to generate new exploits that bypass traditional security measures. By analyzing target systems and identifying vulnerabilities, AI algorithms can generate new code that can exploit these weaknesses and gain access to the system. 
  • Social engineering: Attackers can use AI to generate persuasive phishing emails or social media messages that trick victims into revealing sensitive information or downloading malware. Using natural language processing and other AI techniques, attackers can make these messages highly personalized and more convincing. 
  • Password cracking: Attackers can use AI to crack passwords using brute force techniques. Using machine learning algorithms to learn from previous attempts, attackers can increase their chances of cracking passwords in a shorter time. 

We expect attackers will actively use AI to automate vulnerability scanning, generate compelling phishing messages, find weaknesses in AI-based security systems, generate new exploits, and crack passwords. As AI and machine learning evolve, organizations must stay vigilant and keep up with the latest developments in AI-based attacks to protect themselves from these threats.  

Organizations using AI-based systems must question the robustness and security of their underlying datasets, training sets, and the machines that implement the learning process, and must protect these systems from unauthorized and potentially weaponized malicious code. Weaknesses discovered in, or injected into, the models of AI-based security solutions can lead to a global bypass of their protection.

Morphisec has previously observed sophisticated attacks by highly skilled and well-resourced threat actors, such as nation-state actors, organized crime groups, or advanced hacking groups. Advances in AI-based technologies can lower the entry barrier for sophisticated threats by automating the creation of polymorphic and evasive malware.

This isn’t just a concern for the future.  

Leveraging AI isn’t even required to bypass today’s endpoint security solutions. Tactics and techniques to evade detection by EDRs and EPPs are well documented, particularly memory manipulation and fileless malware. According to Picus Security, evasive and in-memory techniques comprise over 30% of the top techniques used in malware seen in the wild.

The other significant concern is the reactive nature of EPPs and EDRs: their detection is often post-breach, and remediation is not fully automated. According to the 2023 IBM Data Breach report, the average time to detect and contain a breach increased to 322 days. Extensive use of security AI and automation reduced this to 214 days, still long after attackers have established persistence and potentially exfiltrated valuable information.


It is time to consider a different paradigm  

In a never-ending arms race, attackers will leverage AI to generate threats capable of bypassing AI-based protection solutions. However, for any attack to succeed, it must compromise a resource on a target system.

If the target resource doesn’t exist or is continually being morphed (moved), the chance of successfully attacking the system is reduced by an order of magnitude.

Consider, for example, a highly trained and extremely intelligent sniper attempting to hit a target. If the target is hidden or continually moving, the sniper’s chances of success drop sharply, and repeated misses at incorrect locations can even expose the sniper.
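This intuition can be made concrete with a toy simulation. Under the simplifying assumption that the attacker probes one fixed, well-known location per attempt while the defense relocates the target uniformly among N possible positions, the success rate falls from certainty to roughly 1/N:

```python
import random

def success_rate(positions, trials=100_000, moving=True, seed=42):
    """Fraction of attempts that hit the target.

    The attacker always probes position 0 (a known, static location).
    With a moving target, the defense relocates the target uniformly
    among `positions` slots before each attempt.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        target = rng.randrange(positions) if moving else 0
        hits += (target == 0)
    return hits / trials

print(success_rate(positions=10, moving=False))  # 1.0: a static target is always hit
print(success_rate(positions=10, moving=True))   # ~0.1: an order of magnitude lower
```

With 10 candidate positions, success drops by an order of magnitude; the more resources are randomized, the steeper the penalty for the attacker.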

 

Leveraging AMTD to Stop Generative AI Attacks 

Enter Automated Moving Target Defense (AMTD) systems, designed to prevent sophisticated attacks by morphing (randomizing) system resources, literally moving the target.

Morphisec’s prevention-first security (powered by AMTD) uses a patented zero-trust-at-execution technology to proactively block evasive attacks. As an application loads into memory, the technology morphs and conceals process structures and other system resources, deploying lightweight skeleton traps to deceive attackers. Unable to access the original resources, malicious code fails, and the attack is stopped and logged with full forensic details.
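The morph-and-trap mechanism can be sketched as follows. This is an illustrative model only, with invented names and addresses; real AMTD operates on in-memory process structures at load time, not on a Python dictionary:

```python
import random

class MorphedRuntime:
    """Toy moving-target defense: at each process start, critical
    resources are relocated to random addresses, and decoy traps are
    left at the well-known locations attackers expect."""

    DEFAULT_ADDRESSES = {"loader_api": 0x1000, "process_table": 0x2000}

    def __init__(self, seed=None):
        rng = random.Random(seed)
        # Relocate each resource; the legitimate app gets the new map.
        self.lookup = {name: rng.randrange(0x10000, 0xFFFFF)
                       for name in self.DEFAULT_ADDRESSES}
        # Leave lightweight skeleton traps at the original locations.
        self.traps = set(self.DEFAULT_ADDRESSES.values())

    def access(self, address):
        if address in self.traps:
            return "TRAP: attack logged with forensic details"
        if address in self.lookup.values():
            return "ok"
        return "fault"

runtime = MorphedRuntime()
# Legitimate code resolves resources through the morphed lookup table:
print(runtime.access(runtime.lookup["loader_api"]))  # succeeds ("ok")
# Malicious code targets the original, well-known address and hits a trap:
print(runtime.access(0x1000))
```

The key property is determinism: legitimate code, which resolves resources through the runtime, is unaffected, while anything probing the original addresses identifies itself as hostile, which is why such alerts carry high fidelity.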


This prevention-first approach stops an attack cold, even if it has bypassed existing AI-based endpoint protection tools. As a result, AMTD provides an additional layer of defense against sophisticated attacks. Since attacks are prevented outright, infosec teams gain crucial time to investigate threats while knowing their systems are safe. The deterministic nature of AMTD also means the solution generates high-fidelity alerts, which helps security teams prioritize their efforts and reduces alert fatigue.

Protecting over nine million endpoints across 5,000 organizations, Morphisec’s AMTD technology prevents tens of thousands of evasive and in-memory attacks daily, attacks that bypassed existing endpoint security solutions, including zero-days, ransomware, supply chain attacks, and more. The attacks Morphisec prevents are often unknown and first observed in the wild by the company’s Threat Labs team.

 

Examples of evasive threats recently prevented by Morphisec: 

  • GuLoader – An advanced variant targeting Legal and Investment firms in the US. 
  • InvalidPrinter – A highly stealthy loader that had zero detections on VirusTotal at the time of Morphisec’s disclosure. 
  • SYS01 Stealer – Targeting Government and Critical Infrastructure.  
  • ProxyShellMiner – A new variant targeting ProxyShell vulnerabilities in MS-Exchange.  
  • Babuk Ransomware – A new, previously unknown variant of the leaked Babuk ransomware, prevented by Morphisec; leading EPPs and EDRs took over two weeks to adjust their detection logic. 
  • Legacy Systems – Robust protection for Windows and Linux legacy operating systems. Morphisec prevented hundreds of malware families targeting Windows 7, 8, Server 2008 R2, Server 2012, and legacy Linux distros. 

The power of AMTD has proven itself across multiple waves of endpoint protection solutions and the tactics attackers developed to bypass them: from signature-based AVs and ML-powered NextGen AVs (NGAV) to current EDRs and XDRs.

 

AMTD is the natural evolution of endpoint protection, offering true security from AI attacks.  

Per Gartner, “A layered defense consisting of AMTD obstacles and deceptions significantly elevates an organization’s security posture.” Check out our list of recent Gartner reports on AMTD to learn more about AMTD technology and how Morphisec can protect critical systems from AI-driven, runtime, memory-based attacks.

 
