
Machine Learning Can’t Protect You From Fileless Attacks | NGAV

Posted by Michael Gorelik on May 13, 2020


The rise of fileless attacks over the past 10 years has stymied even the best antivirus software. Traditional AV is designed to detect the signatures of known malware and prevent it from executing. Fileless attacks lack a signature, which allows them to handily bypass traditional antivirus products.
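To make that gap concrete, here is a minimal sketch of hash-based signature scanning in Python (the hash value and flow are invented for illustration). A payload that never touches disk simply never reaches the check:

```python
import hashlib

# Hypothetical signature database: SHA-256 hashes of known-bad files.
KNOWN_BAD_HASHES = {"deadbeef" * 8}  # placeholder, not a real hash

def scan_file(path: str) -> bool:
    """Flag a file whose hash matches a known malware signature."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest in KNOWN_BAD_HASHES

# A fileless attack runs entirely in memory (e.g., shellcode injected
# into a trusted process), so there is no file for scan_file() to hash;
# the signature check never even sees the payload.
```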

Moreover, fileless attacks are growing in both volume and impact. According to recent endpoint security research from the Ponemon Institute, fileless attacks are expected to constitute 41 percent of cyberattacks in the next 12 months, up from 35 percent this year, and the share has followed a consistent upward trajectory since 2017.

Next-generation antivirus, or NGAV, software is meant to halt fileless attacks and other evasive malware through heuristics and machine learning algorithms. Ignoring for a minute that calling anything “next-gen” is little more than a marketing boondoggle, the idea behind applying machine learning to securing your infrastructure is that flagging code that looks and behaves like known attacks will let you detect and respond to unknown fileless attacks and evasive malware.

The machine learning models within NGAV software can be based on behaviors, static markers such as specific code strings, or heuristics, depending on the vendor. The idea is that, because the system “learns” what code is good and what code is bad, your garden-variety NGAV solution can more quickly identify and block the bad code.

This is, not to put too fine a point on it, utter nonsense. Machine learning algorithms aren’t accurate enough to be valuable for protecting the enterprise from advanced cyberthreats, despite their value in many other industries. More importantly, the nature of machine learning algorithms is such that they can be tricked by including the right amount of “noise” in the deployed malware.

Understanding Machine Learning and NGAV

There are two main types of machine learning algorithms: supervised and unsupervised. (This is distinct from deep learning, which layers algorithms on top of each other to create artificial neural networks.) Supervised machine learning requires a human operator to decide what data to feed the algorithm and to tell the program what result it should produce by labeling the “correct” samples. The desired result in this process is most commonly something like “yes/no” or “true/false.” With supervised machine learning, the algorithm ultimately connects the dots to get to the result you want through two processes called classification and regression. In the classification process, incoming data is labeled based on past samples; the regression process allows the algorithm to make predictions based on past data.
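As an illustration only, not any vendor’s actual pipeline, here is what that supervised flow looks like with scikit-learn. The feature vectors and labels are invented for the example:

```python
from sklearn.ensemble import RandomForestClassifier

# Toy labeled samples: [import_count, entropy, allocates_exec_memory].
# Labels: 1 = malicious, 0 = benign. Real training sets hold millions
# of samples; these rows are invented for illustration.
X_train = [
    [12, 7.9, 1],   # packed binary that allocates executable memory
    [45, 4.2, 0],   # ordinary application
    [ 8, 7.5, 1],
    [60, 3.8, 0],
]
y_train = [1, 0, 1, 0]

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)  # the vendor-side "training" step

# Classification: label a new, unseen sample by relating it to past data.
new_sample = [[10, 7.7, 1]]
print(clf.predict(new_sample))        # e.g., [1], flagged as malicious
print(clf.predict_proba(new_sample))  # the "likelihood" behind the alert
```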

The second kind is called unsupervised machine learning, and it involves looking for previously undetermined patterns in a dataset without being told what is and isn’t desirable. The term “unsupervised” is a bit of a misnomer because there’s human involvement on the backend, but the point is that manual involvement is limited. Unsupervised machine learning is most often used when there isn’t a desired result in mind at the beginning.
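A hedged sketch of the unsupervised flavor: clustering unlabeled samples and letting the groupings emerge, with no “correct” label supplied up front (the feature values are again invented):

```python
from sklearn.cluster import KMeans

# Unlabeled feature vectors: no analyst has marked these good or bad.
X = [
    [12, 7.9], [11, 7.8], [13, 8.0],   # one natural grouping
    [45, 4.2], [50, 4.0], [48, 4.1],   # another
]

# KMeans discovers the groupings on its own; a human only interprets
# them afterward (e.g., "cluster 0 looks like packed binaries").
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(km.labels_)  # e.g., [0 0 0 1 1 1]
```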

Next-generation antivirus software most commonly uses supervised machine learning algorithms; deep learning, although interesting, isn’t yet in broad use among cybersecurity vendors. NGAV vendors collect malware samples and train their algorithms to correctly classify, and then predict, new strains of malware based on that data. As a result, next-gen antivirus follows the logical flow of supervised machine learning: vendors train the algorithms, so ideally when a client organization deploys the software, it receives a fully up-to-speed algorithm that can classify and block previously unseen malware by relating it to past samples.

On its face, this is a good idea. If you load samples of existing malware into a machine learning algorithm, it can determine in the abstract what future malware might look like. In this way, your organization can be protected from zero days, fileless attacks, and in-memory exploits. Again, though, the reality is that machine learning algorithms simply aren’t that accurate at classifying advanced evasive malware. They are particularly faulty when it comes to identifying zero day attacks, which, by their very nature, are not related to previous samples. Given that zero days account for 80 percent of successful attacks, this presents a major issue for securing the enterprise against cyberattack.

Machine Learning’s False Positive Problem

The machine learning algorithms underpinning most NGAV products have a strong tendency to generate false positives because they work from abstractions of past malware. According to the Ponemon Institute, 56 percent of organizations said that false positives and IT security alerts were the biggest challenge with their antivirus solution. Although Ponemon was specifically talking about signature-based AV, an NGAV tool with a machine learning algorithm underpinning the offering isn’t much better.

The argument that NGAV vendors make, however, is that of course they’re going to generate false positives, because they’re trying to detect malware that’s never been seen before. That isn’t a persuasive argument, especially for lean security teams who already face a significant burden responding to security alerts from their signature-based AV solution. Why would these teams want more alerts coming into their console that they then have to investigate?

On top of this, the alerts don’t give much detail, especially when a false positive is involved. The machine learning algorithms in NGAV provide a likelihood that the flagged file is malicious, and won’t always tell you what precisely was malicious about the application. As a result, finding out that an alert was a false positive takes up a lot of staff time that could be better used triaging actual issues. Alert fatigue, although frustrating, isn’t the most concerning part about NGAV.
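The base-rate math shows why that time sink grows so quickly. A back-of-the-envelope sketch, with every number assumed purely for illustration (a 1 percent false positive rate, one real attack per 10,000 events):

```python
# Assumed numbers, for illustration only.
events_per_day = 100_000
attack_rate = 1 / 10_000      # one real attack per 10,000 events
false_positive_rate = 0.01    # 1% of benign events mis-flagged
detection_rate = 0.95         # 95% of real attacks caught

real_attacks = events_per_day * attack_rate                           # 10
true_alerts = real_attacks * detection_rate                           # 9.5
false_alerts = (events_per_day - real_attacks) * false_positive_rate  # ~1,000

precision = true_alerts / (true_alerts + false_alerts)
print(f"{false_alerts:.0f} false alerts vs {true_alerts:.1f} real ones")
print(f"Only {precision:.1%} of alerts are genuine")  # roughly 0.9%
```

Even with generous assumptions, analysts end up chasing about a hundred false alarms for every real attack.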

Machine Learning Can Be Fooled

It’s incredibly easy to trick machine learning algorithms. There are a few ways to do this, with the most common involving creating a substantial amount of “noise” in the dataset. Machine learning operates by analyzing distinct pieces of data. Through the classification and regression process, the algorithm classifies each new incoming piece of data and relates it back to the training dataset to see if it gets the expected result.

In machine learning, “noise” refers to irrelevant information the algorithm has to analyze. If an adversary includes a lot of harmless code alongside their malicious code, the algorithm has to eat up time in the classification and regression cycles to get through all the information. The noise is introduced to keep the algorithm from connecting the dots: in many cases, signatures are a pattern of operation sequences, and when noise is inserted in between, the pattern matching breaks.
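A toy illustration of that pattern-breaking effect, assuming a detector that flags a contiguous sequence of suspicious API calls (real models are more sophisticated, but the principle carries over):

```python
# A behavioral "signature": a contiguous sequence of suspicious calls.
MALICIOUS_SEQUENCE = ["OpenProcess", "VirtualAllocEx",
                      "WriteProcessMemory", "CreateRemoteThread"]

def matches(trace: list[str]) -> bool:
    """Flag a trace that contains the malicious sequence contiguously."""
    n = len(MALICIOUS_SEQUENCE)
    return any(trace[i:i + n] == MALICIOUS_SEQUENCE
               for i in range(len(trace) - n + 1))

clean_attack = MALICIOUS_SEQUENCE
noisy_attack = ["OpenProcess", "GetTickCount",    # harmless call inserted
                "VirtualAllocEx", "Sleep",        # more benign noise
                "WriteProcessMemory", "GetTickCount",
                "CreateRemoteThread"]

print(matches(clean_attack))  # True: pattern found
print(matches(noisy_attack))  # False: same attack, noise breaks the match
```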

An additional issue is the supply chain. Appending malicious code to good code, and vice versa, can bypass the machine learning algorithm completely. This is significant because machine learning algorithms need to be trained on the environment; putting something new into a legitimately signed process can be enough to break the model. Some security researchers did exactly this a few years ago to a leading NGAV vendor: they appended code from a video game, which the software had classified as “goodware,” to the end of several code strings in a malicious program. The vendor’s machine learning algorithm then allowed the malware through because of its bias toward the good code from the video game.
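A simplified sketch of why that works, assuming a model that averages per-chunk maliciousness scores across a file (a stand-in for the real attack, which exploited a specific vendor’s feature weighting; all scores here are invented):

```python
# Hypothetical per-chunk maliciousness scores from a trained model.
malware_chunks = [0.95, 0.90, 0.92]   # the actual payload
game_chunks = [0.05] * 30             # appended "goodware" strings

def verdict(chunk_scores, threshold=0.5):
    """Classify the whole file by its average chunk score."""
    avg = sum(chunk_scores) / len(chunk_scores)
    return ("BLOCK" if avg >= threshold else "ALLOW"), round(avg, 2)

print(verdict(malware_chunks))                # ('BLOCK', 0.92)
print(verdict(malware_chunks + game_chunks))  # ('ALLOW', 0.13)
```

The payload is unchanged; the benign padding simply drowns it out in the aggregate score.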

Overfitting is also an issue here. The malware samples that NGAV vendors use to train their algorithms often come from specific sources that don’t necessarily match the customer environment. This is especially true as more people work from home; the corporate environment and the home environment are very different. Even across corporations, the infrastructure differs: manufacturing might have different tech than financial, which is different again from services; hardened environments are different from unhardened environments where everyone is an admin; and so on. Overfitting means that a model that’s accurate in one environment isn’t accurate in another. It also means you have to enlist operational resources to custom-tune the algorithm to your specific environment.
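A minimal demonstration of that environment mismatch, using synthetic data invented to stand in for “corporate” and “home” telemetry:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# "Corporate" training data (synthetic): benign software clusters
# around one feature profile, malware around another.
X_corp = np.vstack([rng.normal(2, 0.5, (200, 2)),
                    rng.normal(6, 0.5, (200, 2))])
y_corp = np.array([0] * 200 + [1] * 200)

# "Home" environment (synthetic): benign software (games, personal
# tools) overlaps the profile the model learned to call malicious.
X_home = rng.normal(5, 0.7, (200, 2))
y_home = np.zeros(200)  # all benign

clf = LogisticRegression().fit(X_corp, y_corp)
print("Corporate accuracy:", clf.score(X_corp, y_corp))  # near 1.0
print("Home accuracy:     ", clf.score(X_home, y_home))  # near 0.0:
# benign home software gets flagged, because the model fit one
# environment's notion of "normal" and carried it into another.
```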

Machine learning has limitations in the same way any detection algorithm has limitations. Anything you can’t see, scan, or monitor leaves room for inaccurate detection. In essence, this produces a “clean” result even where an undetected compromise may already exist.

Machine learning is great when it’s used to provide recommendations. When you expect a binary “positive” or “negative” verdict, however, machine learning algorithms tend to fall short of providing an appreciable amount of protection from advanced threats.

Machine Learning Won’t Protect You

Machine learning algorithms are ultimately limited in what they can protect your enterprise from. Specifically, they can’t secure your enterprise against unknown zero day attacks, and they are very limited in how well they protect you against fileless attacks and evasive malware. The best algorithm in the world is only as good as the dataset it learns from and the programmers who create it. The adversaries on the other end of the screen are also human, and they can think of innovative ways to fool machine learning algorithms into letting their malicious programs through.

New evasion methods are being developed every day. Trickbot has fully shifted to being fileless in Windows 10 environments and now introduces innovative infiltration methods, as you can read in my recent post on the OSTAP downloader. The pace of innovation in malware is so fast that algorithm creators need to constantly train and update their datasets to ensure alert fidelity. This is why machine learning-based protection is limited when it comes to zero days and other innovative malware.

On top of malware innovation, vendors are required to have a run-time detection component, which means the machine learning algorithm must also run constantly to analyze the new data being ingested. The algorithm has to operate on your endpoints around the clock to even potentially secure the enterprise. Any malware that can disable the run-time detection will automatically bypass the NGAV software. This makes machine learning algorithms inherently fragile compared to a prevention solution that doesn’t rely on a run-time component. Moreover, attackers have the advantage of choosing the perfect time to execute their malicious code, while the run-time component is limited to periodic scanning.

All these issues combine into a perfect storm of failure points that makes machine learning ill-suited to protecting your enterprise from fileless attacks. Of course, it might prevent a few more attacks than traditional signature-based antivirus, but not enough to be significant, especially given the required run-time detection component.

You’re better off choosing a solution like moving target defense, which doesn’t have a run-time detection component and can’t be fooled with noise because it doesn’t try to analyze, detect, or monitor before it blocks a malicious program. It simply blocks the attack by morphing the application memory, trapping evasive malware, fileless threats, and zero days before they can damage your system. Machine learning can’t do that, and that’s why the algorithms won’t save you from advanced evasive threats.

 
