On a scale of one to ten, how effective is the status quo approach to server security?
In theory, it should be ten. The path to keeping servers protected from the outside world (segmentation, firewalls, vulnerability patching, security solutions, and so on) is well known.
However, real-world results diverge dramatically from theory. From the Red Cross to Uber, this year's news headlines are filled with examples of server security breaches. Exploiting internet-facing services was the primary attack vector in 2021, accounting for over 50 percent of incidents.
In the financial sector, servers were involved in 90 percent of breaches. In CISA’s list of top exploited vulnerabilities for 2021, server-side flaws make up the majority, including Log4j and weaknesses in Microsoft Exchange, Active Directory, FortiOS, Accellion FTA, and others.
Server security is clearly not working as it should. Advanced threats keep finding a security gap, slipping past firewalls, next-generation antivirus (NGAV) solutions, and endpoint detection and response (EDR) defenses that are supposed to be impregnable. These tools manifestly aren’t addressing the weaknesses of real-world server environments.
In 2019, Microsoft estimated that over 60 percent of Windows servers still ran Windows Server 2008, which was due to reach End of Life (EOL) in 2020. Windows Server 2012 fell out of support in 2023.
EOL Windows and Linux operating systems and the legacy systems they are deployed on are everywhere. They are also practically irreplaceable and often power critical processes. Protecting them with scanning security solutions is almost impossible. EOL and legacy servers either cannot tolerate the compute requirements of modern EDR and NGAV solutions, or are simply incompatible with them.
75 percent of attacks exploit vulnerabilities that are over two years old. Why do server vulnerabilities go unpatched? Many organizations are forced to trade timely patching for server uptime.
Servers powering critical operations or locked into strict service level agreements can’t afford patching-related downtime and disruption. If patching is prohibitively expensive or effectively impossible, critical servers are inevitably exposed to well-known exploits.
EDRs and other cybersecurity tools have non-trivial CPU/memory and internet bandwidth requirements to function effectively. Servers running critical applications or powering virtual machines are sensitive to these resource requirements.
As a result, deploying security solutions like EDR can require further investment to upgrade server hardware or increase cloud capacity, especially if they aren’t purpose-built to address Windows and/or Linux servers.
Security solutions deployed on servers have to strike a delicate balance. They can’t consume the CPU, memory, and bandwidth that critical workloads depend on, they can’t cause downtime or disruption, and they can’t bury security teams in false positive alerts.
These limitations mean solutions like EDR that rely on probabilistic scanning and detection can only do so much.
Vendors and security teams can tune EDRs to spot likely malicious code or activity at a certain level of accuracy and speed—but only up to a point. If you turn up an EDR’s sensitivity to maximum, the servers it protects become essentially unusable due to performance degradation and the volume of false positive alerts.
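To make that trade-off concrete, here is a toy sketch of a threshold-based detector. It is not any vendor's actual detection logic, and every number in it is invented: raising sensitivity catches more of the handful of malicious events, but it also multiplies false positives across the vastly larger volume of benign activity.

```python
# Toy illustration of the sensitivity/false-positive trade-off in
# threshold-based detection. All numbers below are invented.
import random

random.seed(0)

# Hypothetical daily event stream: benign activity vastly outnumbers attacks.
benign_scores = [random.gauss(0.30, 0.15) for _ in range(100_000)]
malicious_scores = [random.gauss(0.70, 0.15) for _ in range(20)]

for threshold in (0.9, 0.7, 0.5, 0.3):   # lower threshold = higher sensitivity
    caught = sum(score >= threshold for score in malicious_scores)
    false_alerts = sum(score >= threshold for score in benign_scores)
    print(f"threshold={threshold:.1f}  "
          f"attacks caught: {caught}/{len(malicious_scores)}  "
          f"false positives: {false_alerts:,}")
```

At the strictest threshold in this toy, almost nothing benign fires but most attacks slip through; at the most sensitive threshold, nearly every attack is flagged along with tens of thousands of benign events, which is exactly the alert volume and performance cost described above.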
This dynamic results in a security gap.
To start with, these limits on scanning solutions leave server memory undefended in most organizations. As a result, servers are vulnerable to fileless and in-memory attacks. According to the Picus 2021 Red Report, these attacks make up three of the top five most common MITRE ATT&CK techniques seen in the wild.
Even when an EDR spots a server compromise and a security team takes action, the question is typically how much damage has already been done, rather than whether the attack has been stopped in time.
On average, it takes organizations 212 days to detect a compromise. That lag gives cybercriminals more than enough time to pivot and reach their ultimate targets.
Servers are a constant source of risk, but the solutions vendors provide to secure them don’t do enough to reduce it.
From application vulnerabilities to attacks that abuse legitimate processes in memory, servers face a range of threats targeting different parts of their environments. This means no organization can rely on a one-stop solution to reduce server breach risk. An EDR platform alone isn’t enough for effective server defense.
Instead, servers should be defended by layers of best-of-breed solutions.
Although EDR plays a crucial role in cybersecurity, it isn’t always feasible to deploy on servers, and it doesn’t stop all the attacks servers face. While effective against known threats, probabilistic, scanning-based solutions like EDR will by definition miss evasive malware, creating a security gap.
The best way to secure servers is to use a defense-in-depth strategy. This starts with fundamental security hygiene. Servers need to be configured carefully, and access to them controlled and restricted. To stop known threats, best-of-breed NGAV and EDR are also important.
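As one small illustration of what “configured carefully” can mean, the sketch below audits an OpenSSH server configuration for two common hardening settings. It assumes a Linux host with the configuration at the standard /etc/ssh/sshd_config path and checks only these two directives; it is a starting point, not a complete hardening baseline.

```python
# Minimal SSH hardening check: warn if remote root login or password
# authentication is enabled. A sketch, not a complete hardening baseline.
from pathlib import Path

SSHD_CONFIG = Path("/etc/ssh/sshd_config")        # standard OpenSSH location
EXPECTED = {"permitrootlogin": "no", "passwordauthentication": "no"}

def audit(path: Path = SSHD_CONFIG) -> list[str]:
    if not path.exists():
        return [f"{path} not found; is OpenSSH installed on this host?"]
    settings = {}
    for raw in path.read_text().splitlines():
        line = raw.strip()
        if not line or line.startswith("#"):
            continue                              # skip blanks and comments
        parts = line.split(None, 1)               # "Directive value" pairs
        if len(parts) == 2:
            settings[parts[0].lower()] = parts[1].strip().lower()
    return [
        f"{key} is '{settings.get(key, '<default>')}', expected '{want}'"
        for key, want in EXPECTED.items()
        if settings.get(key, "<default>") != want
    ]

if __name__ == "__main__":
    for finding in audit():
        print("WARN:", finding)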
To defeat the threats these solutions miss and protect the legacy servers they ignore, augment NGAV and EDR with Automated Moving Target Defense (AMTD) technology. Recognized by Gartner as an impactful emerging technology, AMTD morphs the runtime memory environment to create an unpredictable attack surface, blocking threats deterministically and proactively, rather than probabilistically and reactively.
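To show the idea in the abstract (a conceptual toy only, not how any AMTD product is actually implemented), the sketch below randomizes where a sensitive resource “lives” each time a process starts, so an exploit that depends on a known, fixed location fails without the defense ever needing to recognize it.

```python
# Conceptual toy of a moving target defense: an "exploit" that relies on a
# fixed, predictable address fails once the resource moves at each start-up.
import random

MEMORY_SLOTS = 4096          # size of our pretend address space
STATIC_ADDRESS = 0x400       # the location an attacker has hardcoded

def start_process(randomize: bool) -> dict:
    """Simulate process start-up, placing a sensitive resource in 'memory'."""
    if randomize:
        # Almost certainly lands somewhere other than the attacker's guess.
        address = random.randrange(MEMORY_SLOTS)
    else:
        address = STATIC_ADDRESS
    return {"secret_address": address}

def exploit(process: dict, hardcoded: int = STATIC_ADDRESS) -> bool:
    """An attack that assumes the resource lives at a known, static address."""
    return process["secret_address"] == hardcoded

static_target = start_process(randomize=False)
moving_target = start_process(randomize=True)

print("Exploit vs. static layout:    ", "succeeds" if exploit(static_target) else "fails")
print("Exploit vs. randomized layout:", "succeeds" if exploit(moving_target) else "fails")
```

In real AMTD the movement happens in process memory rather than in a Python dictionary, but the principle is the same: the attack fails because its assumptions are broken, which is what blocking deterministically and proactively, rather than probabilistically and reactively, means here.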
AMTD: