What Is Cloud Workload Security?

Posted by Dudu Mimran on September 10, 2020

Defining Cloud Workload Security

Cloud usage is increasing rapidly. Analysts forecast growth of 17 percent for the worldwide public cloud services market in 2020 alone. This proliferation comes on top of already widespread cloud adoption. In a recent report by Flexera, over 83 percent of companies described themselves as intermediate to heavy users of cloud platforms, while 93 percent reported having a multi-cloud strategy.

With a growing number of companies planning on doing more in diverse cloud environments, cloud workloads are becoming more common. Over 50 percent of workloads already run in the cloud. This figure is predicted to increase by a further 10 percent within the next 12 months.

As users shift their on-premises workloads into the cloud and transform legacy applications into cloud-native technologies, they're faced with elevated cybersecurity challenges on top of the existing ones, regardless of how safe the cloud appears. According to Oracle, 75 percent of users feel that the cloud is more secure than on-premises systems.

However, not all users realize that protecting cloud operations requires a different approach than protecting standard physical and virtual endpoints. To ensure strong security, companies first need to understand what cloud workloads are and what the threat landscape facing them looks like.

What Is a Cloud Workload?

A cloud workload is the amount of work running on a cloud instance at any particular moment. Whether it's one task or thousands of interactions taking place simultaneously, a workload is the sum of the actions being performed through a cloud instance at once. Ultimately, a workload can be your API server, data processing pipeline, message handling code, and so on.

A cloud instance can be any form of code execution, whether packaged as a container, serverless function, or executable running on virtual machines. This is true regardless of whether a business is using software as a service (SaaS), infrastructure as a service (IaaS), or platform as a service (PaaS).

These services all create cloud workloads that run on servers dedicated to hosting the application layer of whatever applications are in use. While the servers that power public clouds can run anything, dedicating them to a particular purpose helps them run more efficiently. As companies use cloud-based services more frequently, they create more cloud workloads and increase their security risk.

Cloud workloads have become more prominent in recent years as a result of digital transformation, a need to modernize legacy architecture, and the desire to integrate more concretely with third-party services in a frictionless manner. The push for modernizing legacy architectures is especially acute, as a shift toward the cloud enables companies to adopt platforms that streamline everything from solution development to accessing critical customer data.

The Security Risks of Cloud Workloads

Cloud workloads present distinct security challenges due to the major architectural and conceptual shift from perimeter-protected applications, which used to reside in the on-premises data center, to a diverse and highly connected architecture that is largely outside the company's control.

A significant cybersecurity threat comes from misconfiguration. Improperly set up access management systems and weak data transfer protocols increase cloud workload vulnerability. Misconfiguration, often the result of rushed cloud migrations, is the cause of nearly 60 percent of all cloud data breaches, according to a recent state of the cloud report from Divvy.

Misconfigurations happen frequently because the code of cloud applications changes, which in turn requires changes to permissions. These constant changes can lead to configuration fatigue. As a result, many developers relax permissions and ultimately leave a new hole in the attack surface.
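
To make this concrete, here is a minimal sketch, in Python, of the kind of automated check that catches permission drift before it ships. The policy format mirrors the common AWS-style JSON document, but the example policy and the check itself are illustrative rather than tied to any specific provider's API.

```python
# Minimal sketch: flag overly broad statements in an IAM-style policy document.
# The policy structure and example statements are illustrative, not a provider API.

def find_wildcard_statements(policy: dict) -> list[dict]:
    """Return Allow statements that grant every action or every resource."""
    risky = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        resources = stmt.get("Resource", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = [resources] if isinstance(resources, str) else resources
        if "*" in actions or "*" in resources:
            risky.append(stmt)
    return risky


example_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "s3:GetObject", "Resource": "arn:aws:s3:::app-bucket/*"},
        # The kind of shortcut that configuration fatigue tends to produce:
        {"Effect": "Allow", "Action": "*", "Resource": "*"},
    ],
}

for stmt in find_wildcard_statements(example_policy):
    print("Overly permissive statement:", stmt)
```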

Another cloud security weak point is access. Threat actors targeting cloud workloads focus on trying to steal cloud access credentials through phishing attacks. In a recent Oracle study, 59 percent of respondents reported having had privileged cloud credentials compromised in a phishing attack.

Cloud workloads are exposed publicly so they can be accessed via an internet connection. As a result of this exposure, obtaining credentials can mean "game over" in an attack. This stands in direct contrast to the effort an attacker must invest on-premises to gain persistence and then move laterally inside the perimeter. All of this makes strictly guarding access to cloud workloads essential.

The key here is that you don't want malicious code operating inside the actual cloud workload runtime environment, which is where the bad things can happen. Malicious code can enter your workload via three vectors:

  • Supply chain attacks, in which malicious code is planted in one of the thousands of packages you use to build your cloud workload stack, hidden in obfuscated form and waiting for the right moment to decrypt its payload and start running.
  • Legitimate interfaces that your applications open to the public or to third parties, such as web applications and API servers. Vulnerabilities, whether in your own code or in any of the software packages that make up your application, provide an easy path for attackers to exploit and infiltrate your operations.
  • Data handling, since many workloads ultimately process data, whether handling incoming events or processing images uploaded by your customers. This data, whose quality is usually beyond your control, is an easy vehicle for embedding malicious code that exploits the code meant to process it and again gives the attacker access to the heart of your operation (see the sketch after this list).
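
As a small illustration of the data-handling vector, the Python sketch below contrasts deserializing untrusted input with a format that can carry executable behavior against a plain data format. The event payload and function names are hypothetical; the point is only that the parser you choose determines whether attacker-supplied data can ever become code.

```python
# Illustrative only: why deserializing untrusted input inside a workload is dangerous.
import json
import pickle  # shown only to contrast with json below


def handle_event_unsafe(raw: bytes):
    # DANGEROUS: a crafted pickle payload can execute arbitrary code inside the workload.
    return pickle.loads(raw)


def handle_event_safe(raw: bytes):
    # Safer: JSON parsing yields plain data structures, never executable objects.
    return json.loads(raw.decode("utf-8"))


print(handle_event_safe(b'{"image_id": 42, "action": "resize"}'))
```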

What Makes Cloud Security Different?

Securing cloud workloads has become a shared burden between the customer and the cloud provider. In the "shared responsibility" model used by most public cloud providers, both parties split responsibility between "security in the cloud" and "security of the cloud." The cloud provider (i.e., AWS or Azure) takes responsibility for the security of the cloud (hardware, software, networking, etc.) while the customer remains responsible for securing what happens in the cloud platforms they use.

These boundaries often remain unclear to customers, who can easily assume the cloud provider will protect everything. The other extreme is installing measures the cloud provider already has in place, turning those security tools into redundant and cumbersome defense layers.

Moreover, client-grade solutions like signature-based AV and even “next-gen” antivirus are inadequate when it comes to providing customers with in-cloud protection. Beyond not being designed to protect against the threats endemic to cloud workloads, AV and NGAV agents are too cumbersome for the cloud.

Read More: Why Client-Grade Technology Won't Work For Cloud Workload Protection

Running most AV agents in the cloud results in higher usage costs on the cloud service; they might be an annoyance on employee computers, but on a cloud workload this overhead is unacceptable. AV and NGAV tools were quite simply built for a different use case and are not suited for the job of securing cloud workloads.

Another major paradigm shift in cloud defense is that response operations take a different form. Unlike an employee workstation, which may require containing files or restoring the registry, these actions are less relevant on cloud workloads. Containers and similar workloads use immutable storage, so persistence is less of a threat and files are not constantly transferred and changed. It is often easier to shut down a workload instance and restart it with new hardening rules than to try to fix it on the fly.

EPPs also lack exploit prevention and memory protection capabilities, which are critical for protecting against fileless threats and in-memory attacks, the techniques at the heart of how most malicious code operates. Cloud security also requires heightened network security to limit cloud workloads' ability to communicate with other resources.

Further, the common approach of detection and response products such as EDR, MDR, and XDR is to chase fileless attacks and continuously look for signs of infiltration. This approach means defenders are always too late and must contend with blind spots to evasive activity in memory, which leaves attackers an open playground at runtime. Threat actors can do substantial harm before they appear on the radar of most detection-centric solutions, and well before any incident response can occur from an internal security team or outsourced service provider.

The biggest challenge facing customers, however, is the misleading marketing campaigns about a perfect security solution. Ultimately, these products are the same as classic endpoint protection platforms with the added bonus of nice packaging and marketing slogans about how effective they are at cloud security. This noisy environment means CISOs and other tech leaders often struggle to understand what they're getting under the hood and to decide for themselves whether a product really fits their challenges.


Protection Against Cyberattacks

There are a few foundational capabilities required for effective cloud workload security. These capabilities, which should be present in any security strategy, include hardening to achieve zero trust in a pre-production environment, application control and whitelisting for zero trust runtime, exploit prevention and memory protection, and system integrity assurance.

Hardening: Achieving Zero Trust Pre-Production

Key to avoiding configuration-related vulnerabilities is removing unnecessary components from cloud systems and configuring workloads according to provider, not third-party, guidelines. Regular system patching also keeps you current on identified vulnerabilities.

These activities can be accomplished with many open source tools as well as third-party products that specialize in removing vulnerabilities from your code before it is launched into production. Having robust pre-production code and stack inspection workflows can reduce your attack surface dramatically and deny attackers the convenience of picking up an old vulnerability and using it to exploit your systems.
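
As an illustration of what such a workflow might look like, here is a minimal Python sketch of a pre-production gate that compares pinned dependencies against an internal list of versions flagged as vulnerable. The KNOWN_VULNERABLE entries, the requirements format, and the gate function are all assumptions for the example; a real pipeline would pull advisories from a vulnerability feed or rely on a dedicated scanner.

```python
# Minimal sketch of a pre-production dependency gate.
# KNOWN_VULNERABLE holds hypothetical (package, version) pairs flagged internally.

KNOWN_VULNERABLE = {
    ("examplelib", "1.2.0"),   # hypothetical entry
    ("oldparser", "0.9.1"),    # hypothetical entry
}


def parse_requirements(text: str):
    """Yield (package, version) pairs from simple 'name==version' lines."""
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()
        if "==" in line:
            name, version = line.split("==", 1)
            yield name.strip().lower(), version.strip()


def gate(requirements_text: str) -> bool:
    """Return True if the build may proceed, False if a flagged version is present."""
    flagged = [pv for pv in parse_requirements(requirements_text) if pv in KNOWN_VULNERABLE]
    for name, version in flagged:
        print(f"Blocked: {name}=={version} is on the vulnerable list")
    return not flagged


sample = "requests==2.31.0\nexamplelib==1.2.0  # pulled in transitively\n"
print("Proceed with deploy:", gate(sample))
```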

Hardening is one of the oldest security practices. Despite this, it has withstood the test of time and remains highly effective at reducing your attack surface and creating a virtual perimeter around your applications. Depending on the workload's functionality, hardening can take the form of segmenting your applications at the network level, applying zero trust to user IDs and permissions to reduce potential damage, and limiting application access to unneeded operating system services such as file persistence.

Hardening is a tedious task when it comes to highly volatile code that changes frequently as your application developers iterate. Even so, organized hardening practices can significantly reduce the risk of a successful attack.

Part of this involves isolating cloud workloads through proper network firewalling, which removes unwanted communication paths between them and other resources. Microsegmentation within cloud networks further reduces both the potential attack surface and an attacker's visibility into other network assets.
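
A hedged sketch of what such a network review might automate: the Python snippet below flags ingress rules that leave sensitive ports open to the whole internet. The rule structure and port list are simplified stand-ins, not a real cloud provider's security group API.

```python
# Minimal sketch: flag ingress rules that expose management or data ports to the world.

SENSITIVE_PORTS = {22, 3389, 5432, 6379}  # SSH, RDP, Postgres, Redis as examples


def world_open_rules(rules: list[dict]) -> list[dict]:
    """Return rules that allow any source to reach a sensitive port."""
    return [
        r for r in rules
        if r.get("cidr") == "0.0.0.0/0" and r.get("port") in SENSITIVE_PORTS
    ]


ingress = [
    {"port": 443, "cidr": "0.0.0.0/0"},     # public HTTPS is expected
    {"port": 22, "cidr": "0.0.0.0/0"},      # SSH open to the internet: flag it
    {"port": 5432, "cidr": "10.0.2.0/24"},  # database reachable only from the app subnet
]

for rule in world_open_rules(ingress):
    print("Tighten this rule:", rule)
```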

Application Control and Whitelisting for Zero Trust Runtime Cloud Security

By allowing only authorized applications to run in a workload, application control and whitelisting lock out malicious applications. Application control lets the customer set clear rules about what can and can't run within a cloud workload, making it fundamental to security.
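
To illustrate the idea (not any particular product's mechanism), the Python sketch below only launches a binary whose SHA-256 digest appears on an approved list. The ALLOWLIST digest is a placeholder, and in practice application control is enforced by the platform or a dedicated agent rather than a wrapper script.

```python
# Toy illustration of hash-based application control.
import hashlib
import subprocess
from pathlib import Path

# Placeholder digest of the approved build artifact; a real list would hold full SHA-256 values.
ALLOWLIST = {"d2f0a4c8...": "api-server"}


def sha256_of(path: Path) -> str:
    """Stream the file and return its SHA-256 hex digest."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()


def run_if_allowed(path: Path) -> None:
    """Refuse to execute anything whose digest is not on the allowlist."""
    digest = sha256_of(path)
    if digest not in ALLOWLIST:
        raise PermissionError(f"{path} is not on the allowlist (sha256={digest[:12]}...)")
    subprocess.run([str(path)], check=True)  # only approved binaries reach this point
```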

This measure tackles the attack vector of malicious programs trying to run inside your environment quite effectively. However, a wide gap remains open during the runtime portion of the application lifecycle, where activity that takes place during program execution, such as memory exploitation, evades the whitelisting mechanism. The runtime exposure is the longest in the workload lifecycle: it opens the moment the workload has loaded and remains open until the workload shuts down.

Memory Defense and Exploit Prevention

Given that the most advanced threats to cloud workloads run entirely in memory, securing application memory against attack is crucial for long-term security. Runtime is where your weakness is most acute and, regardless of how malicious code found its way into the workload, it is also the last line of defense. The central defensive capability, then, is the ability to prevent malicious code from running, which plays out in the complex arena of in-memory code execution.

Exploit prevention and memory protection solutions can thus provide broad protection against attacks, without the overhead of signature-based antivirus solutions or even next-gen AV platforms. They can also be used as mitigating controls when patches are not available.

One of the most promising approaches for achieving zero trust at runtime is moving target defense, which blocks the execution of any code not deemed trusted. The trust decision occurs at load time, following a whitelisting check, thus creating a chain of trust.

The technical approach behind moving target defense for achieving zero trust at runtime is to randomize the memory and OS interfaces at load time using a temporary randomization key, available only to the whitelisted code residing in the application executable, which is deleted once the randomization is done. The system can then deem that code trusted, creating a zero trust memory environment in which any other code that invades the runtime environment during the execution lifecycle is untrusted by definition and is deflected.
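
The Python toy below is a conceptual analogy only, not how memory randomization is actually implemented: at "load time" the entry points of a module are remapped under random names, and only the trusted caller receives the mapping, which plays the role of the randomization key. Code injected later that expects the original names has nothing to call.

```python
# Conceptual toy: trusted code gets the randomized mapping, injected code does not.
import secrets


def sensitive_read(path):   # stand-in for a privileged operation
    return f"read {path}"


def sensitive_write(path):  # stand-in for another privileged operation
    return f"wrote {path}"


def randomize(interfaces: dict) -> tuple[dict, dict]:
    """Return (randomized_table, key) where key maps original names to random ones."""
    key = {name: secrets.token_hex(8) for name in interfaces}
    table = {key[name]: fn for name, fn in interfaces.items()}
    return table, key


table, key = randomize({"read": sensitive_read, "write": sensitive_write})

# Trusted code received the key at load time and can resolve the randomized name:
print(table[key["read"]]("/etc/config"))

# Code arriving later only knows the original names, so its call is deflected:
print(table.get("read", "untrusted call deflected"))
```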

An additional byproduct of this approach is the ability to accurately identify the threat that was deflected, and then report it to the security team. By doing this, the moving target defense platform allows the security team to achieve situational awareness and enact policy changes that can respond to the threat origins and risk level.

One of the challenges for security architects aiming to protect runtime is the false promises made by security vendors evangelizing detection. In practice, runtime detection only becomes effective after some stages of the attack have already taken place. That is how detection works: it needs to "observe" a certain number of attack steps in order to identify the attack.

This leaves you in an unknown security posture. Meanwhile, the amount of telemetry collected by these agents is exploding, raises privacy issues once it is sent to your XDR/MDR/MSSP outsourcer, and, most of all, remains ineffective: many things can take place below the radar of telemetry.

Advanced attackers know how to deceive telemetry-based solutions by deleting their traces or injecting false telemetry, which makes covering their tracks easy. Unlike the world of endpoint protection, where it takes 30 days on average to uncover an attack, cloud workload customers don't have that luxury. Being a second too late can make the difference between a breach and a near miss, and the assets a workload exposes are the ones carrying the highest risk.

System Integrity Assurance

System integrity assurance checks the integrity of the underlying cloud system both before and after boot, and monitors the integrity of workloads while they run.
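
As a rough sketch of the workload-side half of this, the Python snippet below records a baseline of file digests and re-checks them while the workload runs. The watched paths are hypothetical, and real integrity assurance also relies on boot measurements exposed by the OS or the cloud platform.

```python
# Minimal sketch of runtime file integrity monitoring against a baseline.
import hashlib
from pathlib import Path

WATCHED = [Path("/app/server.py"), Path("/app/config.yaml")]  # illustrative paths


def digest(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()


def take_baseline(paths) -> dict:
    """Record digests at build/deploy time for every watched file that exists."""
    return {str(p): digest(p) for p in paths if p.exists()}


def verify(baseline: dict) -> list[str]:
    """Return the paths whose current digest no longer matches the baseline."""
    drifted = []
    for path, expected in baseline.items():
        p = Path(path)
        if not p.exists() or digest(p) != expected:
            drifted.append(path)
    return drifted
```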

These foundational capabilities can secure clouds against the worst cyber threats. However, the server-based nature of cloud workloads also means that client-grade protection is usually ill-suited to protect cloud technologies against cyber attacks. Client-grade technology is also too heavy for resource-constrained cloud workloads and often fails to secure clouds from zero-day attacks and fileless malware.

This is important because the consequences of a new zero-day attack breaking through to a server – either through lateral movement or infecting the server directly – are more damaging than if the same zero-day infected a desktop or laptop. It's due to this high potential for negative consequences that exploit prevention, which often doesn't come packaged with client-grade technology, is crucial for the servers that underpin cloud workloads.

By applying zero trust to your workload runtime with technologies such as moving target defense, you can deploy adequate protection without straining cloud resources. Moving target defense prevents memory-based attacks by randomizing OS kernels, libraries, and applications. In this way, fileless and zero-day attacks are stopped before they damage the servers they target.

Final Thoughts

As more companies move more of their operations to the cloud, cloud protection is becoming a crucial investment area for cybersecurity professionals. While traditional AV solutions aren't adequate for cloud environments, a zero trust strategy focused on the runtime, enabled by moving target defense, can provide a capable cloud security solution. With the power of moving target defense to secure application memory and prevent malicious code execution, users can be confident in their cloud security posture.

The threat model behind cloud workloads is new and very different from what we were all used to when protecting assets on-premises. It calls for a change from the "magical" detection and response paradigm to a zero trust first strategy: hardening your pre-production and runtime environments, achieving a solid security posture, and refining it iteratively as threats change.
