There’s an overwhelming number of tools available for managing container security vulnerabilities, which is great, but there’s no silver bullet. An enterprise production environment can easily contain millions of CVE findings, putting pressure on security and application teams to prioritize and remediate. It’s unrealistic to expect an individual team to handle more than a few dozen CVEs, so they turn to application security testing tools (SAST, DAST, and IAST). These are great for the custom code running in a container, but they do little to identify and reduce OSS vulnerabilities or the overall software attack surface.
Software and security leaders also advocate for “shift left” strategies, which certainly help teams catch vulnerabilities earlier in the SDLC. However, shifting left has its own problems, like high costs and challenges with organizational buy-in and adoption. Large organizations often have millions of vulnerabilities in their infrastructure, which developers can’t simply patch one by one. It’s overwhelming. Ultimately, the burden of fixing issues, even those detected earlier in the SDLC, rests solely on the over-leveraged dev team, and that is neither scalable nor a long-term solution.
Another tactic dev organizations use is patching, but as we’ll demonstrate, patching a production workload is like duct-taping a boat with hundreds of holes. There are simply hundreds of patches to apply, and each introduces a degree of risk, so considerable testing needs to take place after patching.
Patching is important and difficult
Because software is dynamic, connected, and constantly evolving, it is riddled with countless dependencies. Open source is the future: it has profound performance, functionality, and cost benefits, and OSS components are helping companies get to market faster than ever. But it’s practically impossible to pull in only the necessary functionality from an OSS component; it’s all too common to pull in a 2MB library for 6KB of code. Unless the company is running regular Software Composition Analysis (SCA) scans and generating SBOMs, it may have no idea about the security risks that accompany the 2MB library. And even if it does, it’s unclear how to prioritize and implement a viable patching regimen.
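As a sketch of what that looks like in practice with open-source tooling (the image name here is hypothetical), Syft can generate an SBOM for a container image and Grype can scan that SBOM for known vulnerabilities:

```
# Generate an SPDX SBOM for a container image, then scan it for known CVEs
syft myapp:1.0 -o spdx-json > sbom.spdx.json
grype sbom:./sbom.spdx.json
```

Running this on a schedule, rather than once, is what keeps the SBOM useful as the 2MB library’s own dependencies shift underneath it.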
As a result, the more OSS components in a workload, the more bloated it becomes. In our experience, 50% to 90% of a typical workload consists of unused code. Unfortunately, there are only a few options for mitigating software bloat, especially for OSS components.
Some companies are responding with statically compiled languages so the compiler itself can ensure that only the relevant custom code is included in a build, allowing greater control. This is not easy to do. It presents a different set of challenges, like language lock-in, increased complexity, and a smaller talent pool, and it doesn’t do much for OSS component bloat.
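For illustration (a sketch using Go, a common choice for static builds; the package path is hypothetical), the toolchain links only the packages a program actually references and can emit a single self-contained binary:

```
# Produce a fully static, stripped binary; the linker drops unreferenced code
CGO_ENABLED=0 go build -trimpath -ldflags="-s -w" -o app ./cmd/app
```

This trims custom-code bloat, but every OSS package the program imports still comes along in full.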
It also takes discipline and expensive resources to manually identify unnecessary OSS components and remove them post-build. OSS projects are constantly changing, so tracking dependencies and removing what’s unnecessary is often completely unrealistic. Dev teams can’t be expected to compete with package managers or manage millions of lines of code.
However enterprises approach the challenge of OSS vulnerabilities, they’re not exactly flying blind. There’s too much risk in deploying unscanned code to production, so enterprises are at least running some form of security scans to find vulnerabilities. Scan results then feed a risk assessment and a careful negotiation of “what do we fix?” and “what is acceptable risk?” Unfortunately, not all vulnerabilities can be fixed.
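That negotiation often gets encoded directly into the CI pipeline as a severity threshold. A minimal sketch with Trivy (image name hypothetical): fail the build on critical- and high-severity findings and treat everything below as accepted risk, at least for now:

```
# Gate the deploy: non-zero exit code only for CRITICAL/HIGH findings
trivy image --exit-code 1 --severity CRITICAL,HIGH myapp:1.0
```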
There’s no avoiding the need to patch OSS, but what is an organization supposed to do when an OSS project hasn’t patched a critical vulnerability? Wait weeks, or potentially months, until the project’s volunteer developers find the time to fix it? Most OSS projects don’t have enterprise-grade support or security, leaving corporate dev teams to try to write their own patches, which introduces yet another set of risks and complexity. It takes time to dig through code they didn’t write, understand it, and determine whether a fix is even feasible.
Because open source is built by a generous set of maintainers contributing their time and skills for free, those maintainers rarely have the time to do extensive testing on patches. So even if a patch does become available, dev teams need to determine whether it’s backward compatible with their existing components, which requires testing and validation. Depending on the complexity of the patch and any new dependencies it introduces, that testing and validation can be very time-consuming and expensive.
Virtual patching is a compromise solution
There’s usually a waiting period between the announcement of a vulnerability and the release of a functional, well-tested patch. Sometimes it’s hours, sometimes it’s days, and sometimes it’s months. But just because a patch is available doesn’t mean it gets deployed right away. This is where a virtual patching strategy fits in.
Until a vulnerability can be fixed with code, a virtual patch is applied at the network layer, usually with a Web Application Firewall (WAF). Nothing is actually “fixed” by a virtual patch, but it reduces an attacker’s ability to reach the vulnerability, and it may buy some valuable time. Virtual patch teams set up proactive monitoring and network- or system-level protections to specifically prevent or monitor a potential exploit. These give the appearance that a vulnerability has been remediated, even if it hasn’t. And again, a virtual patch can introduce production failure risks of its own if it has unintended consequences.
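To make this concrete, here’s a minimal sketch of what a virtual patch can look like as a ModSecurity WAF rule; the rule ID, endpoint, and message are illustrative, not taken from any specific advisory:

```
# Hypothetical virtual patch: block and log requests to a known-vulnerable
# endpoint until a real code fix can be deployed
SecRule REQUEST_URI "@beginsWith /api/v1/export" \
    "id:1000101,phase:1,deny,status:403,log,\
    msg:'Virtual patch: vulnerable export endpoint blocked pending upstream fix'"
```

Note that the rule only cuts off one path to the flaw; the vulnerable code itself is still sitting in the workload.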
Malicious actors are smart enough to know about virtual patches, so they will continue to attack, even if the vulnerability appears to have been patched. If the virtual patch is well designed, it will raise alerts when exploit activity is detected.
Virtual patch tools are very difficult to build and effectively implement. They need to provide deep network traffic inspection and “look around the corners” for compromises, exploits, and malicious activity. Virtual patches need to be very carefully designed, tested, and configured (i.e. “expensive and time-consuming”). Because of their low-level network integration, it’s possible to bring down the entire network with a small mistake or misconfigured virtual patch. As the old saying goes: with great power comes great responsibility.
Runtime Application Self Protection (RASP)
While WAFs and virtual patches provide protection at a higher level, RASP protects individual applications. It requires each application to be instrumented to provide deep insight into the application layer. RASP’s visibility into application behavior prevents existing vulnerabilities from being exploited. The downside is that the instrumented code imposes significant overhead in production and can affect performance. In addition, each app needs to be instrumented and tested, making RASP expensive to implement for anything beyond a few critical apps.
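For JVM applications, RASP instrumentation is typically attached as a Java agent at startup. A minimal sketch, with a hypothetical agent JAR and policy file rather than any specific vendor’s product:

```
# Attach a (hypothetical) RASP agent that instruments the app as it loads
java -javaagent:/opt/rasp/rasp-agent.jar \
     -Drasp.config=/etc/rasp/policy.yaml \
     -jar myapp.jar
```

This load-time bytecode rewriting is what gives RASP its visibility, and it’s also the source of the runtime overhead mentioned above.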
Eliminate vulnerable code, then patch
Every type of patching is expensive and time-consuming. OSS software bloat has led to 50-90% of container components being unused in production workloads. Companies are likely wasting their time patching code that doesn’t even enable functionality in their apps!
Most modern tooling, scanners, and techniques are only helpful because they provide visibility into workload vulnerabilities. Even an SBOM has limited use. Sure, there’s a ton of great information in an SBOM, but what do you do with the results? Sort and prioritize a list of thousands of vulnerabilities in the hope that all the critical- and high-severity CVEs can be resolved? It’s too much to deal with.
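Even basic triage means slicing scan output before anyone talks about fixes. A sketch using Grype’s JSON output and jq (image name hypothetical):

```
# Count findings by severity
grype myapp:1.0 -o json | jq -r '.matches[].vulnerability.severity' | sort | uniq -c

# Narrow the list to vulnerabilities that already have an upstream fix
grype myapp:1.0 --only-fixed -o json \
  | jq -r '.matches[] | "\(.vulnerability.id) \(.artifact.name) \(.vulnerability.severity)"'
```

Useful, but it’s still just a better-sorted version of the same overwhelming list.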
The best thing to do for application security is to eliminate all the unused components before thinking about patching. In our experience, up to 80% of the vulnerabilities can be removed by the simple hardening of an application container. In many cases, we’re able to remove most or all of the critical and high severity vulnerabilities in a single sweep.
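One concrete, low-drama form of that hardening is shipping on a minimal base image. Here’s a sketch of a multi-stage Docker build (image and binary names hypothetical) that compiles with a full toolchain but ships only the binary on a distroless base:

```
# Build stage: full toolchain, never shipped to production
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/app ./cmd/app

# Runtime stage: no shell, no package manager, so most OS-level
# CVEs never make it into the final image
FROM gcr.io/distroless/static-debian12
COPY --from=build /out/app /app
USER nonroot
ENTRYPOINT ["/app"]
```

Everything that never ships, from compilers to distro packages, is a vulnerability that never needs patching.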