Scanning software containers for security risks can be a double-edged sword. On the one hand, it’s important to understand what you’re deploying to production and know your risks. On the other hand, the results of the scan can be so confusing, even overwhelming, that you might not know where to start.
How do you treat a system made up of several containers that has thousands of vulnerabilities while your deadline is fast approaching?
One common approach is to simply take that huge list and sort it by severity, treating the critical and high vulnerabilities first and foremost. That’s commendable, but you’re likely to find that many critical vulnerabilities cannot be addressed by your dev team.
BCG found that open source software (OSS) powered 75% of public cloud workloads in 2020, and the use of OSS in business is quickly increasing, with 80% of IT departments planning to use OSS in the next 12 months. The proliferation of and increasing dependence on OSS mean there is an endless stream of vulnerabilities posing risk to your business in code your team did not write.
Vulnerability management is a slippery slope
Treating security issues outside of your own codebase is difficult. Sometimes there’s a patch available to fix a problem in a third-party library; occasionally, you may decide to write your own patch and accept the risk of future incompatibility.
And we haven’t even touched on undiscovered, zero-day vulnerabilities.
Treating vulnerabilities by severity is one of those good-sounding ideas that provides more comfort than anything else. Even if you treat the obvious critical issues and some high issues, you’re not really reducing the software attack surface because there are likely several more vulnerabilities in the code base waiting to be discovered.
In this article, I want to discuss how prioritizing and remediating security vulnerabilities by severity alone is a losing battle, and show you a more future-proof and rational approach to container security.
What is severity and who determines it?
It’s easy to lose sight of the actual definitions of “critical” and “high” when it comes to severity. Our gut response is probably something like, “Oh no! There’s a critical issue in our code. We must fix it immediately!”
But if someone were to ask what it really means, could you answer?
The severity of a vulnerability describes how much damage can be inflicted if a hacker exploits the product through that vulnerability. It doesn’t mean this is the most critical risk facing your system, which is how many developers mistakenly interpret it. Severity and other information related to a known issue are cataloged as Common Vulnerabilities and Exposures (CVE) entries. CVEs are tracked by several organizations, including the National Institute of Standards and Technology (NIST), which maintains the National Vulnerability Database (NVD).
The rubric for scoring severity is called the Common Vulnerability Scoring System (CVSS), “an open framework for communicating the characteristics and severity of software vulnerabilities.” Several factors are considered and classified into three categories: base, temporal, and environmental.
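To make this concrete, here is a minimal Python sketch of looking up a single CVE’s CVSS base score and severity from NIST’s NVD. The endpoint and response field names follow the public NVD 2.0 API as I understand it; treat the exact structure as an assumption to verify against the current documentation.

```python
# Minimal sketch: fetch a CVE's CVSS v3.1 base score/severity from the NVD 2.0 API.
# Field names reflect the NVD 2.0 response format; error handling is omitted.
import requests

NVD_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"

def cvss_summary(cve_id: str) -> dict:
    """Return the CVSS v3.1 base score, severity, and vector for a CVE."""
    resp = requests.get(NVD_URL, params={"cveId": cve_id}, timeout=30)
    resp.raise_for_status()
    cve = resp.json()["vulnerabilities"][0]["cve"]
    metric = cve["metrics"]["cvssMetricV31"][0]["cvssData"]
    return {
        "id": cve["id"],
        "baseScore": metric["baseScore"],        # 0.0-10.0
        "baseSeverity": metric["baseSeverity"],  # LOW / MEDIUM / HIGH / CRITICAL
        "vector": metric["vectorString"],        # e.g. "CVSS:3.1/AV:N/AC:L/..."
    }

if __name__ == "__main__":
    print(cvss_summary("CVE-2021-44228"))  # Log4Shell: base score 10.0, CRITICAL
```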
Though vulnerabilities can be classified as low, medium, high, or critical, they each provide the potential for a bad actor to enter and operate in your environment. Hacking is not a linear process; it is a meandering activity that finds chains or paths to a target destination. A malicious attack could begin with a low-severity exploit that provides the opportunity to move laterally and find a more dangerous exploit.
Severity is not an arbitrary rating of a vulnerability, but when an organization is facing a software composition analysis (SCA) scanner report containing 7,000 vulnerabilities, the reaction is almost always, “Let’s just take care of the critical- and high-severity problems.”
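In practice, that severity-only triage reduces to a simple filter over the scan report. The sketch below is purely illustrative; the `findings` list, its fields, and the CVE IDs are hypothetical placeholders, not the output format of any particular SCA tool.

```python
# Hypothetical illustration of severity-only triage over an SCA report.
# The findings list and CVE IDs below are placeholders, not real scanner output.
findings = [
    {"cve": "CVE-XXXX-0001", "severity": "CRITICAL", "package": "openssl"},
    {"cve": "CVE-XXXX-0002", "severity": "HIGH",     "package": "zlib"},
    {"cve": "CVE-XXXX-0003", "severity": "MEDIUM",   "package": "busybox"},
    # ...thousands more entries in a typical multi-container report
]

# "Let's just take care of the critical- and high-severity problems."
to_fix = [f for f in findings if f["severity"] in ("CRITICAL", "HIGH")]
deferred = [f for f in findings if f["severity"] not in ("CRITICAL", "HIGH")]

print(f"Treating {len(to_fix)} findings, deferring {len(deferred)}")
```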
Accepting the costs and risks of vulnerable software, infrastructure, and expectations
When organizations create software, there’s typically a security-related sign-off by InfoSec/Security and/or Compliance teams. For organizations that implement and use an SCA scanner to understand risk to production, InfoSec will send the curated list of CVEs back to DevOps and ask developers to address them.
The DevOps team will proceed to look for official software patches to fix the software. If patches are available, DevOps needs to work to determine whether those patches cause regression issues with any other portions of the software. This is extremely time-consuming and expensive.
If no patches are available and the vulnerability poses a serious threat, developers may, on rare occasions, have to go back to the drawing board, potentially making architectural changes to their product. Once they deliver a working custom patch, they need to ensure it is retested for every future release so it doesn’t create compatibility problems with the underlying software or third-party updates. This is extremely time-consuming and expensive.
The dev team might look at the idea of a custom patch and decide instead to use a new package altogether. This hot swap approach almost never works seamlessly, requiring research, implementation, custom development, testing, and an evaluation of whether the new package is worth switching to. This is extremely time-consuming and expensive.
Meanwhile, business executives, customers, sales teams, and other stakeholders are left wondering when (or if) the software will ever get to market. They may even put pressure on the technical teams to deliver a good-enough solution instead of a complete solution. This is extremely time-consuming and expensive.
Do you see a pattern here?
To save time and money, most organizations end up focusing on fixing all—or just a portion—of the critical and high severity issues. It tends to become a negotiated and compromised release where the teams make a trade-off between risk and time to market.
Hackers use POCs for exploits
While dev teams are trying to figure out how to close security holes, hackers are using the industry’s own tools against them. Most hackers—the run-of-the-mill variety, not state-sponsored ones—lack infinite resources and are very opportunistic. They look to security experts for published methods and code (called Proofs of Concept, or POCs) to exploit vulnerabilities.
Why spend time learning to hack when security experts are doing the hard work? Perhaps this makes hackers industrious!
POCs are extremely helpful for software publishers because they provide visibility into the specific conditions needed to reproduce the issue and fix the software. Not all vulnerabilities have POCs published, but many do. And when they’re published, it doesn’t take a genius to hack into an unpatched or unprotected system because the POC already provides the roadmap.
Given that it takes an average of 228 days for a company to detect they’ve been hacked, irrespective of entry point or CVE severity, hackers have enough time to cause lots of harm. Once inside, they can move around laterally, eavesdrop on “conversations,” gather passwords, and inflict as much damage as possible.
One could argue that sorting a CVE list by POC availability may be even more effective than sorting by severity alone. In fact…
Determine the probability of a CVE POC with the RapidFort Risk Score
Not every CVE has a POC, but we can reasonably predict whether a POC will eventually exist. RapidFort contains a revolutionary AI/ML model that predicts whether a POC will be published for a CVE within the next three months. We’re very excited about it and we call it the RapidFort Risk Score (RRS).
The RRS gives security and engineering teams a meaningful signal to help them prioritize CVEs in a more intentional and rational way. CVEs with existing POCs should be treated first, regardless of severity (though within that group, we recommend starting with the highest-severity CVEs). CVEs with POCs don’t usually make up a large share of the vulnerabilities within a container, so addressing them is a great way to close doors to the opportunistic hackers who rely on them.
Any CVE with a POC has an RRS of 100%. CVEs without POCs are evaluated based on many factors, which we use to provide a probability score for a POC. Vulnerabilities with a high RRS indicate that we have enough information to believe a POC will exist in the near future, so that vulnerability should be patched. Security teams can sort a vulnerability list by RRS and have another view of the most important vulnerabilities to treat.
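As a rough sketch of what that ordering could look like in practice, the snippet below sorts findings by POC availability first, then by RRS, then by severity. The `has_poc` and `rrs` fields, threshold values, and CVE IDs are illustrative assumptions for this article, not RapidFort’s actual API or report format.

```python
# Hypothetical sketch of POC/RRS-aware prioritization (not RapidFort's API).
SEVERITY_RANK = {"CRITICAL": 4, "HIGH": 3, "MEDIUM": 2, "LOW": 1}

findings = [
    {"cve": "CVE-XXXX-0101", "severity": "MEDIUM",   "has_poc": True,  "rrs": 100},
    {"cve": "CVE-XXXX-0102", "severity": "CRITICAL", "has_poc": False, "rrs": 85},
    {"cve": "CVE-XXXX-0103", "severity": "CRITICAL", "has_poc": False, "rrs": 12},
    {"cve": "CVE-XXXX-0104", "severity": "HIGH",     "has_poc": True,  "rrs": 100},
]

# Treat CVEs with published POCs first (highest severity within that group),
# then fall back to the predicted likelihood that a POC will appear (RRS).
ordered = sorted(
    findings,
    key=lambda f: (f["has_poc"], f["rrs"], SEVERITY_RANK[f["severity"]]),
    reverse=True,
)

for f in ordered:
    poc = "POC published" if f["has_poc"] else f"RRS {f['rrs']}%"
    print(f["cve"], f["severity"], poc)
```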
Though we obviously cannot predict the future, we can make some assumptions and improve the odds of protecting against an attack through any specific CVE by a factor of 100-200. It’s easy to get lost in a sea of vulnerabilities, and the RRS helps teams make sense of where to start.
Remove vulnerable code instead of treating it
Many organizations don’t realize that focusing on high-severity CVEs alone is misguided. Security risks are not necessarily decreased and security posture is not always improved. In fact, it can do just the opposite: instill a false sense of security that opens the door to potential exploits.
More importantly, severity ratings do not indicate the probability of an exploit due to that particular vulnerability. Just because a critical vulnerability is addressed does not mean the software is secure or protected from a zero-day attack.
RapidFort provides container and software optimization capabilities that are unmatched by any other platform. Our RapidFort Risk Score is just another example of how our platform helps our customers stay ahead of malicious behaviors and risk.