What do you do when your security scan reveals 7,000 vulnerabilities in a single aspect of your infrastructure? Worse, what do you do when you’re using golden base images and you’re still seeing reports of thousands of vulnerabilities within those images?
These are some of the questions keeping security professionals from a good night’s sleep. At some point, you have to ask if there’s any point in remediating tens of thousands of vulnerabilities. Even if you took care of just the critical vulnerabilities, what about the high-priority ones? After all, high-priority vulnerabilities can still cause significant damage.
When you write code with open source components for containerized applications, you deal with mountains of problems that you didn’t create and can’t fix.
There’s a much better solution than staying up all night and worrying.
In a recent interview with Security Magazine, RapidFort CEO Mehran Farimani shared valuable insights about securing containerized workloads. Specifically, he offered some solid advice for product and security teams when facing insurmountable numbers of vulnerabilities.
You Don’t Need More Visibility or Vulnerability Testing
There are many great tools available that provide amazing visibility into your code, infrastructure, and workload performance, like software composition analysis (SCA) scanners and other vulnerability scanners. Some of them are free and provide excellent results. The real question is: What do you do with all that visibility? What are you going to do with the results?
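To make the problem concrete, here’s a minimal triage sketch in Python. It assumes a Trivy-style JSON report (the `Results`, `Vulnerabilities`, and `Severity` keys are assumptions about your scanner’s output format; adjust them for your tool). Notice how little it actually accomplishes: it counts.

```python
# A minimal triage sketch, assuming a Trivy-style JSON report.
# The schema keys below are assumptions; adjust for your scanner.
import json
from collections import Counter

def severity_counts(report_path: str) -> Counter:
    """Tally vulnerabilities by severity across all scan results."""
    with open(report_path) as f:
        report = json.load(f)
    counts = Counter()
    for result in report.get("Results", []):
        for vuln in result.get("Vulnerabilities") or []:
            counts[vuln.get("Severity", "UNKNOWN")] += 1
    return counts

if __name__ == "__main__":
    for severity, n in severity_counts("scan-report.json").most_common():
        print(f"{severity:>10}: {n}")
```

Any scanner can produce this tally in seconds. Deciding what to do with 7,000 rows underneath it is the part no dashboard solves.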
The problem is intractable.
Infrastructure and DevOps teams today have tools for managing operations, CI/CD automation, Kubernetes, and so much more. Developers have an overwhelming selection of tools to scan their code, improve runtimes, and understand performance. All these tools are vital for developers to build fast, secure code. But that’s not where the security problems are.
A software development team could write rock-solid code with zero vulnerabilities, maximally optimized performance, and tests with 100% code coverage, and even change the world with their work. (Sounds nice, doesn’t it?) But as soon as they need to actually deploy and run that application code, they bundle it with open source components, and their perfect code is now swimming in a sea of vulnerabilities residing in third-party code.
The code gets packaged into virtual machines and containers, pulling in thousands of lines of third-party code the developer doesn’t control before it’s deployed into production. Once that happens, developers have no visibility into the components their workload is actually using. The dev team’s perfect, secure code is sitting atop layers and layers of interoperating open source libraries and frameworks in the underlying operating system, which is loaded with critical vulnerabilities the team can’t fix.
Would more visibility help these developers? No. We’re talking about emptying the ocean with a teacup—or, if we want to be generous, a really big pot. Large enterprises have millions of vulnerabilities across their infrastructure. Even in mid-size companies, we see containers in production with literally thousands of vulnerabilities. No matter how big the teacup or the pot, visibility into security issues is only so effective.
Remove Vulnerabilities for Better Container Security
Knowing where vulnerabilities exist is only helpful when you can actually treat them. Unfortunately, the only vulnerabilities most organizations can fix directly are the ones in their own custom code; the vast majority live in third-party components. Some organizations write custom patches for those components, but open source package updates break the patches, so there are significant forward-compatibility challenges.
There’s no point in playing security whack-a-mole.
It’s more important to understand how a workload functions than it is to understand where vulnerabilities exist. What teams need to know is simple (a short sketch follows this list):
- What software components are running in a workload
- What components actually get used
- What components can be eliminated
- What vulnerabilities exist after unused components are eliminated
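In set terms, those four questions collapse into a couple of operations. Here’s a conceptual sketch (the package names, usage data, and CVE IDs are all made up for illustration, and this is not any particular vendor’s API):

```python
# A conceptual sketch: given an installed-package inventory (e.g., from an
# SBOM) and the packages actually observed in use at runtime, compute what
# can be removed and which vulnerabilities remain. All data is hypothetical.

installed = {"openssl", "curl", "bash", "perl", "git", "python3"}
used_at_runtime = {"openssl", "python3"}   # observed during test/production runs
vulns_by_package = {                       # package -> CVE IDs (illustrative)
    "openssl": {"CVE-2023-0001"},
    "curl":    {"CVE-2023-0002", "CVE-2023-0003"},
    "perl":    {"CVE-2023-0004"},
}

removable = installed - used_at_runtime
remaining_vulns = {
    cve for pkg in used_at_runtime for cve in vulns_by_package.get(pkg, set())
}

print(f"Removable components: {sorted(removable)}")
print(f"Vulnerabilities left to remediate: {sorted(remaining_vulns)}")
```

The hard part, of course, is producing `used_at_runtime` accurately, which is where test coverage comes in, as discussed below.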
Dev, security, and infrastructure teams need less noise, and they need solutions that actually reduce it. Huge lists of insurmountable vulnerabilities don’t help anyone. Imagine a product manager saying to a DevOps engineer, “We’re looking to push a major new release next month. Our early scans show there are 6,247 vulnerabilities. Can you get those taken care of in the meantime?”
Funny as it may sound, this is the reality many organizations face. Now, what if the product manager were to say, “Our early scans show more than 6,000 vulnerabilities, but it also looks like we can eliminate about 70% of the container’s codebase, which takes roughly the same share of vulnerabilities with it. That brings us down to about 1,800 vulnerabilities. Can you get those taken care of in the meantime?”
Certainly, 1,800 vulnerabilities is overwhelming, but at least you know they’re actual risks to your production infrastructure. Your remediation efforts can now be directed and your container isn’t just a sitting duck with a huge attack surface that can’t be treated.
Focus on Test Cycles with Container Optimization
We think organizations should spend their time improving test cycles so that their test coverage better resembles production runtime behavior. With that in place, it's much easier to determine which components are actually used. Instead of chasing down vulnerabilities and moving to time-consuming golden base images that quickly go stale, teams should become more nimble by using software that intelligently cuts unnecessary components out of the system and sets a foundation for long-term manageability and success.
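As one illustration of the idea, suppose your test harness records every file path the workload touches (using whatever tracer you prefer). A rough sketch like the following, assuming a Debian-based image and a hypothetical accessed_paths.txt produced by that tracer, maps those paths back to the OS packages that own them:

```python
# A rough sketch: map file paths touched during a test run back to the
# Debian packages that own them, so test coverage translates into component
# usage. Assumes accessed_paths.txt was produced by whatever tracer you run
# alongside your test suite (the file name is hypothetical).
import subprocess

def owning_package(path: str):
    """Ask dpkg which installed package owns a file, if any."""
    result = subprocess.run(["dpkg", "-S", path], capture_output=True, text=True)
    if result.returncode != 0:
        return None  # path isn't owned by any installed package
    return result.stdout.split(":", 1)[0]  # dpkg -S prints "package[:arch]: /path"

with open("accessed_paths.txt") as f:
    used_packages = {pkg for line in f if (pkg := owning_package(line.strip()))}

print(f"Packages exercised by the test suite: {sorted(used_packages)}")
```

Packages that never appear in that set are candidates for removal; the more closely your test coverage mirrors production behavior, the more confident you can be in the cut list.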
Organizations that focus on golden base images are wasting their own time. It’s an outdated methodology that keeps security and infrastructure teams scrambling to keep up with the industry. In the short term, they’re constantly revising these base images because open source software changes all the time. In the long term, they’re contending with the momentum of an organization that has evolving business needs.
Most importantly, they’re preventing the organization from being nimble and agile.
Having a complete test suite is a more effective way of staying nimble and focusing on what’s important. When software has been optimized, an engineering team can be confident in deploying a system that’s efficient and has a minimal software attack surface. It’s a waste of their time to migrate to Alpine or distroless base images (both of which still carry plenty of built-in vulnerabilities) and then pour a ton of time into security anyway.
Developers’ time is very valuable. It’s much better spent building new features and software that addresses customer issues than on aligning with security requirements that are outdated the moment they’re written down.
We believe a new deployment model is needed where developers write high-value code and automated tooling in a CI/CD pipeline eliminates a significant majority of security concerns. Security teams should have insight and input to this process, defining profiles that meet their organization’s security requirements without becoming a roadblock or adding friction to the SDLC.
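As a sketch of what that could look like, here’s a profile-driven gate that a CI/CD stage might run after optimization. The profile keys and thresholds are hypothetical, not any real product’s schema:

```python
# A minimal sketch of a security-profile gate for a CI/CD pipeline. The
# profile format is hypothetical; the point is that the security team
# defines the policy once and the pipeline enforces it on every build.
import sys

SECURITY_PROFILE = {     # defined by the security team
    "max_critical": 0,
    "max_high": 5,
}

def gate(scan_summary: dict) -> None:
    """Fail the build if the post-optimization scan exceeds the profile."""
    failures = []
    if scan_summary.get("CRITICAL", 0) > SECURITY_PROFILE["max_critical"]:
        failures.append("critical")
    if scan_summary.get("HIGH", 0) > SECURITY_PROFILE["max_high"]:
        failures.append("high")
    if failures:
        print(f"Security gate failed: too many {' and '.join(failures)} findings")
        sys.exit(1)
    print("Security gate passed")

if __name__ == "__main__":
    # In a real pipeline, this summary would come from the scanner step.
    gate({"CRITICAL": 0, "HIGH": 3})
```

A gate like this keeps security teams in the loop without putting them in the critical path of every release.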
Get Started with Software Optimization
Our new deployment model is not a far-fetched idea. In fact, it is available today to any forward-thinking company that wants to get away from outdated software and infrastructure security methods.
RapidFort is the only product offering an improved Software Bill of Materials (SBOM), which we call a Real Bill of Materials (RBOM). It provides a thorough breakdown of unused components in a workload, machine-learning-driven estimates of what can be removed, and automated optimization based on user-specified profiles. It plugs directly into any existing CI/CD pipeline and optimizes containers with minimal overhead.
We want the whole world performing vulnerability testing and container optimization (studies show only 48% of companies scan their infrastructure at all!), so we are offering our container and workload scanning capabilities for free.