Every production tech stack today uses open source software components containing known and unknown risks and security vulnerabilities. That’s the accepted risk of doing business on the internet. There is always a balance between cybersecurity, speed, and cost.
However, as the result of a United States Air Force research grant, RapidFort has established that 50-90% of software components are unused. That means there’s plenty of superfluous code available for an opportunistic hacker to leverage. Worse, the average security breach takes 228 days to discover. Once a hacker is in, they have plenty of time to look for vulnerabilities to exploit, enabling lateral movement.
That’s a lot of opportunity and territory for malicious actors to live off the land—and it’s largely preventable!
It starts with understanding the inventory of components. Companies need to examine the software components in their infrastructure and manage their software supply chain and container security. Software supply chains create systemic risk that arises from using software components or applications not developed in-house. If you’re using containers, you’re using open source software that probably was not developed by your own teams.
Several container risk mitigation strategies exist today, including Golden Base Images, Static/Dynamic Application Security Testing (SAST/DAST), policy agents, and container slimming. The best option available today, and the one we’re betting our futures on, is SCA scanning followed by optimization that automates vulnerability removal. Let’s explore!
What is SCA Scanning and why is it important?
In its simplest terms, Software Composition Analysis (SCA) Scanning tells you what’s in your workload. It’s a scanning process that looks at your container metadata, lists all your components, and cross-references them with license and vulnerability databases. The result is a Software Bill of Materials (SBOM): a “food label” listing the ingredients in your container.
SCA Scanning is related to vulnerability scanning: one identifies the components, the other identifies the vulnerabilities in those components. In addition to creating an inventory of software packages, you can see which vulnerabilities are in each package. This information is critical for understanding known weaknesses in your infrastructure and provides prioritization guidance for remediation efforts.
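To make this concrete, here’s a minimal sketch of that cross-referencing step in Python. The hard-coded component list stands in for what a scanner would actually extract from your container metadata, and the lookup uses the public OSV.dev vulnerability database; it illustrates the idea rather than replacing a real scanner.

```python
# Minimal sketch: cross-reference an SBOM-style component list against the
# public OSV.dev vulnerability database. A real SCA scanner extracts this
# component list from container metadata; here it is hard-coded for clarity.
import requests

# Illustrative inventory, as an SBOM would list it: (ecosystem, name, version)
components = [
    ("Maven", "org.apache.logging.log4j:log4j-core", "2.14.1"),
    ("PyPI", "flask", "1.0"),
]

for ecosystem, name, version in components:
    resp = requests.post(
        "https://api.osv.dev/v1/query",
        json={"package": {"name": name, "ecosystem": ecosystem}, "version": version},
        timeout=30,
    )
    resp.raise_for_status()
    vulns = resp.json().get("vulns", [])
    ids = [v["id"] for v in vulns]
    print(f"{name}@{version}: {len(ids)} known vulnerabilities {ids[:5]}")
```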
It’s very important to run SCA scans: without them you are flying blind, and federal regulations increasingly require them. Additionally, scanning all your infrastructure before and after it’s deployed to production is considered good software development hygiene and a basic cybersecurity practice.
Here is the clear shortfall in scanning: most companies only scan on build and deploy. So they scan on Monday when they build and on Tuesday when they deploy, but what happens when a new vulnerability is discovered on Wednesday?
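One way to close that gap, sketched below under some assumptions (a registered SBOM and a baseline of findings saved at the last scan, both as simple JSON files), is to re-check the same component list against the vulnerability database on a schedule and alert only on findings that have appeared since:

```python
# Minimal sketch of closing the "Wednesday gap": re-check a previously
# registered component list against the vulnerability database on a schedule
# and report only findings that were not known at the last scan.
# File names and layout are illustrative assumptions, not a product feature.
import json
import requests

def known_vuln_ids(ecosystem: str, name: str, version: str) -> set[str]:
    """Query OSV.dev for the vulnerability IDs currently known for a component."""
    resp = requests.post(
        "https://api.osv.dev/v1/query",
        json={"package": {"name": name, "ecosystem": ecosystem}, "version": version},
        timeout=30,
    )
    resp.raise_for_status()
    return {v["id"] for v in resp.json().get("vulns", [])}

# sbom.json: [{"ecosystem": "PyPI", "name": "flask", "version": "1.0"}, ...]
# baseline.json: vulnerability IDs recorded when the image was last scanned.
with open("sbom.json") as f:
    components = json.load(f)
with open("baseline.json") as f:
    baseline = set(json.load(f))

current = set()
for c in components:
    current |= known_vuln_ids(c["ecosystem"], c["name"], c["version"])

new_since_last_scan = current - baseline
if new_since_last_scan:
    print(f"New vulnerabilities since last scan: {sorted(new_since_last_scan)}")
```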
SCA scanning is just the first part of building a secure SDLC, bolstering your security posture, and managing your software attack surface and supply chain risk.
What are the differences between SCA Scanners?
Not all SCA scanners are created equal. Here’s our list of characteristics that indicate whether you’re running a high-quality scanner:
- Visibility: Does your scanner find all the components in your images?
- Depth: Are you scanning the entire contents of your container? Are you scanning the OS? Custom code? OSS components?
- Accuracy: How many false positives or negatives are detected?
- Linking components to actionable external information: What information do you get from your scans, and from where? And what do you do with it?
- Scan speed: Will a scan take 2-3 minutes? Or 15 seconds?
As you might expect, there are a lot of players in the SCA scanning space. They compete on reducing noise: minimizing false positives and negatives, correctly identifying as many packages as possible, and accurately associating vulnerabilities with those packages. Many scanners have historically relied on proprietary vulnerability databases, selling customers on the idea that one vendor’s proprietary database is better than another’s.
Regardless of vendor, scans are often so noisy that few of them are actively reviewed for actionable insights. In fact, that’s one of the fundamental limitations of this technology: there’s not enough actionable information. You get a long list of components and it’s unclear what you should do about it.
What should I consider before buying an SCA Scanner?
Here’s our list of factors to consider when purchasing an SCA scanner:
- Tech stack compatibility: Does it work with Linux? Windows? Or all the technologies you're running?
- Database coverage: Which databases are being used and how much of your infrastructure can they cover?
- Standalone vs. bundled: Are you buying one product or something much larger?
- Pipeline workflows: Can reporting be customized to fit your organization? Are there hooks to pull information from CI/CD? (See the sketch after this list.)
- Dashboards and reporting: Can your entire organization see the scan results, or do you need to pipe data into a separate dashboarding tool? Are results presented in a consumable way that makes them actionable?
- Actionable alerts: Does the scanner produce alerts your teams can act on?
- Audit trail support for compliance: Will your scan logs meet your audit requirements? Or will you need to build something custom?
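As an example of the pipeline-workflow and actionable-alert items above, here’s a minimal CI-gate sketch: it reads a scan report produced earlier in the pipeline and fails the build when findings at or above a chosen severity are present. The report schema and file name are assumptions; adapt them to whatever your scanner actually emits.

```python
# Minimal CI-gate sketch: parse a scanner's JSON report and fail the pipeline
# when findings at or above a chosen severity are present. The report schema
# ({"findings": [{"id": ..., "severity": ...}]}) is a hypothetical stand-in.
import json
import sys

SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}
FAIL_AT = "high"  # fail the build on high or critical findings

with open("scan-report.json") as f:  # path produced by your scan step
    report = json.load(f)

blocking = [
    f for f in report.get("findings", [])
    if SEVERITY_RANK.get(f.get("severity", "").lower(), 0) >= SEVERITY_RANK[FAIL_AT]
]

for finding in blocking:
    print(f"BLOCKING: {finding['id']} ({finding['severity']})")

sys.exit(1 if blocking else 0)
```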
Vendors differentiate themselves on traits like automatic vulnerability prioritization, how frequently their vulnerability databases are updated, and secrets detection in codebases. Some even offer automatic patch remediation, but this feature has the potential to disrupt code integrity, so its popularity has waned. The push to differentiate has led to features like merge confidence ratings, which indicate the likelihood that an automated patch will break something.
The scanning market is filled with features that sound great on paper but are not accurate enough, or practical enough, to rely on. So confidence breaks down and utilization drops over time. Even companies with disciplined, long-term scanning practices have to continually drive for compliance, which is a demanding process to manage.
But none of these points illustrate the real issue with scanning: low adoption rates and a lack of trust in the scan results.
Why don’t more people use SCA Scanners?
The Linux Foundation reports that only 48% of members use scanners (though 88% plan to use them by 2023). Despite enormous growth and interest in the space, very few security professionals know what to do with a scan result once they have it. It can be exhausting to contemplate remediating tens or hundreds of thousands of vulnerabilities across a large infrastructure just to secure a production application.
There are some high-performing companies that employ and act upon SCA scan results. Typically, a security team reviews the various software components to determine what is authorized to run. Unapproved components are reported and escalated to determine why they’re being used, and SCA lifecycles are established to continuously reduce unused components and attack surface.
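A simple version of that review can even be automated. Here’s a minimal sketch, assuming an SBOM and an approved-components list stored as JSON files, that flags anything not on the allow list for escalation:

```python
# Minimal sketch of an "approved components" check: compare an SBOM-style
# inventory against an allow list and escalate anything unapproved.
# The file formats below are illustrative assumptions.
import json

with open("sbom.json") as f:          # [{"name": "flask", "version": "1.0"}, ...]
    components = json.load(f)
with open("approved.json") as f:      # ["flask", "requests", ...]
    approved = set(json.load(f))

unapproved = [c for c in components if c["name"] not in approved]
for c in unapproved:
    print(f"ESCALATE: {c['name']}@{c['version']} is not on the approved list")
```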
The flip side of the 48% number is that 52% of companies aren’t doing any scanning at all! Many don’t have the engineering bandwidth and can’t afford to establish disciplines around scans and remediation. Those that try often get so many false positives that they give up.
It’s too much for many organizations to handle. It can be overwhelming and create a sense of futility.
At these less-mature organizations, scanning is typically done only at build and release. Once a container is built and scanned, then deployed and scanned, it can run for months without any proactive protection of what’s actually running in production.
Though OSS is trusted nearly universally, there’s a problematic lack of trust in scanning, or in an organization’s ability to establish a scanning practice. These tools don’t easily answer the question: other than ensuring I’m not using dangerous components, how does SCA scanning make me safer?
What are best practices around SCA Scanning?
The best way to employ SCA scanners is to adopt a practice in which your containers have a maximum lifespan of 30 days. This requires frequent scanning in pre- and post-production, which means scanning should be an integral part of a secure software development lifecycle. Registering SBOMs is also critical so that alerts are triggered when new vulnerabilities are discovered between scans.
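Enforcing the 30-day lifespan can start with something as simple as the sketch below, which assumes the Docker SDK for Python is available and flags running containers that have exceeded the maximum age so they can be rebuilt from a freshly scanned image:

```python
# Minimal sketch, assuming the Docker SDK for Python (pip install docker):
# flag running containers older than a 30-day maximum lifespan so they can
# be rebuilt from a freshly scanned image.
from datetime import datetime, timezone

import docker

MAX_AGE_DAYS = 30
client = docker.from_env()

for container in client.containers.list():
    # "Created" is an RFC 3339 timestamp; truncate to seconds for easy parsing.
    created = datetime.fromisoformat(
        container.attrs["Created"][:19]
    ).replace(tzinfo=timezone.utc)
    age_days = (datetime.now(timezone.utc) - created).days
    if age_days > MAX_AGE_DAYS:
        print(f"REBUILD: {container.name} is {age_days} days old "
              f"(image {container.image.tags})")
```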
Most real-world use of containers shows short container life expectancy, which is promising from a security perspective. However, there’s typically no feedback loop from production back to development.
The recent Log4j issue is a perfect example. Everything looked great in pre-production and in production, until it didn’t. Many companies running Log4j didn’t know what to do because they didn’t even know where it was running in production, or how many instances they had.
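This is exactly the question a registry of SBOMs answers. As a minimal sketch, assuming one SBOM JSON file per deployed image in a directory (the layout and field names are assumptions), finding every place a package like log4j-core appears takes only a few lines:

```python
# Minimal sketch of why registered SBOMs matter: given one SBOM file per
# deployed image, answer the Log4j-style question "where is this package
# running?" in seconds. File layout and field names are assumptions.
import json
from pathlib import Path

PACKAGE = "log4j-core"
sbom_dir = Path("registered-sboms")   # e.g. one JSON file per deployed image

for sbom_file in sbom_dir.glob("*.json"):
    components = json.loads(sbom_file.read_text())
    hits = [c for c in components if PACKAGE in c["name"]]
    for c in hits:
        print(f"{sbom_file.stem}: {c['name']}@{c['version']}")
```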
Another best practice is to scan continuously and employ different tools over time. Don’t get comfortable with one vendor’s scan results. There are many options on the market, and new ones crop up frequently, some of which are entirely free. Older scanning companies use dated techniques that may report 10 vulnerabilities whereas new ones will report 120.
There’s a big difference between old and new scanning tech, so it’s best to continuously try new scanners and see what you can find. You can test this yourself using RapidFort's free scanner, then compare its results with your current scanner’s and see the deltas.
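Comparing results doesn’t have to be laborious. Here’s a minimal sketch that reduces two scanners’ reports to sets of vulnerability IDs and prints the deltas; the report schema is a hypothetical stand-in for each tool’s actual output format:

```python
# Minimal sketch of comparing two scanners on the same image: reduce each
# report to a set of vulnerability IDs and look at the deltas. The schema
# ({"findings": [{"id": ...}]}) is a hypothetical stand-in for real output.
import json

def vuln_ids(path: str) -> set[str]:
    with open(path) as f:
        return {finding["id"] for finding in json.load(f)["findings"]}

scanner_a = vuln_ids("scanner-a-report.json")
scanner_b = vuln_ids("scanner-b-report.json")

print(f"Found by both: {len(scanner_a & scanner_b)}")
print(f"Only scanner A: {sorted(scanner_a - scanner_b)}")
print(f"Only scanner B: {sorted(scanner_b - scanner_a)}")
```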
Get started with industry-leading scanning—for free
RapidFort’s scanning tool is free for everybody, forever. Though there are several free OSS options out there, they do not offer hosted, feature-rich scans that allow you to scan unlimited containers. Nor do they integrate as easily with your existing CI/CD stack, and few of their reports are downloadable and shareable. RapidFort’s scanner will save you time by quickly plugging into your CI/CD pipeline and eliminating the need to host and maintain your own OSS instance.
Our scanner identifies vulnerabilities, and our profiler allows you to remove unused components, which can reduce the number of vulnerabilities by up to 90%. Removing unused components is the key ingredient in making scanning more effective and actionable because it legitimately and measurably makes your infrastructure safer.
Remediations don’t need to be performed for software that no longer exists in your containers. Less code. Lower risk. Fewer problems.
We’d love to talk to you about scanning strategies, software optimization, and container hardening best practices. Just reach out to us and we’ll get right back to you. In the meantime, we encourage you to request access for a free trial.