Every DevOps team looks at new tech with the hope of shipping faster without any downsides. But just like most things in life, the conveniences offered by container technology have their tradeoffs. Containers are a fantastic technology, but they present new security challenges that teams must address with diligence.
Container hardening, also known as image hardening, enhances security by removing unnecessary components. While this sounds simple in concept, it’s actually quite complex. Implementing Docker security best practices, like keeping images updated and containers small, requires a substantial amount of time, money, and expertise. Few organizations have the resources or maturity to pull it off.
There are tools available to help ease the container hardening burden on development teams. RapidFort takes care of the complicated hardening tasks automatically while providing enhanced visibility and increased security.
What Is Container Hardening?
Container hardening enhances the security and resilience of a software container image. This is done by analyzing its current security status and making improvements by reducing the potential attack surface in various ways.
Hardening a container removes unnecessary software, services, and configuration settings that attackers could exploit. Part of the process involves deleting unneeded files and removing unused components. Proactive tactics, like applying software updates, also fall under container hardening, while broader security measures like configuring firewalls protect the environment as a whole.
The container hardening process gives teams confidence that they’re operating in a secure environment without compromised code or components.
Docker Security Issues
Docker is a powerful, popular containerization tool teams use to deploy and run applications in an isolated environment. Just like any piece of connected technology, Docker has known risks that can be mitigated by following Docker security best practices.
- Base image vulnerabilities: Docker images rely on base images that may contain vulnerabilities an application container can inherit.
- Microservices: These increase data traffic and the complexity of network and access control.
- Privilege escalation: Root privileges make it possible for a container attacker to gain root access to the host system.
- Container misconfiguration: Misconfigured containers can cause unauthorized access to sensitive data.
- Short life spans: Monitoring containers during runtime can be difficult, and lack of visibility into a changing container environment adds more security concerns.
By following security best practices for working with Docker containers and using container hardening tools, teams can successfully implement Docker into DevOps workflows. RapidFort can help teams address base image and microservices vulnerabilities.
10 Docker Security Best Practices
1. Only Use Trusted Images
Many third-party registries lack governance policies for image storage, so ensuring image trust can be challenging. Teams should know what images are available for use on the Docker host, understand image origin, and review image content. It’s not uncommon for widely- and freely-available Docker images to contain malware.
For image verification and to install only verified packages into images, use Docker Content Trust (DCT). RapidFort also provides free hardened images for a wide range of popular containers.
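As a brief sketch of how this works in practice (the image names below are illustrative), Docker Content Trust is enabled with an environment variable, after which pulls and pushes require signed images:

```shell
# Enable Docker Content Trust for this shell session;
# subsequent pulls and pushes require valid signatures.
export DOCKER_CONTENT_TRUST=1

# This pull succeeds only if the image is signed.
docker pull nginx:1.25

# Inspect the signature data attached to a repository.
docker trust inspect --pretty nginx
```

With DCT enabled, an unsigned or tampered image fails the pull rather than silently entering the environment.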
2. Keep Images Updated
Ensure container images are updated to the latest security patches, or employ a container hardening tool to address these vulnerabilities directly without patching. Patches can sometimes introduce their own security risks, or they may not be backward compatible, creating more work for the development team.
Instead of patching, RapidFort removes the components that contain the vulnerability, eliminating the need to patch roughly 80% of the time.
Additionally, it’s important to keep the host and Docker Engine updated. Because containers share the kernel with the host system, a kernel exploit inside a container directly impacts the host.
When building containers, it’s critical to keep all packages updated in addition to updating images, which is more associated with container maintenance.
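As an illustrative sketch (the base image tag is an assumption), package updates can be applied at build time in the Dockerfile itself:

```dockerfile
# Pin a specific, recent base image rather than a floating "latest".
FROM debian:12-slim

# Apply the latest security patches for all installed packages,
# then clean the apt cache to keep the layer small.
RUN apt-get update \
 && apt-get upgrade -y \
 && rm -rf /var/lib/apt/lists/*
```

Rebuilding the image regularly from this Dockerfile keeps both the base image and its packages current.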
3. Container Resource Configuration
Control groups, or cgroups, are a Linux kernel feature used to allocate resources and limit what containers can consume. Cgroups ensure each container gets a fair share of resources, like memory and CPU, while also making sure no single container can exhaust the host.
Docker allows containers to use as much CPU and RAM resources as the host kernel allows by default, so limiting the resources a container uses can benefit security and performance.
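As a sketch (the image name is illustrative), resource limits are set with flags on `docker run`:

```shell
# Cap the container at 512 MB of RAM and one CPU core;
# the kernel's cgroups enforce these limits at runtime.
docker run -d --memory=512m --cpus=1 nginx:1.25

# --pids-limit additionally guards against fork bombs
# inside the container.
docker run -d --memory=512m --cpus=1 --pids-limit=100 nginx:1.25
```

Without these flags, the container can consume as much CPU and memory as the host kernel allows.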
4. Keep Images Small
Reducing the number of system files and applications in a container image means there are fewer points susceptible to attack and fewer options for lateral movement across the network. However, this method requires significant discipline within a development team, along with considerable time and investment, and can be out of reach for organizations depending on their size and resources.
By implementing a container hardening solution, teams both large and small can more easily keep images small by eliminating unused components and related vulnerabilities. A minimal base image also yields better performance: it boots faster and consumes fewer resources. This minimizes risk and enables teams to deploy more secure applications.
5. Restrict Network Port Accessibility
During container creation, it’s common to allow access to network ports for troubleshooting or debugging. Once the image moves into production, it’s critical to remove access to these extra network ports.
The -p parameter in the Docker command line can set strict limits on port mapping from host to container. Ports not specified in the command will be inaccessible, even if exposed by the Dockerfile.
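For example (image name illustrative), a port can be published only on the host's loopback interface so it is unreachable from other machines:

```shell
# Publish container port 80 as host port 8080, bound to
# localhost only; remote hosts cannot reach the service.
docker run -d -p 127.0.0.1:8080:80 nginx:1.25

# Without -p, ports EXPOSEd in the Dockerfile stay unpublished.
```

Binding to a specific interface like this is a simple way to keep debug or admin ports off the public network.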
6. Rolling & Immutable Tags
A Docker tag uniquely identifies a Docker image and allows users to deploy a specific version of an image. One Docker image can have multiple associated tags. Every time a new Docker image is published, it’s best practice to also update the image’s tags to ensure users get the latest version.
Rolling tags are best practice in container development, even though a rolling tag may not always point to the same image. They make it easy to track a given release line of a development tool while giving teams confidence that everyone is picking up the latest updates.
An immutable tag is a static tag that always points to the same image, which can be useful when teams depend on a specific image revision. For containers in a production environment like Kubernetes, it is best practice to use immutable tags. This will ensure the deployment won’t be affected if a new revision breaks existing functionality.
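The difference can be sketched with two pulls (image names are illustrative, and the digest below is a placeholder, not a real value):

```shell
# Rolling tag: convenient in development, but the underlying
# image can change between pulls as new revisions are published.
docker pull python:3.12-slim

# Immutable reference: pinning by digest always resolves to the
# exact same image, which is safer for production deployments.
docker pull python@sha256:<digest-of-a-known-good-revision>
```

Kubernetes manifests and other production deployment configs can reference the digest form so a new upstream revision never changes what actually runs.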
7. Non-Root Containers and Images
There are two types of container images: root and non-root. It is best practice to use non-root images in production environments because they add a layer of security. The tradeoff is that privileged tasks, like editing configuration files or modifying network settings, may be denied to non-root users.
Docker containers run as the root user by default, meaning anyone with access to the container can perform any task, regardless of privilege level. This makes root containers less secure. Non-root containers do have some disadvantages for local development, including permission conflicts and issues with specific services or utilities, but in most cases, running as root introduces unnecessary security risk. Non-root containers are preferred for their enhanced security and broader platform compatibility.
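A minimal Dockerfile sketch of switching to a non-root user (the base image, paths, and binary name are illustrative):

```dockerfile
FROM debian:12-slim

# Create an unprivileged system user and group for the application.
RUN groupadd --system app && useradd --system --gid app app

# Give the application files to that user at copy time.
COPY --chown=app:app ./server /usr/local/bin/server

# All subsequent instructions, and the running container itself,
# execute as the non-root user.
USER app

CMD ["/usr/local/bin/server"]
```

If the `USER` instruction is omitted, the container runs as root by default.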
8. Execute One Process per Container
Containers should only execute one task at a time.
While teams may find themselves in situations where two or more processes have to run simultaneously in the same image, it is best practice to keep containers as clean and modular as possible. Applications can be decoupled into multiple containers for easier scaling and container reuse.
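As a sketch, Docker Compose is one common way to decouple an application into single-purpose containers (the service names and the `example/web-app` image are assumptions):

```yaml
# docker-compose.yml: one process per container, composed together.
services:
  web:
    image: example/web-app:1.0    # hypothetical application image
    ports:
      - "127.0.0.1:8080:8080"
    depends_on:
      - db
  db:
    image: postgres:16            # database runs in its own container
```

Each container now does one job, so the web tier can be scaled or rebuilt without touching the database.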
9. Leave Your Data Out of the Container
When building a container image from a Dockerfile, the build context often contains API secrets, log files, and other data that shouldn’t end up in the final container image.
Use a .dockerignore file to exclude files and directories from the build context and prevent them from being included in the image. Fewer components mean fewer attack vectors.
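A minimal example (the entries are illustrative of typical sensitive or unnecessary paths):

```
# .dockerignore: keep secrets, logs, and local state out of the build context
.env
*.log
.git/
node_modules/
secrets/
```

Anything matched here is never sent to the Docker daemon during the build, so it cannot leak into an image layer.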
10. Reduce Network Exposure
Containers should have as little network access to the public internet as possible to reduce breach attempts. Proper network segmentation and application architecture can mitigate the security risks of deploying containers as public-facing web servers.
Never expose the Docker daemon socket. Because the socket is owned by the root user by default, exposing it provides root access to the host.
How to Implement Docker Security Practices
Scan the Image
The first step to hardening a Docker image is to scan and analyze the chosen base image for vulnerabilities.
By running the image against a chosen image scanning tool, teams can identify issues and obtain a baseline from which to layer on additional protections.
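As one concrete example (Trivy is just one widely used open source scanner, and the image name is illustrative):

```shell
# Scan an image and report only the most urgent findings.
trivy image --severity HIGH,CRITICAL nginx:1.25

# Fail a CI pipeline when critical vulnerabilities are present.
trivy image --exit-code 1 --severity CRITICAL nginx:1.25
```

Running the scan in CI gives every build the same baseline check before additional protections are layered on.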
Review the Analysis
Next, determine which issues are high-priority security concerns and which can be overlooked or postponed. Not every result from the scan will need immediate action.
Heavy base images, like those from popular operating systems or programming frameworks, will present a number of common vulnerabilities and exposures (CVEs). Robust image hardening tools offer CVE severity reports and CVSS scores, or other measures of risk such as the Rapid Risk Score (RRS), that teams can use to prioritize issues by urgency.
Implement Mitigation Layers
Now it’s time to apply protections to the Dockerfile. Create a new intermediary Dockerfile that sits between the base image being hardened and the downstream application image.
This may also involve tasks like updating outdated packages, patching config file problems, and applying mitigations to resolve CVEs from the previous step.
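A hedged sketch of such an intermediary (the base image and removed packages are assumptions; real hardening would be tailored to the scan results):

```dockerfile
# Hardened intermediary: sits between the public base image
# and the application images built on top of it.
FROM debian:12-slim

# Apply outstanding security patches and drop package caches.
RUN apt-get update && apt-get upgrade -y \
 && rm -rf /var/lib/apt/lists/*

# Remove tooling an attacker could leverage at runtime;
# "|| true" tolerates packages that are not installed.
RUN apt-get purge -y --auto-remove curl wget || true
```

Application teams then build `FROM` this hardened image instead of the raw upstream base.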
RapidFort’s sophisticated container hardening tool automatically rebuilds the Docker image after unneeded files have been removed during the hardening process. This gives teams full confidence in the security of their container without performing additional tasks.
Manually Scan for Other Vulnerabilities
Automated scanning can’t find every security risk: some vulnerability classes won’t be matched by image analysis. Downloading binaries in a Dockerfile, for example, can introduce malicious code that no scanner flags.
However, regularly reviewing Dockerfiles by hand will reduce exposure to these kinds of low-level weaknesses.
Docker Security with RapidFort
Following these security best practices sounds good in theory, but realistically, these measures are out of reach for many organizations.
Reducing the size of images and installing security patches to fix a range of vulnerabilities is time intensive and requires an extremely mature DevOps team. Facilitating this process efficiently and safely is complex and expensive.
RapidFort eliminates these manual, costly steps. Teams of any size can easily scan for vulnerabilities and harden images automatically.
Scan one or more containers from the command line and receive deep package analysis for every container component. Continue using the best open source code without worrying about manual patching or raising security red flags.
By identifying and removing unneeded components, RapidFort’s container hardening solution drastically reduces vulnerabilities. Container consolidation, detailed component visibility, and reducing patch management needs lead to faster load times and enable teams to focus on building.