CIS Benchmarks. Configuration Hardening Verified Continuously.
CIS Benchmark Overlay
Automated CIS Benchmark scanning across operating systems, cloud foundations, containers, databases, and web servers. Level 1 and Level 2 profile support. Continuous configuration assessment with drift detection. Benchmark compliance mapped to CIS Controls safeguards and NIST 800-53 controls. Immutable evidence from every scan.
CIS Benchmarks
Configuration hardening is measurable. CIS Benchmarks define how.
CIS Benchmarks provide consensus-based configuration guidance for operating systems, cloud platforms, containers, databases, and web servers. Each benchmark specifies hundreds of configuration checks organized into Level 1 and Level 2 profiles. Redoubt Forge applies these benchmarks as overlays on your base framework, scans your infrastructure against every applicable recommendation, detects configuration drift in real time, and maps results to the controls they satisfy. Configuration hardening is not a one-time activity. It is a continuous verification cycle.
CIS Benchmarks are prescriptive configuration guidelines published by the Center for Internet Security. Unlike high-level control frameworks that describe what to protect, benchmarks describe exactly how to configure specific technologies to reduce attack surface. Each benchmark is developed through a consensus process involving security practitioners, technology vendors, government agencies, and academic researchers. The result is a detailed document containing hundreds of configuration recommendations organized by technology area: account policies, audit settings, network parameters, file system permissions, service configurations, and registry values. Each recommendation includes a description of the security rationale, the specific configuration change required, audit procedures to verify compliance, and remediation steps for non-compliant systems. Benchmarks are versioned and updated as technologies evolve, new attack vectors emerge, and vendor defaults change.

Every recommendation is assigned to one of two profiles. Level 1 recommendations are practical hardening measures that can be applied broadly with minimal operational impact. They represent the baseline configuration posture that every organization should achieve. Level 2 recommendations provide defense in depth and are intended for environments where security requirements justify the potential operational impact.
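The anatomy of a single recommendation can be sketched as a small record. This is a hypothetical model for illustration only; the field names are not from any official CIS schema.

```python
from dataclasses import dataclass

# Illustrative model of one CIS Benchmark recommendation.
# Field names are hypothetical, not an official CIS data format.
@dataclass(frozen=True)
class Recommendation:
    rec_id: str           # e.g. "5.2.8" within the benchmark
    title: str
    profile: int          # 1 = Level 1, 2 = Level 2
    rationale: str        # why the setting matters
    expected_value: str   # the compliant configuration value
    audit_procedure: str  # how to verify the setting
    remediation: str      # how to fix a non-compliant system

rec = Recommendation(
    rec_id="5.2.8",
    title="Ensure SSH root login is disabled",
    profile=1,
    rationale="Direct root login bypasses individual accountability.",
    expected_value="PermitRootLogin no",
    audit_procedure="sshd -T | grep permitrootlogin",
    remediation="Set 'PermitRootLogin no' in /etc/ssh/sshd_config",
)
```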
The relationship between CIS Benchmarks and CIS Controls is structural and intentional. CIS Controls define the prioritized set of safeguards that organizations should implement to defend against the most common attack patterns. CIS Benchmarks provide the specific configuration guidance that implements those safeguards on real technology. CIS Control 4 (Secure Configuration of Enterprise Assets and Software) explicitly calls for configuration standards based on industry-recognized hardening guides. CIS Benchmarks are the hardening guides that Control 4 references. When an organization applies the RHEL 9 benchmark's recommendation to disable unnecessary services, that configuration change directly implements Safeguard 4.8 (Uninstall or Disable Unnecessary Services on Enterprise Assets and Software). When an organization applies the AWS Foundations benchmark's recommendation to enable CloudTrail in all regions, that configuration change implements Safeguard 8.2 (Collect Audit Logs). The mapping is explicit: each benchmark recommendation references the CIS Controls safeguards it supports. This traceability means that benchmark compliance is not an isolated activity. It is a measurable, auditable implementation of the controls your base framework requires.
CIS Benchmarks are adopted globally by commercial enterprises, government agencies, defense contractors, healthcare organizations, financial institutions, and critical infrastructure operators. The Department of Defense references CIS Benchmarks alongside DISA STIGs as acceptable configuration baselines for certain system categories. FedRAMP authorization packages frequently cite CIS Benchmark compliance as evidence for configuration management controls. PCI DSS explicitly requires hardening standards for all system components, and CIS Benchmarks are the most commonly referenced source for those standards. HIPAA Security Rule administrative and technical safeguards for workstation and device security map directly to CIS Benchmark recommendations for operating system and database configurations. The breadth of CIS Benchmark coverage across technology categories (operating systems, cloud platforms, containers, databases, web servers, network devices, mobile platforms, and desktop software) means that a single organization's infrastructure typically falls under multiple benchmarks simultaneously. Managing compliance across all applicable benchmarks requires automated scanning, centralized evidence collection, and continuous monitoring. Manual verification at the scale these benchmarks demand is not sustainable.
CIS Benchmarks function as configuration overlays within Redoubt Forge. An overlay does not replace your base framework. It modifies, extends, or specifies parameter values for the controls your base framework already requires. Consider NIST 800-53 control CM-6 (Configuration Settings): the control requires that organizations establish and enforce security configuration settings for information technology products. The control does not specify what those settings should be. CIS Benchmarks supply the specific parameter values. The RHEL 9 benchmark specifies that password minimum length must be 14 characters, that SSH root login must be disabled, that audit log storage must be configured to prevent data loss, and that specific kernel parameters must be set for network security. Each of these specifications fills a parameter slot in CM-6 that the base framework intentionally leaves to the implementing organization. The overlay relationship is compositional: your base framework defines the requirement, and the CIS Benchmark overlay defines the implementation detail. This structure allows multiple overlays to operate on the same base framework simultaneously without conflict.
The composition engine in Redoubt Forge applies CIS Benchmark requirements alongside other overlay types on the same base framework assessment. An organization pursuing CMMC Level 2 with DISA STIGs and CIS Benchmarks has three layers: the CMMC practices define the security requirements, the STIGs provide DoD-specific configuration requirements for applicable technologies, and the CIS Benchmarks provide additional configuration coverage for technologies that STIGs do not address or where CIS provides complementary guidance. The composition engine resolves overlapping requirements by applying the most restrictive parameter value. If a STIG requires a minimum password length of 15 characters and the CIS Benchmark requires 14, the platform applies 15 and satisfies both simultaneously. If a CIS Benchmark covers a technology area that no STIG addresses (such as a cloud platform foundation or a NoSQL database), the benchmark stands alone as the configuration specification for that technology. The engine tracks which overlay satisfies which base framework control, ensuring full traceability from the specific configuration setting through the overlay to the base framework requirement it supports.
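The most-restrictive resolution rule above can be sketched in a few lines. The overlay names and the password-length values come from the example in the text; the resolver function itself is illustrative, not the platform's implementation.

```python
# Minimal sketch of most-restrictive parameter resolution across overlays.
# For password length, the higher value is more restrictive; for a setting
# like session timeout, the caller would pass restrictive=min instead.

def resolve_parameter(values: dict, restrictive=max):
    """Pick the one value that satisfies every overlay simultaneously."""
    return restrictive(values.values())

overlay_values = {
    "DISA STIG (RHEL 9)": 15,  # STIG minimum password length
    "CIS RHEL 9 L1": 14,       # CIS Benchmark minimum password length
}

effective = resolve_parameter(overlay_values)
# Applying 15 satisfies both the STIG and the CIS Benchmark at once.
```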
Rampart tracks Level 1 and Level 2 profile compliance independently for each applicable CIS Benchmark. This distinction matters because the two profiles serve different purposes and carry different operational implications. Level 1 is the expected minimum for every organization. Achieving full Level 1 compliance across all applicable benchmarks demonstrates that the organization has applied fundamental configuration hardening without introducing operational constraints. Level 2 compliance demonstrates defense in depth and is appropriate for systems processing sensitive data, operating in regulated environments, or exposed to elevated threat levels. Rampart displays compliance posture per benchmark, per profile, and per technology category. An organization might achieve Level 2 compliance on its RHEL servers and Level 1 compliance on its database tier. Rampart surfaces this distinction rather than collapsing it into a single aggregate number. Assessors and auditors can examine profile-level compliance for each technology, drill into specific recommendations that are not met, and trace each recommendation to the base framework control it supports.
Operating system benchmarks form the foundation of CIS configuration hardening. The attack surface of an unhardened operating system is substantial: default service configurations, permissive file permissions, disabled audit logging, weak authentication parameters, and unnecessary network services all present opportunities for exploitation. CIS Benchmarks for RHEL (versions 7, 8, and 9), Ubuntu (20.04 and 22.04), Amazon Linux (2 and 2023), and Windows (Server 2016, 2019, 2022, Windows 10, and Windows 11) each contain hundreds of configuration recommendations organized into categories. Account policies govern password complexity, lockout thresholds, session timeout values, and privilege escalation controls. Audit policies define which events are logged, where logs are stored, how long they are retained, and what happens when storage reaches capacity. Network configuration covers firewall rules, IP forwarding, ICMP redirect handling, TCP parameter tuning, and wireless interface controls. File system permissions specify ownership and access rights for critical system files, configuration directories, log directories, and temporary storage locations.
Each OS benchmark recommendation includes a scored designation that indicates whether the recommendation contributes to the benchmark compliance percentage. Scored recommendations are the measurable core: they have a definitive pass or fail state based on a specific configuration value. Unscored recommendations provide additional guidance that may require organizational judgment to implement. The volume of scored recommendations in a single OS benchmark is significant. The RHEL 9 benchmark contains over 250 scored recommendations. The Windows Server 2022 benchmark contains over 300. Verifying compliance against these recommendations manually requires examining individual configuration files, registry values, group policy settings, audit policies, and service states on every system in the inventory. Across an environment with dozens or hundreds of servers running multiple operating system versions, manual verification is not feasible on a continuous basis. Point-in-time assessments capture the configuration state on the day of the check but provide no visibility into drift that occurs between assessments. A server that passes every CIS recommendation on Monday can drift out of compliance by Wednesday when an administrator modifies a configuration to troubleshoot an application issue.
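The pass/fail core of scored recommendations can be illustrated with a tiny evaluator. The check keys and expected values mirror the RHEL examples in the text; the structure is a hypothetical sketch, not a scanner implementation.

```python
# Illustrative evaluation of scored recommendations against an observed
# configuration snapshot. Rec IDs and settings echo the RHEL 9 examples.
SCORED_CHECKS = {
    "5.2.8": ("permitrootlogin", "no"),  # SSH root login disabled
    "5.4.1": ("pass_min_len", "14"),     # password minimum length
}

def evaluate(observed: dict) -> dict:
    """Produce a pass/fail result with observed and expected values."""
    results = {}
    for rec_id, (key, expected) in SCORED_CHECKS.items():
        actual = observed.get(key)
        results[rec_id] = {
            "expected": expected,
            "observed": actual,
            "status": "pass" if actual == expected else "fail",
        }
    return results

snapshot = {"permitrootlogin": "no", "pass_min_len": "12"}
results = evaluate(snapshot)
# Root login passes; minimum length fails (12 observed, 14 expected).
```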
Vanguard executes CIS Benchmark scans against operating systems with full Level 1 and Level 2 profile support. Scans evaluate every scored recommendation in the applicable benchmark, producing a pass or fail result for each with the specific configuration value observed and the expected value defined by the benchmark. Failed recommendations include the exact remediation steps required to bring the system into compliance. Sentinel monitors operating system configurations continuously between scan cycles. When a configuration file is modified, a service state changes, a user account is created or elevated, or a firewall rule is altered, Sentinel evaluates the change against the applicable CIS Benchmark recommendations and flags any drift from the compliant baseline. Drift detection operates in real time, not on the next scheduled scan. The combination of periodic full scans from Vanguard and continuous drift monitoring from Sentinel ensures that OS benchmark compliance is maintained persistently, with every deviation detected, documented, and routed to Citadel's action queue for remediation.
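The difference between scheduled scanning and real-time drift detection is that each change is evaluated as it happens, against the compliant baseline. A minimal sketch, assuming a hypothetical change-event shape:

```python
# Sketch of real-time drift detection: a single configuration change is
# checked against the compliant baseline immediately, rather than waiting
# for the next scheduled scan. Event fields are hypothetical.
BASELINE = {"permitrootlogin": "no", "pass_min_len": "14"}

def detect_drift(change_event: dict):
    """Return a drift finding if the new value leaves the baseline."""
    key, new_value = change_event["key"], change_event["new_value"]
    expected = BASELINE.get(key)
    if expected is not None and new_value != expected:
        return {"key": key, "expected": expected, "observed": new_value}
    return None  # change is still compliant, or not a tracked setting

finding = detect_drift({"key": "permitrootlogin", "new_value": "yes"})
# A compliant change produces no finding:
ok = detect_drift({"key": "pass_min_len", "new_value": "14"})
```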
Cloud foundation benchmarks address the account-level and service-level configurations that define the security posture of an entire cloud deployment. Unlike OS benchmarks that target individual systems, cloud foundation benchmarks evaluate the configuration of the cloud platform itself: identity and access management policies, logging and monitoring services, network architecture, storage configurations, and database service settings. The AWS Foundations Benchmark covers IAM password policies, root account usage, CloudTrail configuration across all regions, VPC flow logging, S3 bucket public access settings, security group rules, KMS key rotation, and dozens of additional account-level controls. The Azure Foundations Benchmark covers Azure Active Directory configuration, diagnostic logging, network security group rules, storage account encryption, Key Vault configuration, and Azure Security Center settings. The GCP Foundations Benchmark covers Cloud IAM policies, Cloud Audit Logs configuration, VPC network settings, Cloud Storage bucket permissions, and BigQuery dataset access controls. Each foundation benchmark contains between 50 and 150 recommendations organized by cloud service category.
Cloud configuration drift occurs at a pace that exceeds traditional assessment cycles. Cloud environments change constantly: new resources are provisioned through infrastructure as code pipelines, manual console changes are made during incident response, service defaults are updated by the cloud provider, and organizational growth introduces new accounts, subscriptions, or projects that may not inherit the hardened baseline. A single misconfigured S3 bucket with public access enabled, a CloudTrail logging gap in one region, an overly permissive IAM policy attached to a service role, or a security group rule that allows unrestricted inbound access can create a compliance gap that affects multiple base framework controls simultaneously. Cloud foundation benchmark recommendations are interconnected: an IAM recommendation affects access control, a logging recommendation affects audit and accountability, a network recommendation affects system and communications protection. A single cloud misconfiguration can cascade across multiple control families in the base framework, creating compliance gaps that are disproportionate to the apparent simplicity of the configuration change.
Sentinel monitors cloud configurations continuously through native API integration with AWS, Azure, and GCP. When a cloud resource configuration changes, Sentinel evaluates the new state against every applicable CIS Benchmark recommendation within seconds. If CloudTrail logging is disabled in a region, Sentinel flags the drift against the AWS Foundations Benchmark and maps the impact to the base framework controls that depend on comprehensive audit logging. If a storage bucket's public access block is removed, Sentinel maps the drift to both the cloud foundation benchmark recommendation and the data protection controls in the base framework. Garrison maintains a live inventory of every cloud resource across all connected accounts, subscriptions, and projects, ensuring that new resources are evaluated against applicable benchmarks from the moment they appear. The combination of Sentinel's continuous monitoring and Garrison's resource inventory means that cloud foundation benchmark compliance is not a periodic assessment. It is a persistent state that the platform maintains, with every deviation detected, documented with the exact configuration change that caused the drift, and routed for remediation through Citadel.
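The fan-out from one cloud misconfiguration to several base-framework controls can be sketched as a rule table. The rule IDs and the control mappings shown here are illustrative examples, not the platform's actual mapping data.

```python
# Illustrative cloud-foundation rules, each fanning out to the base
# framework controls that depend on it. Mappings are examples only.
RULES = {
    "cloudtrail_all_regions": {
        "check": lambda cfg: cfg.get("is_multi_region_trail") is True,
        "controls": ["AU-2", "AU-12", "CM-6"],
    },
    "s3_public_access_block": {
        "check": lambda cfg: cfg.get("block_public_acls") is True,
        "controls": ["AC-3", "SC-7"],
    },
}

def impacted_controls(rule_id: str, resource_config: dict) -> list:
    """Return the controls affected when a rule's check fails."""
    rule = RULES[rule_id]
    return [] if rule["check"](resource_config) else rule["controls"]

# A trail narrowed to one region affects three controls at once.
hit = impacted_controls("cloudtrail_all_regions",
                        {"is_multi_region_trail": False})
```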
Container environments introduce a distinct configuration hardening challenge. The attack surface spans three layers: the container runtime (Docker Engine configuration, daemon settings, storage drivers, network modes, and logging configuration), the orchestration platform (Kubernetes API server settings, kubelet configuration, etcd encryption, RBAC policies, network policies, pod security standards, and admission controllers), and the container images themselves (base image selection, package minimization, user privileges, exposed ports, and embedded secrets). The CIS Docker Benchmark evaluates the Docker daemon configuration, container runtime parameters, Docker security operations, and Docker Swarm configuration if applicable. The CIS Kubernetes Benchmark evaluates control plane components (API server, controller manager, scheduler, etcd), worker node components (kubelet, kube-proxy), policies (RBAC, pod security, network policies), and managed service configurations. The CIS EKS Benchmark extends Kubernetes guidance with AWS-specific controls for EKS cluster configuration, node group settings, and IAM integration. Each benchmark contains recommendations that are specific to the technology layer and version.
Container configuration is inherently dynamic. Pods are created and destroyed in seconds. Deployments scale horizontally based on load. Node pools are autoscaled. Configuration changes propagate through controllers and operators. A Kubernetes admission controller that enforces pod security standards can be disabled by a single API call, removing a security control that affects every workload in the cluster. A Docker daemon configuration change that disables content trust or enables inter-container communication can weaken isolation across the entire runtime environment. Namespace-level network policies that restrict lateral movement between services can be deleted accidentally during a deployment update. The ephemerality and velocity of container environments make point-in-time benchmark assessments insufficient. A cluster that passes every CIS Kubernetes Benchmark recommendation during a scheduled scan may have a non-compliant admission controller configuration ten minutes later when a Helm chart deployment modifies the control plane settings. Container benchmark compliance requires continuous verification at the pace of the container lifecycle.
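A representative container-layer check is verifying that every container in a pod spec defines resource limits, in line with the CIS Kubernetes Benchmark's resource-management guidance. The pod dictionary follows the Kubernetes pod-spec shape; the checker itself is a sketch.

```python
# Sketch of a resource-limits check over a Kubernetes pod spec.
# The spec layout mirrors the Kubernetes API; the checker is illustrative.

def containers_missing_limits(pod_spec: dict) -> list:
    """Return names of containers that define no resource limits."""
    missing = []
    for container in pod_spec.get("containers", []):
        limits = container.get("resources", {}).get("limits")
        if not limits:
            missing.append(container["name"])
    return missing

pod = {"containers": [
    {"name": "app",
     "resources": {"limits": {"cpu": "500m", "memory": "256Mi"}}},
    {"name": "sidecar", "resources": {}},
]}
violations = containers_missing_limits(pod)  # the sidecar has no limits
```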
Vanguard scans container environments against all applicable CIS Benchmarks with coverage across Docker daemon configuration, Kubernetes cluster components, and EKS-specific controls. Scans evaluate the runtime configuration of each layer independently: Docker daemon settings on every node, Kubernetes API server and kubelet parameters on every control plane and worker node, and EKS cluster-level configurations through the AWS API. Vanguard identifies non-compliant configurations with the specific parameter value observed, the benchmark recommendation it violates, and the remediation action required. Sentinel monitors container orchestration events continuously. When a Kubernetes resource is created, modified, or deleted, Sentinel evaluates the change against applicable benchmark recommendations. If a deployment is created without resource limits, Sentinel flags the violation against the CIS Kubernetes Benchmark recommendation for resource management. If a pod security admission controller is modified, Sentinel evaluates the impact across all affected namespaces. Evidence from container scans and continuous monitoring feeds directly into Rampart for compliance scoring against the base framework controls that container hardening supports.
Database and web server configurations are high-value targets because they sit at the intersection of data access and network exposure. Database benchmarks cover authentication enforcement, privilege management, encryption at rest and in transit, audit logging, network access restrictions, and storage engine security parameters. The PostgreSQL benchmark evaluates pg_hba.conf authentication rules, SSL configuration, role privileges, logging parameters, and connection limits. The MySQL benchmark evaluates authentication plugins, privilege tables, SSL requirements, general and slow query logging, and file system permissions on data directories. The MongoDB benchmark evaluates authentication mechanisms, role-based access control, transport encryption, audit logging, and network binding configuration. The Redis benchmark evaluates authentication requirements, network binding, TLS configuration, command renaming for dangerous operations, and persistence configuration. Each database benchmark addresses the specific security characteristics of the technology while mapping to common control objectives: access control, audit and accountability, data protection, and system integrity.
Web server benchmarks address the configuration of the services that handle inbound network traffic and serve content to users and systems. The Apache benchmark evaluates module loading, directory permissions, HTTP header security (X-Frame-Options, Content-Security-Policy, Strict-Transport-Security), SSL/TLS configuration, request limiting, and logging configuration. The Nginx benchmark evaluates worker process permissions, buffer overflow protections, header security, SSL/TLS cipher suite selection, rate limiting, and access logging. The IIS benchmark evaluates application pool identity configuration, request filtering, logging, SSL/TLS settings, authentication methods, and directory browsing restrictions. Web server misconfigurations are among the most commonly exploited entry points: a missing security header, a weak cipher suite, verbose error messages that reveal internal paths, or directory listing enabled on a content directory. Each of these misconfigurations maps to specific CIS Benchmark recommendations with defined pass/fail criteria.
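A header check like the ones named above reduces to comparing served response headers against a required set. The required set below is illustrative; real benchmarks also specify expected values per header.

```python
# Minimal sketch of a security-header check on a served HTTP response.
# The required set is illustrative; benchmarks define per-header values.
REQUIRED_HEADERS = {
    "X-Frame-Options",
    "Content-Security-Policy",
    "Strict-Transport-Security",
}

def missing_headers(response_headers: dict) -> set:
    """Return required headers absent from the response (case-insensitive)."""
    present = {name.title() for name in response_headers}
    return {h for h in REQUIRED_HEADERS if h not in present}

observed = {
    "strict-transport-security": "max-age=31536000",
    "x-frame-options": "DENY",
}
gaps = missing_headers(observed)  # Content-Security-Policy is missing
```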
Vanguard scans database and web server configurations against the applicable CIS Benchmark for each technology and version. Database scans connect to the database instance and evaluate configuration parameters, authentication rules, privilege assignments, encryption status, and logging settings against every scored recommendation in the benchmark. Web server scans evaluate the running configuration files, loaded modules, SSL/TLS certificates and cipher suites, security headers on served responses, and file system permissions on configuration and content directories. Each scan produces a detailed result set: every recommendation evaluated, every pass and fail recorded with the observed and expected values, and every failed recommendation linked to the specific remediation action required. Sentinel collects evidence from database and web server configurations continuously, detecting changes to authentication rules, privilege grants, SSL certificate expirations, and configuration file modifications. Results feed into Rampart where database and web server benchmark compliance is scored alongside OS, cloud, and container benchmark results for a unified view of configuration hardening posture.
CIS Benchmark scanning evaluates every scored recommendation in an applicable benchmark against the actual configuration of a target system. Each scan must support both Level 1 and Level 2 profiles across every technology category: operating systems, cloud foundations, containers, databases, and web servers. A thorough scan produces structured results that include the recommendation identifier, the profile level, the assessed configuration value, the expected value per the benchmark specification, and a pass or fail determination. Scan scheduling matters: daily scans suit high-change environments, weekly scans work for stable infrastructure, and custom schedules align with organizational change windows. Each scan execution should be recorded as an immutable event with a timestamp, the benchmark version evaluated, the profile selected, the target system identifier, and the complete result set. Versioned scan results enable comparison between consecutive scans, identifying newly introduced non-compliance, recently remediated findings, and persistent gaps that remain open across multiple cycles.
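The scan-record and scan-comparison ideas above can be sketched as follows. Field names are hypothetical; the diff shows how consecutive scans surface new failures, remediations, and persistent gaps.

```python
# Sketch of an immutable scan record plus a diff between consecutive
# scans. Field names are illustrative, not a platform schema.
from datetime import datetime, timezone

def scan_record(benchmark, version, profile, target, results):
    """One scan execution: metadata plus the full per-rec result set."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "benchmark": benchmark, "version": version,
        "profile": profile, "target": target,
        "results": dict(results),  # rec_id -> "pass" | "fail"
    }

def diff_scans(previous, current):
    """Classify failures as newly introduced, remediated, or persistent."""
    prev_fail = {r for r, s in previous["results"].items() if s == "fail"}
    curr_fail = {r for r, s in current["results"].items() if s == "fail"}
    return {"new": curr_fail - prev_fail,
            "remediated": prev_fail - curr_fail,
            "persistent": prev_fail & curr_fail}

monday = scan_record("CIS RHEL 9", "2.0.0", "L1", "web-01",
                     {"1.1.1": "pass", "5.2.8": "fail", "5.4.1": "fail"})
friday = scan_record("CIS RHEL 9", "2.0.0", "L1", "web-01",
                     {"1.1.1": "fail", "5.2.8": "pass", "5.4.1": "fail"})
delta = diff_scans(monday, friday)
```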
The gap between scheduled scans is where CIS Benchmark compliance breaks down. A configuration change introduced at 2:00 AM on a Tuesday may violate a benchmark recommendation, but if the next scan runs on Friday, the organization operates in an unknown compliance state for three days. Manual configuration monitoring cannot keep pace with the volume of changes in modern infrastructure. API-driven provisioning, infrastructure-as-code deployments, and automated scaling events all modify system configurations continuously. Without real-time evaluation of these changes against applicable benchmark recommendations, organizations accumulate compliance drift that only surfaces during the next scan cycle. Evidence provenance is another challenge: every piece of evidence must carry the source system, collection timestamp, collection method, the raw configuration value observed, and the benchmark recommendation it applies to. Assessors and auditors need to verify that evidence presented during a compliance review accurately represents the system state at the time it was collected. Evidence without integrity verification and chain of custody metadata fails to meet this standard.
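Provenance and integrity can be sketched by binding each evidence record to a SHA-256 digest of its own contents, so any later modification is detectable. Field names are hypothetical.

```python
# Sketch of evidence provenance: each record carries source, method,
# timestamp, and raw value, sealed with a SHA-256 digest.
import hashlib
import json

def evidence_record(system, method, recommendation, raw_value, timestamp):
    body = {"system": system, "method": method,
            "recommendation": recommendation,
            "raw_value": raw_value, "collected_at": timestamp}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "sha256": digest}

def verify(record) -> bool:
    """Recompute the digest and compare; False means tampering."""
    body = {k: v for k, v in record.items() if k != "sha256"}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return digest == record["sha256"]

ev = evidence_record("web-01", "sshd -T", "5.2.8",
                     "permitrootlogin no", "2025-01-07T02:00:00Z")
tampered = {**ev, "raw_value": "permitrootlogin yes"}
# verify(ev) holds; verify(tampered) does not.
```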
Scan results and continuous evidence from Vanguard and Sentinel feed directly into Rampart for compliance scoring. Rampart maps each CIS Benchmark recommendation to the base framework controls it supports, translating configuration-level results into control-level compliance status. A failed recommendation in the RHEL 9 benchmark maps to specific NIST 800-53 controls (CM-6, CM-7, AC-3, AU-2, and others depending on the recommendation category). When that recommendation is remediated and the next scan or Sentinel detection confirms compliance, Rampart recalculates the affected control scores immediately. The scoring engine distinguishes between Level 1 and Level 2 findings, allowing organizations to prioritize Level 1 compliance as the baseline and track Level 2 progress separately for environments that require defense in depth. Artificer generates narratives from scan results that explain how benchmark compliance satisfies specific base framework controls, providing the implementation detail that assessors require when reviewing evidence packages.
CIS Benchmarks do not exist in isolation. They implement CIS Controls safeguards, which map to NIST 800-53 controls through published, auditable cross-references. The traceability chain is explicit. A CIS Benchmark recommendation to configure minimum password length maps to CIS Controls Safeguard 5.2 (Use Unique Passwords). Safeguard 5.2 maps to NIST 800-53 control IA-5 (Authenticator Management). IA-5 is part of the CMMC Level 2 practice set through NIST 800-171. IA-5 is required in FedRAMP Moderate and High baselines. IA-5 maps to SOC 2 Common Criteria CC6.1. A single benchmark recommendation, when traced through the derivation chain, advances compliance across every framework that requires authenticator management. This structural relationship means that CIS Benchmark compliance is not a standalone activity. Every recommendation that passes provides evidence for one or more controls in the base framework. Every recommendation that fails represents a measurable gap in the base framework compliance posture. The benchmark is the implementation specification. The base framework is the requirement. The mapping between them is deterministic.
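The derivation chain described above can be walked mechanically. The mappings in this sketch are the ones named in the text (password length → Safeguard 5.2 → IA-5 → derived frameworks); the walk function itself is illustrative.

```python
# Sketch of the traceability chain from a benchmark recommendation to
# every framework that requires the mapped control. Data from the text.
CHAIN = {
    "CIS RHEL 9: password minimum length": "CIS Safeguard 5.2",
    "CIS Safeguard 5.2": "NIST 800-53 IA-5",
}
FRAMEWORKS = {
    "NIST 800-53 IA-5": ["CMMC Level 2 (via NIST 800-171)",
                         "FedRAMP Moderate/High",
                         "SOC 2 CC6.1"],
}

def trace(recommendation: str) -> list:
    """Follow the derivation chain and append the derived frameworks."""
    path, node = [recommendation], recommendation
    while node in CHAIN:
        node = CHAIN[node]
        path.append(node)
    return path + FRAMEWORKS.get(node, [])

path = trace("CIS RHEL 9: password minimum length")
# One passing recommendation advances every framework on the path.
```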
The practical impact of this relationship is substantial for organizations pursuing multiple compliance objectives. An organization preparing for CMMC Level 2 assessment that also maintains a FedRAMP authorization and undergoes annual SOC 2 audits faces overlapping configuration requirements across all three frameworks. CMMC practice CM.L2-3.4.2 (Establish and enforce security configuration settings) requires configuration baselines. FedRAMP control CM-6 requires the same with FedRAMP-specific parameters. SOC 2 CC8.1 requires change management processes that include configuration standards. CIS Benchmark compliance provides the specific configuration evidence that satisfies all three requirements simultaneously. The benchmark scan result proving that an RHEL server meets every Level 1 recommendation serves as evidence for CM.L2-3.4.2 in the CMMC assessment, CM-6 in the FedRAMP annual assessment, and CC8.1 in the SOC 2 audit. Without the benchmark overlay, the organization must produce separate configuration evidence for each framework, often using different formats, different collection methods, and different verification criteria. With the benchmark overlay, one scan produces one evidence set that maps to every applicable control across every active framework.
Rampart's cross-reference engine resolves the full derivation chain from each CIS Benchmark recommendation through CIS Controls safeguards to every mapped NIST 800-53 control, and from there to every derived framework in the organization's compliance portfolio. When a benchmark scan completes, Rampart does not just update the benchmark compliance score. It recalculates the compliance posture for every base framework control that the benchmark recommendations support. The cross-reference is bidirectional: from the framework control view, assessors can drill down to see which CIS Benchmark recommendations provide evidence for that control. From the benchmark view, operators can see which framework controls benefit from remediating a specific failed recommendation. Artificer uses this cross-reference data to perform gap analysis that identifies where benchmark remediation delivers the greatest cross-framework impact. A single failed recommendation that affects controls in three frameworks is prioritized above a failed recommendation that affects one control in one framework. This prioritization ensures that remediation effort produces maximum compliance advancement across the entire portfolio, not just the benchmark score in isolation.
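The prioritization rule at the end of this section reduces to sorting failed recommendations by total cross-framework impact. The findings and mappings below are hypothetical examples.

```python
# Illustrative prioritization of failed recommendations by the number of
# framework controls each one affects. Data is hypothetical.
failed = {
    "rec-A": {"CMMC": ["CM.L2-3.4.2"],
              "FedRAMP": ["CM-6"],
              "SOC 2": ["CC8.1"]},
    "rec-B": {"FedRAMP": ["AC-3"]},
}

def impact(mapping: dict) -> int:
    """Total number of framework controls a finding affects."""
    return sum(len(controls) for controls in mapping.values())

queue = sorted(failed, key=lambda r: impact(failed[r]), reverse=True)
# rec-A touches controls in three frameworks, so it is remediated first.
```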
Something is being forged.
The full platform is under active development. Reach out to learn more or get early access.