CIS Controls v8. Prioritized Cyber Defense. Measured Continuously.
CIS Controls v8 Platform
18 control families. 153 safeguards. Three Implementation Groups that prioritize cyber defense from essential hygiene to advanced operations. Continuous assessment from connected infrastructure. Map CIS Controls to NIST 800-53, NIST CSF 2.0, CMMC, and every derived framework. Evidence from running systems, not annual questionnaires.
CIS Controls v8
Prioritized actions for cyber defense. Measured against your actual infrastructure.
CIS Controls v8 organizes 153 safeguards into 18 control families, prioritized across three Implementation Groups. Most organizations adopt CIS Controls as a practical starting point for cyber defense, then discover that maintaining continuous assessment against 153 safeguards requires the same infrastructure connection that larger frameworks demand. Redoubt Forge treats CIS Controls as a living assessment: safeguards scored from observed posture, evidence collected from connected systems, and cross-framework mappings computed in real time.
The Center for Internet Security (CIS) Controls version 8, published in May 2021, represent the current iteration of a community-developed, consensus-based framework for prioritized cyber defense. The framework organizes 153 individual safeguards into 18 control families, each targeting a specific category of defensive capability. Unlike regulatory frameworks that mandate compliance through contractual or legal obligation, CIS Controls are adopted voluntarily as a practical set of actions that reduce the most common attack vectors. The framework's design philosophy is fundamentally prioritized: safeguards are ordered by defensive impact, and the Implementation Group structure ensures that organizations address the highest-impact actions first regardless of their size or maturity. Version 8 reorganized the controls around activities rather than the devices that perform them, reflecting the shift toward cloud-native, hybrid, and remote work environments where the perimeter is defined by identity and data flow rather than physical network boundaries. The framework applies to organizations of every size, sector, and geography. There is no certification body and no mandatory assessment cycle. Adoption is driven by the framework's practical value as a defensive roadmap.
CIS Controls v8 structures its 153 safeguards into three nested Implementation Groups (IGs) that define a progressive adoption path. IG1 contains 56 safeguards representing essential cyber hygiene: the minimum set of defensive actions that every organization should implement regardless of resources or threat profile. IG2 adds 74 safeguards on top of IG1, totaling 130 safeguards appropriate for enterprises with dedicated IT staff managing infrastructure that stores or processes sensitive data. IG3 adds the remaining 23 safeguards, bringing the total to all 153, targeting organizations that face sophisticated adversaries and must defend high-value assets or critical infrastructure. The nesting is cumulative: IG1 is a proper subset of IG2, and IG2 is a proper subset of IG3. An organization pursuing IG2 implements all 56 IG1 safeguards plus the 74 additional IG2 safeguards. This structure eliminates the common failure mode of attempting every control simultaneously. Organizations start with the 56 safeguards that deliver the greatest defensive return per unit of effort, then expand systematically as their program matures.
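The cumulative nesting described above can be sketched in a few lines. This is an illustrative sketch, not a platform API: the counts come from CIS Controls v8, while the dictionary layout and function name are assumptions made for the example.

```python
from typing import Dict

# Safeguards each Implementation Group adds on top of the one below it
# (counts from CIS Controls v8; the data structure is illustrative).
IG_ADDITIONS: Dict[str, int] = {
    "IG1": 56,   # essential cyber hygiene
    "IG2": 74,   # added on top of IG1
    "IG3": 23,   # added on top of IG2
}

def safeguards_in_scope(target_ig: str) -> int:
    """Total safeguards in scope for a target Implementation Group.

    Because the groups are nested (IG1 is a subset of IG2, which is a
    subset of IG3), the total is the sum of the target group's additions
    and every group below it.
    """
    order = ["IG1", "IG2", "IG3"]
    idx = order.index(target_ig)
    return sum(IG_ADDITIONS[ig] for ig in order[: idx + 1])

assert safeguards_in_scope("IG1") == 56
assert safeguards_in_scope("IG2") == 130
assert safeguards_in_scope("IG3") == 153
```

An IG2 organization's 130-safeguard scope is thus exactly the 56 IG1 safeguards plus the 74 IG2 additions, which is why "pursuing IG2" never means skipping IG1 work.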
The CIS Controls trace their lineage to the SANS Top 20 Critical Security Controls, originally developed in 2008 by a consortium of government agencies, security practitioners, and private sector experts who identified the specific defensive actions that would have prevented the most common real-world attacks observed at that time. The Center for Internet Security assumed stewardship of the framework and has published successive revisions that incorporate evolving threat intelligence, new technology patterns, and community feedback from thousands of implementing organizations worldwide. CIS Controls are not government-mandated, but their adoption is widespread across industries including healthcare, financial services, education, manufacturing, and government agencies at the state and local level. Several regulatory frameworks reference CIS Controls explicitly: the NIST Cybersecurity Framework maps to CIS Controls at the subcategory level, and many insurance underwriters accept CIS Controls implementation as evidence of due diligence for cyber liability coverage. The framework serves as a practical starting point for organizations that need immediate defensive improvement and as a structured complement to larger regulatory frameworks like NIST 800-53 or ISO 27001.
Organizations adopt CIS Controls because they need a prioritized starting point. Large frameworks like NIST 800-53 contain over 1,000 controls across 20 families with three baselines. The volume is appropriate for federal systems and defense contractors, but organizations without dedicated compliance teams often stall before completing their first assessment. CIS Controls solve this onboarding problem by distilling defensive priorities into 153 actionable safeguards with clear Implementation Group guidance. The framework tells you what to do first, second, and third. It tells you which safeguards matter most given your organizational profile. It provides a structured path from essential hygiene through enterprise defense to advanced operations. This prioritization is the framework's primary value. But prioritization alone does not produce continuous defensive improvement. Organizations that adopt CIS Controls still face the same operational challenge that plagues every compliance framework: the gap between documented intent and actual infrastructure state.
The assessment gap materializes in a predictable pattern. An organization evaluates its 56 IG1 safeguards using the CIS Controls Self-Assessment Tool (CIS-CSAT) or an internal spreadsheet. Each safeguard receives a maturity score based on interviews, documentation review, and spot-checks of configurations. The assessment captures a point-in-time snapshot that reflects the environment as it existed during the evaluation period. Within weeks, infrastructure changes. New assets are deployed without updating the asset inventory (Control 1). Software is installed without approval processes (Control 2). Configurations drift from hardened baselines (Control 4). Access permissions expand through operational necessity and are never reviewed (Control 5 and Control 6). The assessment document describes an environment that no longer exists. The organization believes it has achieved IG1 compliance based on the last assessment. The actual environment has regressed across multiple safeguards. Without continuous connection between safeguard assessment and infrastructure state, the CIS Controls become another static checklist disconnected from operational reality.
The consequence of this disconnect is not abstract. Organizations that report CIS Controls maturity to boards, insurers, or partners based on stale assessments are misrepresenting their defensive posture. Insurance underwriters who accept CIS Controls implementation as evidence of due diligence rely on the accuracy of that representation. Partners who evaluate supply chain risk based on CIS Controls maturity scores trust that those scores reflect current conditions. Board members who receive quarterly security posture updates based on CIS Controls assessments make risk acceptance decisions on outdated information. When a breach occurs and the post-incident analysis reveals that the organization's actual posture had degraded well below its reported maturity level, the consequences extend beyond the breach itself. The gap between reported and actual posture becomes a liability: evidence that the organization knew what controls to implement, documented their implementation, and then failed to maintain them. CIS Controls provide the roadmap. Connecting that roadmap to running infrastructure is what transforms it from a documentation exercise into a functioning defense program.
CIS Controls v8 organizes its 153 safeguards into 18 control families, each addressing a distinct defensive category. The complete list: Control 1: Inventory and Control of Enterprise Assets. Control 2: Inventory and Control of Software Assets. Control 3: Data Protection. Control 4: Secure Configuration of Enterprise Assets and Software. Control 5: Account Management. Control 6: Access Control Management. Control 7: Continuous Vulnerability Management. Control 8: Audit Log Management. Control 9: Email and Web Browser Protections. Control 10: Malware Defenses. Control 11: Data Recovery. Control 12: Network Infrastructure Management. Control 13: Network Monitoring and Defense. Control 14: Security Awareness and Skills Training. Control 15: Service Provider Management. Control 16: Application Software Security. Control 17: Incident Response Management. Control 18: Penetration Testing. The numbering is deliberate. Controls 1 and 2 (asset and software inventory) come first because you cannot secure what you have not identified. Controls 3 through 6 address data protection, configuration, and access. Controls 7 through 11 cover operational security capabilities. Controls 12 through 18 address network defense, training, application security, incident response, and validation.
The ordering of the 18 controls reflects a foundational principle: inventory precedes protection, protection precedes detection, and detection precedes response. Control 1 (enterprise asset inventory) and Control 2 (software asset inventory) establish the baseline of what exists in the environment. Without a complete and current inventory, every subsequent control operates on incomplete information. Secure configuration (Control 4) depends on knowing which assets require hardening. Vulnerability management (Control 7) depends on knowing which software is deployed. Access control (Control 6) depends on knowing which accounts exist and which systems they access. Each control family contains between 5 and 14 individual safeguards, with each safeguard assigned a minimum Implementation Group. The safeguards within each control family progress from fundamental actions (typically IG1) through enterprise-grade implementations (IG2) to advanced defensive measures (IG3). Control 4, for example, starts with establishing and maintaining a secure configuration process (IG1, Safeguard 4.1), progresses through enforcing automatic device lockout on portable end-user devices (IG2, Safeguard 4.10), and extends to separating enterprise workspaces on mobile end-user devices (IG3, Safeguard 4.12).
Rampart maps all 153 safeguards with per-safeguard scoring across the same three dimensions applied to every framework in the platform: defense effectiveness (is the safeguard actually operating in your environment), evidence coverage (what artifacts prove it), and evidence freshness (how current is that proof). Each safeguard is displayed within its parent control family, with Implementation Group indicators showing which IGs require it. The scoring is not binary. A safeguard that is partially implemented with current evidence scores differently from one that is fully implemented with stale evidence. Rampart computes aggregate scores at every level: per-safeguard, per-control family, per-Implementation Group, and overall. The control family view shows which of the 18 categories are strongest and which require attention. The Implementation Group view shows your IG1 readiness percentage distinct from your IG2 and IG3 readiness. This layered scoring structure allows organizations to track progress at the granularity that matches their current maturity stage: an organization pursuing IG1 focuses on 56 safeguards, while an organization pursuing IG3 sees all 153 with clear indicators of which IGs are fully satisfied and which have remaining gaps.
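The layered aggregation described above can be sketched as follows. This is a minimal illustration under stated assumptions: each safeguard carries three dimension scores between 0.0 and 1.0, and aggregates are unweighted means. Rampart's actual weighting is not documented here; the sample data and function names are invented for the example.

```python
from statistics import mean

# (safeguard_id, control_family, implementation_group,
#  defense_effectiveness, evidence_coverage, evidence_freshness)
# Sample data, invented for illustration.
safeguards = [
    ("1.1", 1, "IG1", 0.9, 0.8, 1.0),
    ("1.2", 1, "IG1", 0.7, 0.6, 0.5),
    ("4.8", 4, "IG2", 0.4, 0.3, 0.2),
]

def safeguard_score(s) -> float:
    # Collapse the three dimensions into one number (unweighted mean here).
    return mean(s[3:6])

def family_score(family: int) -> float:
    # Aggregate across all safeguards in one of the 18 control families.
    return mean(safeguard_score(s) for s in safeguards if s[1] == family)

def ig_readiness(ig: str) -> float:
    # Readiness percentage for one Implementation Group's safeguards.
    return mean(safeguard_score(s) for s in safeguards if s[2] == ig)

assert round(safeguard_score(safeguards[0]), 2) == 0.9
assert round(family_score(1), 2) == 0.75
assert round(ig_readiness("IG1"), 2) == 0.75
```

The point of the layering is visible even in this toy: a strong family score (Control 1 at 0.75) can coexist with a weak safeguard elsewhere (4.8 at 0.3) that only surfaces in the IG2 view.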
Implementation Group 1 defines essential cyber hygiene through 56 safeguards that every organization should implement regardless of size, sector, or threat profile. IG1 targets organizations with limited IT resources and cybersecurity expertise: small businesses, nonprofit organizations, local government agencies, and any entity that needs foundational defenses without the overhead of a full enterprise security program. The 56 IG1 safeguards cover the actions that prevent the most common attacks: maintaining an asset inventory, managing software installations, establishing secure configurations, controlling account access, collecting audit logs, protecting against malware, and backing up data. IG1 is not a reduced or simplified version of the full framework. It is a deliberately curated subset selected because these 56 safeguards deliver the highest defensive value per unit of implementation effort. CIS research indicates that IG1 implementation mitigates the majority of the most common attack techniques cataloged in the MITRE ATT&CK framework. For many organizations, achieving full IG1 compliance represents a more meaningful security improvement than partially implementing a larger framework.
Implementation Group 2 adds 74 safeguards to the IG1 baseline, bringing the cumulative total to 130 safeguards. IG2 targets enterprises with dedicated IT staff, multiple departments handling sensitive data, and regulatory obligations that require more rigorous security controls. The additional 74 safeguards extend each control family with enterprise-grade capabilities: centralized log management with correlation and analysis (Control 8), network-based intrusion detection (Control 13), security awareness training programs with role-specific content (Control 14), formal incident response procedures with defined roles and communication plans (Control 17), and application security testing integrated into the development lifecycle (Control 16). IG2 organizations typically operate mixed infrastructure environments spanning on-premises data centers, cloud deployments, and remote endpoints. The safeguards added at IG2 address the complexity that comes with scale: more assets to inventory, more configurations to harden, more accounts to manage, more data flows to protect, and more attack surface to monitor. Implementation Group 3 adds the final 23 safeguards for organizations facing sophisticated adversaries, operating critical infrastructure, or defending high-value assets. IG3 includes penetration testing programs (Control 18), advanced network monitoring with protocol-level analysis (Control 13), application-layer filtering (Control 4), and security architecture reviews. The full 153 safeguards represent the complete CIS Controls defensive posture.
Rampart assigns your target Implementation Group based on organizational profile data collected during system registration. Artificer guides the profiling process through targeted questions: How many employees does the organization have? What types of data do your systems process? Do you have dedicated IT security staff? What regulatory requirements apply to your operations? Are you a target for nation-state or advanced persistent threat actors? The answers map to CIS's own guidance for Implementation Group selection. An organization with 50 employees, no dedicated security team, and no regulatory data handling obligations profiles as IG1. An enterprise with 500 employees, a security operations team, and contractual obligations to protect customer data profiles as IG2. A defense contractor or critical infrastructure operator profiles as IG3. Once the target IG is set, Rampart filters the assessment to show only the safeguards that apply to your target group. An IG1 organization sees 56 safeguards, not 153. This prevents the overwhelm that causes organizations to abandon the framework before they start. As the organization matures and decides to pursue a higher Implementation Group, expanding the assessment scope is a configuration change that reveals the additional safeguards with their current scoring state already computed from connected infrastructure data.
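The profile-to-IG mapping can be sketched as a small decision function in the spirit of CIS's own selection guidance. The field names and thresholds below are illustrative assumptions, not Rampart's actual rules.

```python
def select_implementation_group(profile: dict) -> str:
    """Map an organizational profile to a target Implementation Group.

    Hypothetical profile keys; a real profiling flow would collect these
    through guided questions as described in the text.
    """
    if profile.get("critical_infrastructure") or profile.get("apt_target"):
        return "IG3"  # sophisticated adversaries, high-value assets
    if profile.get("dedicated_security_staff") and profile.get("sensitive_data"):
        return "IG2"  # enterprise with staff and regulated/sensitive data
    return "IG1"      # essential cyber hygiene baseline

# The three example organizations from the text:
assert select_implementation_group({"employees": 50}) == "IG1"
assert select_implementation_group(
    {"employees": 500, "dedicated_security_staff": True, "sensitive_data": True}
) == "IG2"
assert select_implementation_group({"critical_infrastructure": True}) == "IG3"
```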
The CIS Controls Self-Assessment Tool (CIS-CSAT) methodology evaluates each safeguard across multiple dimensions of implementation maturity. For every safeguard, the assessment must determine whether the organization has a defined policy or process, whether that policy is implemented in practice, whether the implementation is automated where applicable, whether the implementation is reported and measured, and whether it is continuously improved based on metrics. This five-level maturity model transforms safeguard assessment from a binary pass/fail into a graduated evaluation that reveals both the depth and durability of each implementation. A safeguard with a defined policy but no enforcement mechanism scores differently from one with automated enforcement and continuous measurement. The maturity model also reveals which safeguards are fragile: a safeguard that depends entirely on manual process execution is one staffing change away from failure. CIS-CSAT provides the methodology, but executing that methodology against 153 safeguards requires evidence that connects policy intent to operational reality across every system in scope.
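A graduated maturity evaluation of this kind can be sketched as a gated progression. The dimension names follow the prose above; the gating rule (progression stops at the first dimension that does not hold) is an illustrative assumption about how such a model might be scored, not CIS-CSAT's exact algorithm.

```python
# Dimensions in ascending maturity order, following the prose above.
MATURITY_DIMENSIONS = [
    "policy_defined",
    "implemented",
    "automated",
    "reported_and_measured",
    "continuously_improved",
]

def maturity_level(assessment: dict) -> int:
    """Count how far a safeguard has progressed through the dimensions.

    Progression stops at the first dimension that does not hold: an
    automated control with no defined policy still scores 0, which is
    how the model surfaces fragile, undocumented implementations.
    """
    level = 0
    for dim in MATURITY_DIMENSIONS:
        if not assessment.get(dim):
            break
        level += 1
    return level

# Policy exists and is implemented, but nothing is automated or measured.
assert maturity_level({"policy_defined": True, "implemented": True}) == 2
# Automation without a defined policy does not count.
assert maturity_level({"automated": True}) == 0
```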
Executing the CIS-CSAT maturity model against 153 safeguards exposes the central challenge of CIS Controls assessment: collecting evidence that connects policy intent to operational reality at scale. For Safeguard 1.1 (Establish and Maintain Detailed Enterprise Asset Inventory), you must prove not only that an inventory process exists, but that the inventory reflects every asset currently deployed, that discrepancies between the inventory and the actual environment are identified and resolved, and that accuracy is measured over time. Multiply that across all 153 safeguards and the evidence burden becomes staggering. Manual evidence collection means your team is pulling configuration exports from cloud consoles, cross-referencing spreadsheets of deployed assets against declared inventories, screenshotting access review completions, and compiling vulnerability scan outputs into formats that map back to individual safeguards. Keeping pace with 153 safeguards across three Implementation Groups requires continuous evidence collection, not quarterly snapshots. Most organizations struggle to measure their actual IG maturity because the evidence required to distinguish between "policy defined" and "policy enforced and measured" demands infrastructure-level telemetry that manual processes cannot reliably produce.
Rampart scores every safeguard across three independent dimensions that update continuously as new evidence arrives. Defense effectiveness measures whether the safeguard's intended protection is actually functioning: are audit logs being collected (Control 8), are backups completing successfully (Control 11), are vulnerability scans running on schedule (Control 7)? Evidence coverage measures what artifacts exist to demonstrate the safeguard's implementation: configuration snapshots, scan results from Vanguard, policy documents, training records, incident response test results, access review logs. Evidence freshness measures how recently that proof was generated or verified: a configuration snapshot from yesterday carries more weight than one from six months ago. Artificer guides the assessment process for safeguards that require human judgment or organizational context. For Safeguard 14.1 (Establish and Maintain a Security Awareness Program), Artificer asks: Does your organization conduct security awareness training? How frequently? What topics are covered? Are completion records maintained? The answers combine with any uploaded training records to produce the safeguard's score. Technical safeguards receive automated scoring from infrastructure data. Process and policy safeguards receive guided scoring from Artificer's structured questions. Both feed the same three-dimensional scoring model in Rampart.
Remediation sequence matters. The Implementation Group structure already provides a macro-level prioritization: IG1 safeguards first, then IG2, then IG3. Within each Implementation Group, the control numbering provides a secondary prioritization: inventory (Controls 1-2) before protection (Controls 3-6) before operational security (Controls 7-11) before advanced defense (Controls 12-18). But effective remediation requires a third level of prioritization that accounts for the specific gaps in your environment, the dependencies between safeguards, and the resources available to close them. Some safeguards have prerequisites: you cannot implement Safeguard 4.2 (Establish and Maintain a Secure Configuration Process for Network Infrastructure) without first satisfying Safeguard 1.1 (Establish and Maintain Detailed Enterprise Asset Inventory), because you cannot configure assets you have not identified. Some safeguards enable multiple downstream improvements: implementing centralized log collection (Safeguard 8.2) simultaneously advances your readiness for network monitoring (Control 13), incident response (Control 17), and audit capabilities required by cross-mapped frameworks. Remediation is not a flat list sorted by severity. It is a dependency-aware sequence that maximizes defensive improvement per unit of effort.
Dependency chains between safeguards create natural implementation sequences that must be respected. Control 1 (asset inventory) feeds Control 2 (software inventory), which feeds Control 7 (vulnerability management): you cannot scan for vulnerabilities in software you have not inventoried on assets you have not discovered. Control 5 (account management) feeds Control 6 (access control management): you cannot enforce least-privilege access without first establishing account lifecycle processes. Control 8 (audit log management) feeds Control 13 (network monitoring and defense): network monitoring depends on log collection infrastructure being operational. These dependencies are not suggestions. They are structural requirements. An organization that attempts to implement network-based intrusion detection (Safeguard 13.3, IG2) without first establishing centralized log management (Safeguard 8.2, IG1) will deploy detection capabilities with no infrastructure to collect, store, or analyze the alerts they generate. The remediation plan must respect these dependencies, sequence work accordingly, and track prerequisite completion as a gating condition for downstream safeguards.
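The dependency chains above form a directed graph, and a valid implementation sequence is any topological ordering of it. The sketch below encodes the prerequisites named in the text and orders them with Python's standard-library topological sorter; the planner logic in the platform is assumed, not documented.

```python
from graphlib import TopologicalSorter

# safeguard -> list of prerequisite safeguards (edges from the prose)
prereqs = {
    "1.1": [],            # enterprise asset inventory: no prerequisites
    "2.1": ["1.1"],       # software inventory needs the asset inventory
    "7.1": ["2.1"],       # vulnerability mgmt needs the software inventory
    "8.2": [],            # centralized log collection
    "13.3": ["8.2"],      # network IDS needs log infrastructure first
}

order = list(TopologicalSorter(prereqs).static_order())

# Every prerequisite appears before the safeguards that depend on it.
assert order.index("1.1") < order.index("2.1") < order.index("7.1")
assert order.index("8.2") < order.index("13.3")
```

A real planner would layer cost and impact on top of this ordering, but the ordering itself is the non-negotiable part: `TopologicalSorter` raises `CycleError` if the declared dependencies ever contradict each other, which is a useful sanity check on the plan.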
Citadel's action queue ranks every open gap by posture impact: how many safeguards does closing this gap advance, and how many cross-framework controls does it satisfy simultaneously? Artificer recommends remediation paths that respect dependency chains, account for your target Implementation Group, and identify opportunities where a single infrastructure change satisfies multiple safeguards across multiple control families. Armory provides hardened infrastructure-as-code modules that satisfy CIS Controls safeguards from the first deployment. An asset discovery module that satisfies Safeguards 1.1 and 1.2 by deploying continuous infrastructure enumeration with automated inventory reconciliation. A logging module that satisfies Safeguards 8.2, 8.3, and 8.5 by deploying centralized log collection with defined retention periods, tamper-evident storage, and access controls on log data. A configuration management module that satisfies Safeguards 4.1 and 4.2 by establishing hardened baselines for compute and network resources with automated drift detection. Deploy the module, and the safeguards are satisfied by design. The infrastructure IS the evidence. As remediation actions complete, Sentinel detects the changes, collects new evidence, and Rampart re-evaluates affected safeguard scores in real time.
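Impact-based ranking of this kind can be sketched as a simple scoring pass over open gaps. The gap records and the scoring rule (one point per safeguard or cross-framework control advanced) are illustrative assumptions, not Citadel's actual queue logic; the safeguard and control IDs echo examples used elsewhere in this page.

```python
# Hypothetical open gaps, each listing what closing it would advance.
gaps = [
    {"id": "no-central-logging", "safeguards": ["8.2", "8.3"],
     "cross_framework": ["AU-2", "AU-6", "DE.CM-09"]},
    {"id": "stale-inventory", "safeguards": ["1.1"],
     "cross_framework": ["CM-8"]},
]

def posture_impact(gap: dict) -> int:
    """Score a gap by how many safeguards and mapped controls it advances."""
    return len(gap["safeguards"]) + len(gap["cross_framework"])

# Highest-impact gap first.
queue = sorted(gaps, key=posture_impact, reverse=True)
assert queue[0]["id"] == "no-central-logging"   # 5 points vs 2
```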
CIS Controls are explicitly designed for continuous assessment. The framework does not prescribe an annual review cycle or a periodic audit cadence. It prescribes ongoing defensive operations that maintain security posture as infrastructure, threats, and organizational context evolve. Safeguard 1.1 requires maintaining a detailed enterprise asset inventory; "maintaining" means the inventory reflects reality at all times, not once per quarter. Safeguard 7.1 requires establishing and maintaining a vulnerability management process; "maintaining" means vulnerabilities are identified and remediated on an ongoing basis, not during a scheduled scan window. Safeguard 8.1 requires establishing and maintaining an audit log management process; "maintaining" means logs are collected continuously, not reviewed retrospectively after an incident. The language throughout the framework uses "establish and maintain" as a recurring pattern, signaling that implementation is the beginning of the obligation and continuous operation is the substance. Organizations that treat CIS Controls assessment as a periodic exercise violate the framework's own design intent and create the assessment gap described in the problem section: a documented posture that diverges from reality between evaluation cycles.
The operational challenge of continuous CIS Controls monitoring is that infrastructure never holds still. Configuration drift is constant: a hardened SSH configuration gets overridden during a troubleshooting session, an audit log pipeline silently stops collecting after a service update, a backup schedule is disabled and no one notices until the next quarterly review. New assets appear between assessment cycles as teams deploy compute instances, containers, and managed services without updating the asset inventory required by Control 1. Software installations happen outside change management windows, creating untracked entries that Control 2 requires you to account for. Access policies accumulate exceptions that erode the least-privilege posture Control 6 demands. Each of these changes degrades specific safeguard scores, but without continuous infrastructure observation, the degradation is invisible until the next scheduled evaluation. The result is a compliance posture that looks solid on the day of assessment and deteriorates steadily afterward. Organizations relying on periodic assessment cycles discover gaps weeks or months after they occur, when remediation is more expensive and the window of exposure has already caused real risk.
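At its core, drift detection of the kind described is a comparison between a declared baseline and the currently observed state. The sketch below uses invented configuration keys mirroring the examples above (an overridden SSH setting, a silently disabled backup); real detection would operate on full configuration snapshots rather than a flat dictionary.

```python
# Hardened baseline vs. what is currently observed on a host.
baseline = {"ssh_max_auth_tries": 4, "audit_logging": "enabled",
            "backup_schedule": "daily"}
observed = {"ssh_max_auth_tries": 6, "audit_logging": "enabled",
            "backup_schedule": None}  # backup silently disabled

def detect_drift(baseline: dict, observed: dict) -> list:
    """Return the baseline settings whose observed value has diverged."""
    return [k for k, v in baseline.items() if observed.get(k) != v]

drifted = detect_drift(baseline, observed)
assert drifted == ["ssh_max_auth_tries", "backup_schedule"]
```

Run continuously, a comparison like this turns each drift event into a timestamped finding against specific safeguards, instead of a surprise at the next quarterly review.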
Vanguard scan results feed safeguard scoring continuously. Vulnerability scans map directly to Control 7 (Continuous Vulnerability Management): each discovered vulnerability is associated with the affected assets, the affected software, and the specific safeguards it impacts. Application security scans map to Control 16 (Application Software Security): static analysis findings, dependency vulnerabilities, and configuration weaknesses in application code are connected to the relevant IG2 and IG3 safeguards. Container scans map to safeguards that address secure configuration and software inventory within containerized environments. Each scan result becomes evidence that feeds the three-dimensional scoring model. A vulnerability scan that finds zero critical findings on all in-scope assets is positive evidence for Safeguard 7.4 (Perform Automated Application Patch Management). A scan that reveals unpatched critical vulnerabilities is negative evidence that reduces the safeguard's defense effectiveness score. The scoring is continuous, not episodic. Every scan run updates the affected safeguard scores, the control family aggregates, the Implementation Group readiness percentages, and the overall CIS Controls maturity profile.
CIS Benchmarks are the implementation-level companion to CIS Controls. Where CIS Controls define what to do (maintain secure configurations, manage accounts, protect data), CIS Benchmarks define exactly how to do it for specific technologies. A CIS Benchmark for a Linux distribution specifies the exact kernel parameters, file permissions, service configurations, authentication settings, and audit rules that constitute a hardened baseline for that operating system. A CIS Benchmark for a cloud platform specifies the exact IAM policies, network configurations, logging settings, encryption requirements, and monitoring configurations that constitute a secure deployment. CIS publishes over 100 Benchmarks covering operating systems (RHEL, Ubuntu, Windows, Amazon Linux), cloud platforms (AWS, Azure, GCP), containers (Docker, Kubernetes, EKS), databases (PostgreSQL, MySQL, MongoDB, Redis), web servers (Apache, Nginx, IIS), and network devices. Each Benchmark recommendation is mapped to the CIS Controls safeguards it supports, creating a direct link between the high-level defensive objective and the specific configuration change required to achieve it.
The connection between CIS Benchmarks and CIS Controls is structural, not advisory. Benchmark recommendation 5.2.4 in the CIS RHEL 9 Benchmark (Ensure SSH MaxAuthTries is set to 4 or less) maps to CIS Control 4 (Secure Configuration) and CIS Control 6 (Access Control Management). If your RHEL 9 instances pass this Benchmark check, you have evidence supporting Safeguards 4.1, 4.8, and 6.5. If they fail, those safeguards lose evidence coverage and defense effectiveness scores decrease. This mapping is granular: each individual Benchmark recommendation connects to one or more specific CIS Controls safeguards. A comprehensive Benchmark scan against your deployed infrastructure produces hundreds of individual pass/fail results, each contributing evidence to the CIS Controls assessment. The Benchmark scan does not just tell you whether your configurations are hardened. It tells you which CIS Controls safeguards your configurations support and which ones they undermine. This is the mechanism that transforms CIS Controls from a policy framework into a measurable defensive posture: Benchmark results provide the per-configuration evidence that populates per-safeguard scoring.
Vanguard scans infrastructure against CIS Benchmarks and feeds the results directly into CIS Controls scoring in Rampart. When Vanguard runs a CIS Benchmark scan against your Linux fleet, every individual recommendation result (pass, fail, or not applicable) is mapped to the CIS Controls safeguards it supports through the CIS-published mapping tables. Rampart aggregates these results: if 95% of your fleet passes all Control 4-related Benchmark checks, your Safeguard 4.1 defense effectiveness score reflects that coverage. If 3 instances fail the SSH MaxAuthTries check, Rampart shows Control 6 safeguard scores reduced for the affected systems. Sentinel detects when a previously passing Benchmark check begins failing due to configuration drift and maps that drift event to the affected CIS Controls safeguards. The feedback loop is continuous: Vanguard scans produce Benchmark results, Benchmark results feed safeguard scores in Rampart, Sentinel monitors for drift between scans, and drift events trigger re-evaluation. Organizations that implement both CIS Controls and CIS Benchmarks through the platform achieve a level of assessment precision that manual evaluation cannot match: every configuration decision on every asset contributes measurable evidence to the overarching CIS Controls posture.
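The roll-up from per-host Benchmark results to safeguard evidence can be sketched as a pass-rate aggregation. The check ID and its safeguard mapping echo the example above; the hostnames and the aggregation function are illustrative assumptions, not the platform's actual computation.

```python
# Per-host results for one Benchmark check: did the host pass
# RHEL 9 Benchmark recommendation 5.2.4 (SSH MaxAuthTries)?
results = {
    "web-01": True, "web-02": True, "web-03": False,
    "db-01": True,
}

# Benchmark check -> CIS Controls safeguards it supports (from the prose).
CHECK_TO_SAFEGUARDS = {"5.2.4": ["4.1", "4.8", "6.5"]}

def effectiveness(check_results: dict) -> float:
    """Fraction of in-scope hosts passing the check (True counts as 1)."""
    return sum(check_results.values()) / len(check_results)

score = effectiveness(results)
assert score == 0.75  # 3 of 4 hosts pass

# Each mapped safeguard inherits this fraction as evidence.
evidence = {sg: score for sg in CHECK_TO_SAFEGUARDS["5.2.4"]}
assert evidence == {"4.1": 0.75, "4.8": 0.75, "6.5": 0.75}
```

A full Benchmark scan produces hundreds of such per-check fractions, and each safeguard's score aggregates every check mapped to it.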
CIS publishes official mappings between CIS Controls v8 and both NIST 800-53 rev5 and NIST CSF 2.0. These are not approximate alignments or interpretive cross-walks. They are authoritative, CIS-maintained mapping tables that connect each of the 153 safeguards to specific NIST 800-53 controls and specific NIST CSF subcategories. Safeguard 1.1 (Establish and Maintain Detailed Enterprise Asset Inventory) maps to NIST 800-53 CM-8 (System Component Inventory) and NIST CSF ID.AM-01 (Inventories of hardware managed by the organization are maintained). Safeguard 8.2 (Collect Audit Logs) maps to NIST 800-53 AU-2 (Event Logging), AU-3 (Content of Audit Records), AU-6 (Audit Record Review, Analysis, and Reporting), and AU-12 (Audit Record Generation), plus NIST CSF DE.CM-09. These published mappings create deterministic cross-framework relationships. Work done to satisfy a CIS Controls safeguard simultaneously generates evidence for the mapped NIST 800-53 controls and NIST CSF subcategories. The investment compounds rather than requiring separate evidence collection for each framework.
The IG1 subset of 56 safeguards maps to a significant portion of the NIST Cybersecurity Framework's subcategories. An organization that achieves full IG1 implementation has addressed safeguards that map across all five NIST CSF functions: Identify, Protect, Detect, Respond, and Recover. This makes CIS Controls IG1 an effective on-ramp to NIST CSF adoption. Organizations that start with CIS Controls as their primary framework and later need to demonstrate NIST CSF alignment (for regulatory requirements, insurance applications, or partner assessments) discover that their IG1 work already covers substantial ground. The same principle applies in the other direction: organizations currently assessed against NIST CSF that adopt CIS Controls find that many safeguards are already satisfied by their existing CSF implementations. The bidirectional mapping eliminates redundant assessment effort and prevents the common failure mode of maintaining parallel compliance programs that evaluate the same defensive capabilities through different lenses without recognizing the overlap.
Beyond NIST, CIS Controls work builds readiness for CMMC, FedRAMP, SOC 2, and ISO 27001 through five mapping strategies that Rampart maintains. First, native CIS-to-NIST mappings: the CIS-published tables that connect safeguards directly to NIST 800-53 and NIST CSF. Second, NIST 800-53 derivation chain tracing: since CMMC Level 2 derives from NIST 800-171, which derives from NIST 800-53, and FedRAMP baselines are selections from NIST 800-53, CIS Controls that map to 800-53 controls automatically advance readiness for every framework in the NIST derivation chain. Third, NIST CSF 2.0 bridging: using the Cybersecurity Framework's function/category/subcategory structure as an intermediary between CIS Controls and frameworks that map to CSF but not directly to CIS. Fourth, published cross-walks from authoritative sources: AICPA for SOC 2 Trust Service Criteria, ISO for 27001 Annex A controls, and CIS for supplementary mappings. Fifth, AI-suggested mappings from Artificer for framework relationships that lack published authoritative cross-walks, always requiring human confirmation before activation. As you satisfy CIS Controls safeguards, Rampart computes your readiness percentage for every other framework in the catalog. The marginal effort to add each subsequent framework decreases because the control overlap compounds through these five mapping strategies. One security posture. Every framework computed.
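The cross-framework readiness computation can be sketched as a projection through a mapping table: satisfied CIS safeguards cover the NIST controls they map to, and readiness is the fraction of mapped controls covered. The two mapping rows below come from the prose; the coverage rule (any satisfying safeguard covers the control) and function name are illustrative assumptions.

```python
# CIS safeguard -> mapped NIST 800-53 controls (rows from the prose).
CIS_TO_80053 = {
    "1.1": ["CM-8"],
    "8.2": ["AU-2", "AU-3", "AU-6", "AU-12"],
}

def nist_readiness(satisfied: set) -> float:
    """Fraction of mapped NIST controls covered by satisfied safeguards."""
    mapped = {c for ctrls in CIS_TO_80053.values() for c in ctrls}
    covered = {c for sg, ctrls in CIS_TO_80053.items()
               if sg in satisfied for c in ctrls}
    return len(covered) / len(mapped)

assert nist_readiness({"1.1"}) == 0.2          # 1 of 5 mapped controls
assert nist_readiness({"1.1", "8.2"}) == 1.0   # all mapped controls covered
```

The compounding effect described above falls out of the shape of the table: a single safeguard like 8.2 that maps to four NIST controls moves the readiness needle four times as far as a one-to-one mapping.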
Something is being forged.
The full platform is under active development. Reach out to learn more or get early access.