CMMC Level 2 Compliance. Forged from Security Posture.

CMMC Compliance Platform

110 practices derived from NIST 800-171 rev2. Continuous evidence collection from connected infrastructure. SPRS score computed from live assessment data. Immutable compliance proofs for your C3PAO. Level 1 self-assessment and Level 3 DIBCAC assessment supported. CMMC Phase 2 enforcement begins November 2026.

Security posture generates compliance proofs. Not the other way around.

CMMC Level 2 requires 110 practices derived from NIST 800-171 rev2, assessed by a Certified Third-Party Assessment Organization. Most organizations pursue certification backward: spreadsheets first, evidence collection second, infrastructure alignment last. Redoubt Forge inverts that sequence. Start with actual defenses; the platform observes your security posture, maps it to every CMMC practice, computes your SPRS score from live data, and generates immutable compliance proofs for your C3PAO.

01
What Is CMMC
The Department of Defense's Verification Framework for the Defense Industrial Base.

The Cybersecurity Maturity Model Certification (CMMC) exists because self-attestation failed. Under DFARS 252.204-7012, defense contractors were required to implement NIST 800-171 and report their compliance status. Many reported compliance without implementing the required security controls. The Department of Defense had no mechanism to verify those claims, and the gap between reported and actual security posture across the Defense Industrial Base became a national security concern. CMMC closes that gap by adding third-party verification. The program establishes three certification levels, each building on the previous. Level 1 covers 15 basic cyber hygiene practices for protecting Federal Contract Information (FCI), validated through annual self-assessment with a senior official affirmation. Level 2 maps directly to all 110 security requirements from NIST 800-171 rev2, required for any system that processes, stores, or transmits Controlled Unclassified Information (CUI), and validated by a C3PAO accredited through the Cyber AB (formerly the CMMC Accreditation Body). Level 3 adds 24 enhanced security requirements derived from NIST 800-172, assessed by the Defense Industrial Base Cybersecurity Assessment Center (DIBCAC) for the most sensitive programs involving critical national security information.

Phase 2 enforcement begins November 2026. At that point, DoD contracts will include CMMC certification as a condition of award. Contracts involving only FCI will require Level 1. Contracts involving CUI will require Level 2 or Level 3 depending on program sensitivity. The rulemaking that governs this timeline runs on two tracks: 32 CFR part 170, the CMMC program rule published in October 2024 that defines the framework itself, and 48 CFR, the DFARS (Defense Federal Acquisition Regulation Supplement) acquisition rule that writes CMMC into the contracting process, meaning contracting officers will specify the required CMMC level in the solicitation. Organizations that lack the required certification at the time of contract award will be ineligible. There is no grace period. There is no provisional status that satisfies a contract requirement. Certification must be achieved before proposal submission for contracts that require it.

The structural relationships between CMMC and the broader NIST framework ecosystem are direct and consequential. CMMC Level 2 IS NIST 800-171 rev2. The 110 practices in Level 2 are not "based on" or "aligned with" NIST 800-171; they are the same 110 security requirements, reorganized into CMMC's domain and practice structure. NIST 800-171 itself derives from the NIST 800-53 Moderate baseline, which means every CMMC practice traces back to a specific 800-53 control. Organizations must report their implementation status to the Supplier Performance Risk System (SPRS) as a single score between -203 and 110, computed under the DoD Assessment Methodology. Full implementation scores 110. Each unimplemented practice carries a weighted deduction based on the security significance of the gap. Not all practices carry equal weight: failures in access control and system integrity carry larger deductions than failures in awareness and training. This score is posted to the SPRS portal, referenced in source selection, and increasingly treated as a gating criterion for contract award. Reporting an inaccurate SPRS score is not a compliance gap; it is a False Claims Act exposure. The Department of Justice has made cybersecurity fraud in government contracting an explicit enforcement priority, with qui tam provisions enabling whistleblower-initiated lawsuits that carry treble damages and potential debarment. This is not a compliance exercise. It is a legal obligation with contractual, financial, and criminal consequences.

02
The Problem
Why Traditional CMMC Preparation Fails. What It Costs When It Does.

Most organizations pursue CMMC backward. They start with the 110 practices in a spreadsheet. They assign each practice to a team member. They schedule quarterly evidence collection cycles where engineers take screenshots of configurations, export access control lists, and manually upload artifacts to a shared drive or GRC platform. The System Security Plan is written as a narrative document, reviewed by a compliance manager, signed by an authorizing official, and filed. It describes the system as it existed on the day it was written. The evidence collection describes the environment as it existed on the day the screenshots were taken. Between those collection cycles, the actual environment continues to change. This approach produces a compliance artifact that is disconnected from the running system it claims to describe.

Evidence decay is not a theoretical risk. It is a mechanical certainty. Infrastructure drifts within days of evidence collection. Security groups are modified to accommodate new application requirements. IAM policies are updated. New compute instances are launched. Storage buckets are created for ad hoc projects. Services are deployed without updating the SSP or notifying the compliance team. The authorization boundary described in the plan no longer matches the running environment. Network diagrams reflect last quarter's architecture, not this week's. Access control matrices reference roles that have been renamed or decomposed. By the time the C3PAO arrives, the evidence package describes a system that no longer exists in the form documented. The organization enters a scramble phase: re-collecting evidence, re-verifying configurations, re-interviewing personnel, and re-narrating practices under time pressure.

The cost of failure extends well beyond the assessment fee. When an organization fails a C3PAO assessment, the immediate consequence is a lost contract opportunity. Competitors who achieved certification receive the award. The failed organization must remediate, re-engage the C3PAO (or a different one), and attempt certification again. Months of preparation are wasted because the previous cycle's evidence went stale before it could be verified. Delayed certification timelines cascade through business development pipelines: proposals cannot be submitted, teaming agreements cannot be fulfilled, and subcontract obligations cannot be met. Beyond the direct business impact, inaccurate SPRS score reporting carries False Claims Act exposure. Organizations that reported a score of 90 while their actual posture warranted a score of 45 face qui tam lawsuits, treble damages, and debarment proceedings. The Department of Justice Civil Cyber-Fraud Initiative has made clear that cybersecurity misrepresentation in government contracting is an enforcement priority. The C3PAO assessment is not a documentation review. It is a verification of actual security posture. Organizations that treat it as paperwork discover this distinction at the worst possible time.

03
Step 1: Scope Your System
Define the Assessment Boundary. Every Component That Touches CUI.

Scoping defines the assessment boundary: which systems, environments, and data flows the C3PAO will evaluate. This requires identifying where Controlled Unclassified Information (CUI) is processed, stored, and transmitted. The distinction between FCI and CUI determines whether a system requires Level 1 or Level 2 certification. The authorization boundary must account for every component that touches CUI: compute resources, storage systems, network paths, identity providers, external integrations, encryption key management services, and enclave boundaries. Every database that stores CUI, every application that processes it, every network segment that carries it, every user who accesses it, and every external system that receives it must be identified and included. Scoping errors are not isolated mistakes. They propagate through the entire assessment. An authorization boundary that omits a CUI-handling component means that component receives no security scrutiny, no evidence collection, and no C3PAO verification.

Common scoping challenges fall into two categories. Overly broad boundaries inflate cost and complexity by including systems that never touch CUI, subjecting them to all 110 practices unnecessarily. Every additional system in scope adds evidence collection burden, monitoring overhead, and assessment time. Overly narrow boundaries are more dangerous: they leave CUI-handling systems unprotected and unassessed, creating both a security gap and a potential False Claims Act issue if the organization certifies without covering its full CUI footprint. Shared services create particular complexity. Identity providers, centralized logging infrastructure, DNS services, certificate authorities, and network devices that route traffic between CUI and non-CUI segments are in scope even if they do not store CUI directly. They provide security services to CUI-handling systems, which makes them part of the authorization boundary. Cloud deployments add another layer: the shared responsibility model means the organization must document which controls the cloud provider satisfies and which remain the organization's responsibility, and the C3PAO will evaluate both the documentation and the evidence for each.

Redoubt Forge makes scoping approachable, not overwhelming. You do not need to be a CMMC expert to start. Rampart captures your system description, and Artificer guides the process by asking targeted questions: What does this system do? What contracts does it support? Where does CUI enter and leave? Artificer adapts its questions based on what Sentinel has already discovered about your environment. Sentinel runs continuous discovery across your connected accounts and infrastructure, enumerating every resource, configuration, and data path. Garrison displays the discovered estate as a live inventory. When Sentinel identifies resources handling CUI that fall outside the declared scope, the platform flags a potential boundary gap. When Sentinel finds resources inside the boundary that have no CUI data flow justification, it flags those as candidates for scope reduction. The result is a boundary definition backed by real discovery data and guided by intelligence that asks the right questions, not a documentation exercise that requires your team to already know everything before they can begin.
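The boundary-gap logic described above reduces to a set comparison between discovered resources and the declared scope. A minimal sketch of that check, assuming hypothetical resource identifiers and a discovered CUI flag (these are illustrations, not Sentinel APIs):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Resource:
    arn: str            # unique resource identifier (illustrative)
    handles_cui: bool   # discovered CUI data flow (illustrative flag)

def boundary_gaps(discovered: list[Resource], declared_scope: set[str]):
    """Split discovered resources into two review queues:
    CUI handlers missing from the declared boundary, and
    in-scope resources with no CUI justification."""
    missing = [r.arn for r in discovered
               if r.handles_cui and r.arn not in declared_scope]
    reducible = [r.arn for r in discovered
                 if not r.handles_cui and r.arn in declared_scope]
    return missing, reducible

resources = [
    Resource("arn:db-cui", True),
    Resource("arn:web-fci", False),
    Resource("arn:s3-shadow", True),   # handles CUI but was never declared
]
declared = {"arn:db-cui", "arn:web-fci"}
missing, reducible = boundary_gaps(resources, declared)
# missing flags a boundary gap; reducible flags a scope-reduction candidate
```

Either queue is a prompt for a human decision, not an automatic scope change: a flagged gap may reflect a genuine omission or a resource that should never have touched CUI in the first place.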

04
Step 2: Self-Assess Current Posture
Per-Practice Scoring Across Three Dimensions. Before Your C3PAO Arrives.

Self-assessment evaluates your organization against all 110 CMMC Level 2 practices before you engage a C3PAO. For each practice, the assessment must determine: is the control implemented, partially implemented, or not implemented? What evidence exists to prove it? Is that evidence current, or has it aged past its usefulness? Self-assessment is the honest gap analysis that determines whether the organization is ready for a C3PAO engagement or needs more remediation time. This evaluation must be thorough and unflinching. Organizations that skip rigorous self-assessment, or that allow optimism to inflate their self-reported status, risk entering a formal assessment unprepared. The consequence is wasted assessment fees, failed certification, and months of delay while the organization remediates gaps it should have identified internally. A C3PAO engagement costs tens of thousands of dollars. Discovering fundamental gaps during that engagement is the most expensive way to find them.

Self-assessment fails most often because of optimism bias. The team that built the control rates it as "implemented" when an independent assessor would rate it "partially implemented." The distinction matters: a practice is not MET because a policy exists. It is MET because the policy is enforced, the enforcement is observable, and the evidence of enforcement is current. Organizations routinely confuse evidence of existence (a document that describes an access control policy) with evidence of operation (logs proving the policy denied unauthorized access attempts last week). These are fundamentally different evidentiary standards, and a C3PAO will not accept one in place of the other. Self-assessment must score each practice across multiple dimensions: is the defense working, can you prove it is working, and is that proof current? Collapsing these into a single pass/fail obscures critical gaps. A practice with strong technical controls but six-month-old evidence is not assessment-ready. A practice with fresh evidence of a misconfigured control is actively failing. Discovering these distinctions during a C3PAO engagement, rather than during self-assessment, converts a $5,000 internal exercise into a $50,000 failed certification with months of remediation before you can re-engage.
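The three-dimension scoring described above can be sketched as a small decision function that refuses to collapse the dimensions prematurely. The 90-day freshness window and the status strings are illustrative assumptions, not CMMC-mandated values:

```python
from datetime import date, timedelta
from typing import Optional

MAX_EVIDENCE_AGE = timedelta(days=90)  # assumed freshness window

def practice_status(implemented: bool,
                    evidence_collected: Optional[date],
                    today: date) -> str:
    """Evaluate each dimension in order: is the control operating,
    does proof of operation exist, and is that proof current?"""
    if not implemented:
        return "GAP: control not operating"
    if evidence_collected is None:
        return "GAP: no evidence of operation"
    if today - evidence_collected > MAX_EVIDENCE_AGE:
        return "GAP: evidence stale"
    return "assessment-ready"

today = date(2026, 4, 1)
# Strong control, fresh proof: ready for the assessor.
assert practice_status(True, date(2026, 3, 20), today) == "assessment-ready"
# Strong control, six-month-old proof: not assessment-ready.
assert practice_status(True, date(2025, 9, 1), today) == "GAP: evidence stale"
# Policy document exists but nothing proves operation.
assert practice_status(True, None, today) == "GAP: no evidence of operation"
```

The point of the ordering is that "implemented" alone never yields a passing status; evidence of operation and evidence currency are separate gates, mirroring the distinction a C3PAO will enforce.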

The inheritance model accelerates self-assessment for organizations that manage multiple systems. When you register a new system in Rampart, it inherits applicable controls from two levels automatically. Organization-level policies cover controls that apply uniformly: access control procedures, incident response plans, personnel security policies, security awareness training programs, media protection policies. These are defined once and inherited by every system in the organization. Infrastructure-level baselines cover controls that apply to all systems deployed on shared infrastructure: AWS landing zone configurations, centralized logging pipelines, network segmentation architectures, identity provider integrations, encryption key management. These are defined per infrastructure deployment and inherited by every system within that deployment. In practice, a new system starts up to 70% assessed before you evaluate a single system-specific practice. The remainder covers system-unique implementations: application-level access controls, system-specific audit events, data flow protections unique to that system's CUI handling patterns. Rampart cross-maps every practice to NIST 800-171 simultaneously, because CMMC Level 2 and 800-171 are the same 110 security requirements expressed in different organizational structures. Work done during self-assessment feeds both frameworks without duplication.

05
Step 3: Compute Your SPRS Score
The Number on Your DFARS Report. Computed Continuously from Live Data.

The Supplier Performance Risk System score quantifies your NIST 800-171 implementation status as a single number between -203 and 110. A score of 110 indicates full implementation of all 110 security requirements. Each unimplemented practice carries a weighted deduction based on the security significance of the gap. Not all practices carry equal weight. Failures in critical control families like Access Control (AC), System and Communications Protection (SC), and System and Information Integrity (SI) carry larger deductions than failures in Awareness and Training (AT) or Physical Protection (PE). The score is required for DFARS 252.204-7012 reporting, posted to the SPRS portal by the prime contractor or subcontractor, and is increasingly a factor in source selection and contract award decisions. Some contracting officers now specify minimum SPRS score thresholds in solicitations. A score of 110 is not required in all cases, but the gap between the reported score and the minimum threshold determines eligibility. The score is not aspirational. It must reflect the organization's current implementation status.
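The scoring arithmetic follows the NIST SP 800-171 DoD Assessment Methodology: start from the 110-point maximum and subtract each unimplemented requirement's weight (5, 3, or 1 point). A minimal sketch; the weight table below is a three-entry placeholder, not the full methodology, which also includes partial-credit rules for a few requirements:

```python
# Illustrative subset of per-requirement weights. The real DoD
# Assessment Methodology assigns every one of the 110 requirements
# a value of 5, 3, or 1; total possible deductions reach -203.
WEIGHTS = {
    "AC.L2-3.1.1": 5,    # high-impact access control requirement
    "SC.L2-3.13.11": 5,  # CUI cryptography
    "AT.L2-3.2.1": 1,    # awareness and training
}

def sprs_score(unimplemented: set[str], weights: dict[str, int]) -> int:
    """110 minus the summed weights of every unimplemented practice."""
    return 110 - sum(weights[p] for p in unimplemented)

assert sprs_score(set(), WEIGHTS) == 110                       # full implementation
assert sprs_score({"AC.L2-3.1.1", "AT.L2-3.2.1"}, WEIGHTS) == 104
```

This is also why the remediation ordering matters: closing one 5-point access control gap moves the score as much as closing five 1-point training gaps.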

Reporting an inaccurate SPRS score is not a compliance oversight. It is a legal liability. The Department of Justice Civil Cyber-Fraud Initiative has pursued cybersecurity-related False Claims Act cases and established clear precedent that misrepresenting compliance status in government contracting is an enforcement priority. The False Claims Act allows qui tam (whistleblower) lawsuits, meaning a current or former employee, subcontractor, or competitor can initiate legal action on behalf of the government. Penalties include treble damages (three times the government's loss), per-claim fines, and potential debarment from government contracting. Organizations that report a score of 90 when their actual implementation status warrants a score of 45 face material legal exposure. The SPRS score must reflect reality, not aspiration. Every practice marked as implemented must be demonstrably implemented with verifiable evidence. Every unimplemented practice must be accurately reported with a corresponding Plan of Action and Milestones.

Artificer computes your SPRS score from live assessment data in Rampart. Not a manual spreadsheet. Not a point-in-time calculation that goes stale before you submit it to the portal. The score updates continuously as practices are assessed, remediated, or regressed. Artificer applies the DoD weighting methodology to each practice, incorporating the security value of each control family and the specific deduction assigned to each unimplemented requirement. The platform identifies which practices drive the largest deductions and which remediations deliver the greatest score improvement per unit of effort. When your team closes a gap, the score reflects it immediately. When infrastructure drifts and a previously satisfied practice degrades because a configuration changed or an evidence artifact expired, the score reflects that degradation too. You always know your current number, your trajectory over time, and exactly which practices to address for maximum score improvement. The delta between your current score and your target score is decomposed into a prioritized remediation list that maps directly to the action queue in Citadel.

06
Step 4: Remediate and Close Gaps
POA&M Development. Prioritized Action. Hardened Infrastructure from Armory.

For every practice scored NOT MET, the organization must develop a Plan of Action and Milestones (POA&M): what the gap is, what resources are needed to close it, who is responsible, and when it will be resolved. Effective remediation is not a flat list of tasks sorted by severity. It requires prioritization that accounts for cross-practice dependencies, infrastructure prerequisites, and the relative impact of each gap on overall security posture. Some practices affect multiple control families and carry outsized SPRS deductions. Some require infrastructure changes that take weeks to implement and test. Some require only policy documentation and management approval. Some have dependencies: you cannot implement continuous monitoring (SI.L2-3.14.6) without first establishing a baseline configuration (CM.L2-3.4.1). The sequence of remediation matters. Resource allocation across competing priorities matters. The timeline relative to your planned C3PAO engagement matters. Organizations that treat remediation as an undifferentiated task list spend effort on low-impact items while high-impact gaps remain open.

Remediation without cross-practice visibility is where most organizations stall. Teams address gaps in isolation, unaware that closing one practice depends on infrastructure changes required by another: you cannot enforce least privilege (AC.L2-3.1.5) without completing the access control policy that defines authorized privileges (AC.L2-3.1.1). These dependency chains are invisible in a flat task list sorted by severity. Without understanding them, teams spend weeks remediating a downstream practice only to discover the prerequisite is still open. Resource allocation compounds the problem. Security engineering, IT operations, and policy authors all compete for the same remediation bandwidth, and CMMC preparation competes with operational work that cannot stop. Prioritization requires knowing which gaps carry the largest SPRS deductions, which remediations satisfy multiple practices simultaneously, and which infrastructure changes enable cascading closures across control families. The timing pressure is real: C3PAO engagement dates are scheduled months in advance, and postponing an assessment carries its own costs in contract compliance, prime contractor pressure, and source selection risk. Organizations that enter remediation without a dependency-aware, impact-ranked plan burn months on low-value work while high-impact gaps persist.
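Dependency-aware, impact-ranked ordering is, mechanically, a topological sort with a tiebreak on SPRS deduction. A sketch using Python's standard-library graphlib; the dependency map and deduction values are illustrative, drawn from the examples in the text:

```python
from graphlib import TopologicalSorter

# Hypothetical dependency map: each practice lists the prerequisites
# that must close before it can be remediated.
DEPENDS_ON = {
    "SI.L2-3.14.6": {"CM.L2-3.4.1"},  # monitoring needs a baseline config
    "AC.L2-3.1.5": {"AC.L2-3.1.1"},   # least privilege needs the AC policy
    "CM.L2-3.4.1": set(),
    "AC.L2-3.1.1": set(),
}
# Illustrative SPRS deductions, used to rank work inside each wave.
DEDUCTION = {"SI.L2-3.14.6": 5, "CM.L2-3.4.1": 5,
             "AC.L2-3.1.5": 3, "AC.L2-3.1.1": 5}

def remediation_waves(deps, deduction):
    """Yield practices in dependency order; within each ready set,
    tackle the largest SPRS deduction first."""
    ts = TopologicalSorter(deps)
    ts.prepare()
    waves = []
    while ts.is_active():
        ready = sorted(ts.get_ready(), key=lambda p: -deduction[p])
        waves.append(ready)
        ts.done(*ready)
    return waves

waves = remediation_waves(DEPENDS_ON, DEDUCTION)
# Wave 1 holds the prerequisites; wave 2 holds their dependents.
```

Even at this toy scale, the ordering surfaces the failure mode described above: scheduling SI.L2-3.14.6 in the first sprint wastes the sprint, because its prerequisite baseline is still open.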

Armory provides hardened Terraform modules that satisfy CMMC practices from the first deployment. An encryption module that satisfies SC.L2-3.13.11 (CUI Encryption) by configuring storage-level and transit-level encryption with key management policies that meet NIST requirements. A logging module that satisfies AU.L2-3.3.1 (System Auditing) by deploying centralized log collection with tamper-evident storage, retention policies, and alerting on audit processing failures. A network segmentation module that satisfies SC.L2-3.13.1 (Boundary Protection) by establishing network boundaries between CUI and non-CUI segments with enforced traffic inspection and deny-by-default ingress rules. These are not templates that require manual configuration. They are deployable infrastructure code with STIG parameters and CIS benchmark configurations built in. Deploy the module, and the practice is satisfied by design. The infrastructure IS the evidence. Sentinel monitors every remediation action and re-evaluates affected practices as changes take effect. For certain infrastructure drift scenarios, Sentinel can auto-remediate after approval: if a storage bucket loses its encryption configuration, Sentinel detects the drift and restores the compliant state automatically within your defined change windows. Each closed gap updates your SPRS score, your practice coverage percentage, and your readiness posture across all mapped frameworks in real time.

07
Step 5: Engage Your C3PAO
Evidence Package Preparation. Scoped Assessor Access via Alliance.

Engaging a C3PAO begins with selection through the Cyber AB marketplace at CyberAB.org. The C3PAO confirms the assessment scope, reviews the system description and authorization boundary documentation, establishes evidence package requirements, and schedules the assessment timeline. Preparation for the engagement includes compiling practice narratives (implementation descriptions for each of the 110 practices that explain how the organization satisfies the requirement), cross-referencing evidence artifacts to each practice, confirming POA&M status for known gaps with remediation timelines, identifying personnel who will participate in interviews for each control family, and documenting operational processes for observation. The quality and completeness of this preparation determines the assessment's pace. Assessors who receive well-organized evidence packages with clear practice-to-evidence mappings complete their review efficiently. Assessors who receive disorganized artifacts, missing cross-references, or narratives that do not match the observed environment spend weeks requesting clarification. That delay is the organization's cost, not the C3PAO's.

The most common cause of slow, expensive C3PAO assessments is a disorganized evidence package. Assessors who receive artifacts scattered across shared drives, email threads, and ticket systems spend days reconstructing which evidence supports which practice. Narratives that describe intended implementations rather than observed reality create immediate credibility problems: when the assessor interviews your system administrator and the answers do not match the written narrative, every subsequent narrative is treated with skepticism. Missing cross-references between practices and their supporting evidence force the assessor to request clarification for each gap, turning a three-week assessment into a six-week assessment. The cost of that delay falls entirely on the organization. Beyond time, disorganized evidence packages create substantive risk. A practice narrative for AC.L2-3.1.1 that claims role-based access control is enforced but provides no evidence of the specific IAM policies, no access review logs, and no configuration snapshots from the identity provider gives the assessor no choice but to mark the practice NOT MET. The evidence existed in the environment but was never collected, organized, or cross-referenced to the practice it supports. Every hour the assessor spends hunting for evidence is an hour not spent validating your actual security posture.

Alliance grants your C3PAO time-bound, read-only access to Rampart. The assessor navigates your controls, evidence, findings, and narratives independently without relying on your team to pull artifacts on demand during the assessment. They can view every practice, drill into the evidence chain for each, examine the scoring methodology, and download artifacts for their own records. Every action the assessor takes within Alliance is logged: what they viewed, what they downloaded, when they accessed it, and from which network location. This creates a chain of custody for the assessment itself, proving that the assessor had access to the evidence they reference in their report. OSCAL export produces machine-readable assessment packages in the format specified by NIST for C3PAOs that accept structured data submissions. The assessor receives verifiable proof from running systems, organized by practice family, with immutable evidence chains and complete provenance. Not a binder of screenshots assembled the week before the assessment. Not a shared drive folder with ambiguous filenames. A structured, navigable, evidence-backed compliance package.

08
Step 6: The Assessment
MET. NOT MET. NOT APPLICABLE. Immutable Evidence for Every Practice.

The C3PAO evaluates each of the 110 practices against three outcomes: MET (the practice is implemented and operating effectively as described), NOT MET (the practice is not implemented, not operating as described, or the evidence is insufficient to demonstrate effectiveness), and NOT APPLICABLE (the practice does not apply based on the scoped authorization boundary and system description). Evidence review is the foundation of the assessment. The assessor examines artifacts for each practice, interviews personnel responsible for implementing and operating controls, and observes processes in action. The assessment is not a single-day event. It typically spans multiple weeks: an initial document and evidence review phase, a detailed evidence validation phase where the assessor verifies that artifacts correspond to the described implementations, on-site activities including interviews and process observations, and a reporting phase where findings are compiled and communicated. The rigor of this process is why preparation matters.

During the assessment period, the platform provides a point-in-time snapshot frozen at the assessment start date. Your team continues operational work in the live environment. The assessor reviews the frozen snapshot, ensuring that the evidence they evaluate is stable and consistent throughout their review period. Fixes implemented during the assessment period are tracked separately through real-time event projection: the assessor can see both the frozen state (what the system looked like at assessment start) and the current state, with a diff showing what changed and when. This allows the C3PAO to credit remediation actions taken during the assessment window without losing the integrity of the original evidence package. Every practice in the platform carries a complete provenance chain: what defense satisfies it, what evidence supports that defense, who verified the evidence, when it was last verified, which source system produced the proof, and the full chain of custody from collection to presentation.
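The frozen-snapshot diff described above reduces to comparing two practice-status maps: the state at assessment start against the live state. A minimal sketch with hypothetical field shapes (the platform's actual event projection is richer than a flat dict):

```python
def snapshot_diff(frozen: dict[str, str], current: dict[str, str]) -> dict:
    """Return practices whose status changed since assessment start,
    mapped to (frozen_status, current_status) pairs."""
    return {p: (frozen[p], current.get(p))
            for p in frozen if current.get(p) != frozen[p]}

# State captured at assessment start vs. the live environment today.
frozen = {"AC.L2-3.1.5": "NOT MET", "SC.L2-3.13.11": "MET"}
current = {"AC.L2-3.1.5": "MET", "SC.L2-3.13.11": "MET"}

changed = snapshot_diff(frozen, current)
# The assessor sees one practice remediated during the window.
```

The diff is what lets a C3PAO credit mid-assessment remediation without disturbing the frozen evidence: the original package stays intact, and the change set is presented alongside it with timestamps.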

Every compliance event in the platform is stored as an immutable record with a SHA-256 integrity hash, OpenTelemetry trace ID, user ID, session ID, and timestamp. The assessor can verify that evidence has not been modified after collection. This is not a trust assertion. It is cryptographic proof. If an evidence artifact was collected on March 15 and the assessment begins on April 1, the assessor can verify that the artifact's integrity hash has not changed between those dates. The audit trail is not a separate document maintained alongside the evidence. The events ARE the audit trail. Every configuration snapshot, scan result, policy approval, access review, and remediation action is recorded as an immutable event with full provenance metadata. For assessors accustomed to reviewing static evidence packages (PDF exports, screenshots, signed attestation letters), this represents a fundamentally different verification model. Instead of checking whether screenshots match narrative descriptions, the assessor verifies whether the system's own event stream demonstrates continuous control satisfaction across the entire evidence collection period.
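The integrity check an assessor performs is a digest recomputation: hash the stored event payload in a canonical form and compare against the recorded digest. A sketch using SHA-256 over canonically serialized JSON; the field names are illustrative, not the platform's event schema:

```python
import hashlib
import json

def event_hash(event: dict) -> str:
    """SHA-256 over a canonical JSON serialization: sorted keys and
    fixed separators so the same payload always hashes identically."""
    canonical = json.dumps(event, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

evidence = {
    "practice": "AU.L2-3.3.1",
    "artifact": "log-pipeline-config",
    "collected_at": "2026-03-15T09:00:00Z",
}
stored_digest = event_hash(evidence)       # recorded at collection time

# Weeks later, the assessor recomputes the digest over the stored payload.
assert event_hash(evidence) == stored_digest   # untouched: verifies
tampered = {**evidence, "collected_at": "2026-04-02T00:00:00Z"}
assert event_hash(tampered) != stored_digest   # any edit breaks the match
```

Canonical serialization is the load-bearing detail: without sorted keys and fixed separators, two byte-different encodings of the same logical event would produce different digests and false tamper alarms.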

09
Step 7: Conditional Certification
The 180-Day Remediation Window. POA&M Tracking. Final Certification.

Organizations that meet most practices but have open gaps receive Conditional CMMC Status. This is not a failure. It is a structured remediation path defined by the CMMC rule, available only when the assessment score reaches at least 80 percent of the maximum (88 of 110 for Level 2) and every open item is POA&M-eligible; the highest-weighted requirements generally cannot be deferred to a POA&M. The organization has 180 calendar days from the date of the conditional determination to close all open POA&M items and achieve Final CMMC Certification. If the gaps are not closed within that window, the conditional status expires. There is no extension mechanism in the current rule. There is no appeal process for missed deadlines. The 180-day clock is firm, and organizations must plan their remediation capacity accordingly. Not all gaps are equally tractable: infrastructure changes may require procurement cycles, policy changes may require management approval chains, and personnel training requirements have scheduling constraints. The 180-day window requires a realistic remediation plan on day one, not an optimistic assumption that everything will resolve in time.

Each open POA&M item during the conditional period must have a defined remediation plan with specific technical steps, a responsible party with authority and resources to execute, explicit resource allocation (budget, personnel, infrastructure), and a target closure date that falls within the 180-day window. Progress must be demonstrable, not claimed. The C3PAO may conduct a follow-up assessment to verify that POA&M items have been genuinely remediated, not just administratively closed. This follow-up examines whether the remediation action actually satisfies the practice: updated configurations must be deployed and operational, new policies must be approved and distributed, training must be completed and documented, technical controls must be implemented and producing evidence. Documentation of remediation actions, updated evidence artifacts, and re-assessment of affected practices against all three practice outcomes (MET, NOT MET, NOT APPLICABLE) are all expected during the closeout verification.

Rampart tracks every POA&M item against the 180-day deadline with countdown visibility from day one. Each item displays days remaining, current remediation status, assigned owner, and linked evidence showing progress. Sentinel monitors remediation progress in real time: when infrastructure changes are made to close a gap (a new encryption configuration deployed, a logging pipeline activated, a network segmentation rule applied), Sentinel detects the change, collects new evidence from the updated infrastructure, and Rampart re-evaluates the affected practice score. Artificer flags items approaching their deadline without sufficient progress based on the gap between current status and required completion. Escalation triggers fire at configurable intervals: 30 days remaining, 14 days, 7 days. The escalation path follows the organization's notification chain, ensuring that remediation items do not age silently into expiration. When all POA&M items are closed and verified, the platform generates the final evidence package for the C3PAO's closeout review.
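The countdown-and-escalation logic can be sketched as a pure function of the conditional determination date, today's date, and the open-item list. The threshold values mirror the configurable intervals mentioned above; everything else (item names, owners, message format) is hypothetical:

```python
from datetime import date, timedelta

ESCALATION_DAYS = (30, 14, 7)  # assumed configurable thresholds

def poam_alerts(conditional_date: date, today: date,
                open_items: dict[str, str]) -> list[str]:
    """Flag open POA&M items once the 180-day window crosses
    an escalation threshold; report expiry once it passes."""
    deadline = conditional_date + timedelta(days=180)
    remaining = (deadline - today).days
    if remaining < 0:
        return [f"EXPIRED: {item}" for item in open_items]
    if not any(remaining <= d for d in ESCALATION_DAYS):
        return []  # still comfortably inside the window
    return [f"{item} ({owner}): {remaining} days remaining"
            for item, owner in open_items.items()]

# Conditional status granted 2026-01-02; deadline lands on 2026-07-01.
alerts = poam_alerts(date(2026, 1, 2), date(2026, 6, 20),
                     {"AC.L2-3.1.5": "iam-team"})
```

Driving the alert off the conditional date rather than a manually maintained due-date field is the design point: the 180-day clock is fixed by rule, so the deadline should be derived, never typed in.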

10
Step 8: Maintain Continuous Compliance
Post-Certification Monitoring. Posture Preservation Between Assessment Cycles.

Achieving CMMC certification marks the start of an ongoing obligation, not the end of a project. CMMC Level 2 certification is valid for three years, with the expectation that the organization maintains its security posture continuously throughout that period. Infrastructure changes daily. Personnel turn over. New contracts introduce new CUI handling requirements that expand the authorization boundary. Cloud providers update services, deprecate features, and modify shared responsibility models. Threat models evolve as adversaries develop new techniques. Without continuous monitoring, the security posture that earned certification degrades. Evidence that was fresh at certification time ages past its usefulness. Controls that were operating effectively stop operating when the people who maintained them leave or when the infrastructure they protected is reconfigured. Configuration baselines drift as operational teams make changes without assessing compliance impact. The next assessment cycle arrives, and the organization faces the same scramble it endured the first time: re-collecting evidence, re-verifying controls, re-narrating practices that may no longer reflect the running environment.

The three years between CMMC assessments are where certification quietly erodes. Evidence staleness is the most pervasive threat: configuration snapshots collected during the initial assessment lose their evidentiary value within months. An access review completed in January does not prove access control effectiveness in October. A vulnerability scan from Q1 does not demonstrate flaw remediation compliance in Q3. Without continuous evidence collection, the organization accumulates a growing gap between its certified state and its actual state. Infrastructure drift compounds the problem. Operational teams modify security group rules, adjust encryption settings, reconfigure logging pipelines, and deploy new resources without evaluating the compliance impact of each change. A single widened firewall rule can affect SC.L2-3.13.1 (Boundary Protection) and AC.L2-3.1.3 (Control CUI Flow) simultaneously, but no one maps the change to the affected practices until the next assessment preparation cycle. Personnel turnover introduces a different category of risk. The engineer who built the monitoring architecture leaves. The compliance analyst who maintained the evidence library transfers to another project. Institutional knowledge of which controls depend on which infrastructure components disappears. When the next assessment cycle arrives, the organization faces a cold start: re-discovering its own security posture, re-collecting evidence from scratch, and rebuilding the practice narratives it already wrote once.

The convergence loop ties every platform capability into continuous posture maintenance. Vanguard scan results feed the compliance engine continuously. New vulnerabilities discovered in application code or infrastructure configurations are mapped to affected CMMC practices. A new finding in a web application maps to SI.L2-3.14.1 (Flaw Remediation) and SC.L2-3.13.8 (CUI on Public Systems) depending on the system's exposure profile. Garrison tracks the complete infrastructure estate and detects unauthorized changes: new resources deployed outside the declared authorization boundary, decommissioned resources that still appear in the SSP, and infrastructure modifications that were not routed through change management. Artificer monitors posture trends across all dimensions (defense effectiveness, evidence coverage, evidence freshness) and surfaces degradation before it becomes an assessment finding. The platform converges your systems toward the declared desired state continuously, without waiting for a quarterly review cycle, an upcoming assessment deadline, or a compliance manager's reminder. When your next C3PAO engagement begins in three years, it starts from a position of demonstrated continuous compliance with a complete evidence history. Not a cold start. Not a scramble. A continuation.
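Evidence-freshness monitoring, one of the posture dimensions named above, can be illustrated with a short sketch. The 90-day staleness threshold is an assumption for the example, not a CMMC requirement or the platform's default:

```python
from datetime import date, timedelta

def stale_evidence(evidence_dates: dict[str, date], today: date,
                   max_age_days: int = 90) -> list[str]:
    """Practices whose newest evidence artifact is older than max_age_days.

    evidence_dates maps practice ID -> date of the most recent artifact.
    The 90-day default is an assumed freshness threshold for illustration.
    """
    cutoff = today - timedelta(days=max_age_days)
    return sorted(p for p, d in evidence_dates.items() if d < cutoff)
```

Run continuously, a check like this surfaces the January access review that no longer proves anything in October, long before an assessor would.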

11
Cross-Framework Leverage
One Security Posture. Every Derived Framework Computed Simultaneously.

The derivation chain between compliance frameworks is structural, not approximate. CMMC Level 2 IS NIST 800-171 rev2. The 110 practices are the same 110 security requirements. NIST 800-171 derives from the NIST 800-53 Moderate baseline: each 800-171 requirement traces to one or more specific 800-53 controls through a published mapping in NIST SP 800-171A. FedRAMP baselines (Low, Moderate, High, LI-SaaS) are specific control selections from the same NIST 800-53 catalog. SOC 2 Trust Service Criteria map to 800-53 control families through published cross-walks maintained by the AICPA. ISO 27001:2022 Annex A controls have NIST-published mappings to 800-53 through the NIST Cybersecurity Framework. These relationships are deterministic and auditable. Work done for CMMC simultaneously satisfies controls in every framework that traces back to the same NIST lineage. The investment in CMMC compliance is not a single-use expenditure. It compounds across your entire compliance portfolio. An organization that achieves CMMC Level 2 has already satisfied a substantial percentage of the controls required for FedRAMP Moderate, SOC 2 Type II, and ISO 27001 certification.

A concrete example demonstrates how the derivation chain works in practice. CMMC practice AC.L2-3.1.1 (Limit system access to authorized users, processes acting on behalf of authorized users, and devices) traces to NIST 800-171 requirement 3.1.1, which derives from NIST 800-53 control AC-2 (Account Management). In FedRAMP Moderate, this same AC-2 control applies with FedRAMP-specific parameter requirements for account review frequency, automated enforcement mechanisms, and evidence collection intervals. In SOC 2, the same underlying capability maps to CC6.1 (Logical and Physical Access Controls) under the Common Criteria. In ISO 27001:2022, it maps to A.8.2 (Privileged Access Rights) in Annex A. One CMMC practice assessed. One set of defenses implemented. One evidence chain collected. Five frameworks advanced. The derivation chain is deterministic and auditable at every link: from the CMMC practice to the 800-171 requirement to the 800-53 control to the target framework's specific criterion. No interpretation required. No approximate alignment. Structural derivation.
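The chain walk itself is mechanical. A minimal sketch using the AC.L2-3.1.1 example above: the edge list is a toy stand-in for the published mappings (NIST SP 800-171A, AICPA cross-walks), and the `resolve` function simply follows every link transitively:

```python
# Toy derivation graph: (framework, control) -> derived (framework, control)
# pairs. Real edges come from published mappings, not this hand-built dict.
CHAIN = {
    ("cmmc", "AC.L2-3.1.1"): [("800-171", "3.1.1")],
    ("800-171", "3.1.1"):    [("800-53", "AC-2")],
    ("800-53", "AC-2"):      [("fedramp-moderate", "AC-2"),
                              ("soc2", "CC6.1"),
                              ("iso27001", "A.8.2")],
}

def resolve(framework: str, control: str) -> set[tuple[str, str]]:
    """All controls reachable from a starting control via the derivation chain."""
    seen: set[tuple[str, str]] = set()
    stack = [(framework, control)]
    while stack:
        node = stack.pop()
        for nxt in CHAIN.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen
```

Resolving the single CMMC practice reaches the 800-171 requirement, the 800-53 control, and all three target-framework criteria in one traversal, which is exactly why one practice assessed advances five frameworks.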

Rampart maintains the cross-reference engine that resolves these derivation chains through five strategies: native control mapping (direct practice-to-practice relationships published by the framework authority), NIST 800-53 derivation chain tracing (following the path from any framework back through 800-53 to any other framework that derives from it), NIST CSF 2.0 bridging (using the Cybersecurity Framework's function/category/subcategory structure as an intermediary between frameworks that lack direct mappings), published cross-walks from authoritative sources (AICPA for SOC 2, ISO for 27001, NIST for all NIST publications), and AI-suggested mappings that require human confirmation before activation. As you satisfy CMMC practices, Rampart computes your readiness percentage for every other framework in the catalog in the background. The computation is not a summary estimate. It resolves each individual control relationship through the derivation chain and accounts for framework-specific parameter differences (FedRAMP may require the same control as CMMC but with a different review frequency or evidence retention period). When you activate a new framework assessment, it arrives pre-populated from your existing CMMC work. The marginal effort to add each subsequent framework decreases because the control overlap compounds through the derivation chain. One security posture. Every framework computed.
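The readiness computation described above can be sketched as follows. This is a deliberate simplification: it ignores the framework-specific parameter differences the text mentions (review frequency, retention period) and assumes a pre-resolved mapping from each target-framework control to the CMMC practices that satisfy it; the function name and input shape are illustrative, not Rampart's API:

```python
def readiness(target_controls: dict[str, set[str]], satisfied: set[str]) -> float:
    """Percentage of target-framework controls covered by satisfied CMMC practices.

    target_controls: target control ID -> CMMC practices that satisfy it
                     (already resolved through the derivation chain).
    A control counts as covered only when every mapped practice is satisfied.
    """
    if not target_controls:
        return 0.0
    covered = sum(1 for practices in target_controls.values()
                  if practices and practices <= satisfied)
    return 100.0 * covered / len(target_controls)
```

Under this model, satisfying one of two mapped controls yields 50% readiness; a production engine would additionally down-weight controls whose parameters differ from the CMMC baseline.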

Something is being forged.

The full platform is under active development. Reach out to learn more or get early access.