SOC 2 Compliance. Continuous Evidence. Not Quarterly Screenshots.

SOC 2 Compliance Platform

Five Trust Service Criteria assessed from running infrastructure. Continuous evidence collection across your entire observation period. Type I design verification and Type II operating effectiveness from the same platform. Immutable compliance proofs for your auditor. Every criterion traced to its underlying NIST 800-53 control.

Security posture generates compliance proofs. Not the other way around.

SOC 2 Type II requires evidence of continuous control operation across an observation period of three to twelve months. Most organizations collect that evidence in bursts: quarterly screenshot cycles, manual artifact assembly, rushed evidence packages before the auditor arrives. Redoubt Forge inverts that sequence. Start with actual defenses; the platform observes your security posture, maps it to every Trust Service Criterion, collects evidence continuously from connected infrastructure, and generates immutable compliance proofs for your CPA firm.

01
What Is SOC 2
The AICPA's Trust Framework for Service Organizations.

SOC 2 is a reporting framework developed by the American Institute of Certified Public Accountants (AICPA). It defines criteria for how service organizations should manage customer data, and it is evaluated exclusively by licensed CPA firms. SOC 2 is not a certification. It is an attestation report: the auditor examines your controls, tests their design and operating effectiveness, and issues an opinion on whether those controls meet the applicable Trust Service Criteria (TSC). The distinction matters. A certification says "this organization passed." An attestation report says "this auditor examined these controls and here is what they found." The report includes the auditor's opinion, a description of the system under examination, the criteria tested, the tests performed, and the results. Two report types exist. A Type I report evaluates the design of controls at a specific point in time: are the right controls in place as of this date? A Type II report evaluates both design and operating effectiveness over a sustained observation period, typically three to twelve months: were the controls not only designed correctly but operating consistently throughout the period? Type II is what enterprise customers require because it demonstrates sustained discipline, not a single-day snapshot.

SOC 2 is built on five Trust Service Criteria. Security (the Common Criteria, CC1 through CC9) is required for every SOC 2 engagement. It covers logical and physical access controls, system operations, change management, and risk mitigation. The remaining four criteria are optional: Availability (the system is available for operation and use as committed), Processing Integrity (system processing is complete, valid, accurate, timely, and authorized), Confidentiality (information designated as confidential is protected as committed), and Privacy (personal information is collected, used, retained, disclosed, and disposed of in conformity with the organization's commitments). Organizations select which criteria apply based on their service commitments and the nature of the data they handle. A SaaS platform handling financial data might include Security, Availability, and Confidentiality. A healthcare data processor might include Security, Availability, Confidentiality, and Privacy. The selection decision is strategic: including unnecessary criteria adds scope and cost, while omitting relevant criteria raises questions from customers who expect coverage.

SOC 2 has become the de facto trust standard for SaaS companies, cloud service providers, managed service providers, and technology companies that handle customer data. Enterprise customers routinely require SOC 2 Type II reports as a condition of vendor onboarding. Investors evaluate SOC 2 status during due diligence. Partners include SOC 2 requirements in their contractual terms. Unlike government frameworks such as CMMC or FedRAMP, SOC 2 is entirely market-driven. No regulation mandates it. No government agency enforces it. Its authority derives from universal adoption: when every enterprise procurement team asks for a SOC 2 report, the framework becomes a market requirement with the force of a regulatory one. Organizations without a current SOC 2 Type II report face longer sales cycles, deal-blocking security questionnaires, and lost opportunities to competitors who can produce a clean report on demand. The report is not a differentiator. It is table stakes. The question is not whether you need SOC 2. The question is how you maintain it continuously without consuming your engineering team.

02
The Problem
Why Traditional SOC 2 Preparation Fails. What It Costs When It Does.

The fundamental contradiction of traditional SOC 2 preparation is that Type II reports evaluate continuous control operation, but evidence is collected in periodic bursts. The audit firm tests whether controls operated effectively over a three-to-twelve-month observation period. That means they need evidence from throughout the period, not just from the week before the audit. Yet most organizations run quarterly evidence collection cycles: engineers take screenshots of access control configurations, export firewall rules, pull user access lists, capture change management records, and upload everything to a shared drive or GRC platform. Between those collection cycles, the environment changes. Security groups are modified. IAM policies are updated. New services are deployed. Personnel changes alter access patterns. The evidence collected in January describes a system that no longer exists in April. The evidence collected in April describes a system that will not exist in July. By the time the auditor begins their review, the evidence package contains a series of point-in-time snapshots that fail to demonstrate the continuous operation the report is supposed to attest to.

The gap year problem compounds this failure. Between audit periods, controls degrade without detection. The annual access review that demonstrated CC6.1 compliance in last year's audit has not been conducted this year because the compliance manager changed roles. The change management process that satisfied CC8.1 has been bypassed for three months because a new development team routes changes through a different workflow. Monitoring configurations that demonstrated CC7.2 have been modified to reduce alert volume, inadvertently disabling detection capabilities that the auditor tested. Type II requires evidence that controls operated consistently throughout the observation period. Controls that degrade between audits create gaps that surface only when the next audit begins. At that point, the observation period has already started, and gaps in evidence from the early months cannot be retroactively filled. The organization either delays the audit to establish a clean observation period (losing months of progress) or proceeds with known evidence gaps that may result in exceptions or qualified opinions in the report.

The cost of a failed or qualified SOC 2 audit extends far beyond the engagement fees. Enterprise deals stall when procurement teams discover exceptions in the report. A qualified opinion on Security criteria signals to potential customers that the organization's controls were not operating effectively during the period examined. Sales teams spend cycles explaining the exceptions, negotiating risk acceptance letters, and watching deals slip while competitors with clean reports advance. Exception notes in the report remain visible for the life of that report, typically twelve months until the next Type II is issued. Remediation costs compound: re-engaging the auditor for a bridge letter or early re-examination adds fees. Re-collecting evidence for the remediated period adds engineering hours. Re-establishing trust with prospects who received the qualified report adds sales cycles. Organizations that treat SOC 2 as an annual project rather than a continuous discipline discover that the cost of catching up always exceeds the cost of staying current.

03
The Five Trust Service Criteria
Security. Availability. Processing Integrity. Confidentiality. Privacy.

Security (CC1 through CC9) is the Common Criteria and the only required category in every SOC 2 engagement. It spans nine control areas that form the foundation for all other criteria. CC1 covers the Control Environment: governance, organizational structure, accountability, and the tone at the top that determines how seriously the organization treats security. CC2 addresses Communication and Information: how security policies are communicated, how incidents are reported, and how external parties are informed of obligations. CC3 covers Risk Assessment: identifying threats, evaluating vulnerabilities, and assessing potential impact to the organization's objectives. CC4 establishes Monitoring Activities: ongoing evaluation of whether controls continue to function as designed. CC5 addresses Control Activities: the specific policies, procedures, and technical measures that implement the organization's security objectives. CC6 covers Logical and Physical Access Controls: authentication, authorization, access provisioning, and access revocation. CC7 addresses System Operations: monitoring for anomalies, incident detection, and incident response. CC8 covers Change Management: how changes to infrastructure, software, and processes are authorized, tested, and deployed. CC9 addresses Risk Mitigation: vendor management, business continuity, and the controls that address risks identified during assessment.

Availability applies when the organization commits to specific uptime or performance levels for its service. It requires controls that ensure the system remains operational and accessible as agreed: capacity planning, disaster recovery, backup procedures, and incident response for availability events. Organizations offering SaaS platforms with SLA commitments typically include this criterion. Processing Integrity applies when the system processes transactions or data transformations where accuracy matters. It requires controls that ensure processing is complete, valid, accurate, timely, and authorized. Financial processing platforms, data pipelines, and any system where incorrect output has material consequences should include this criterion. Confidentiality applies when the organization handles information classified as confidential by contract, regulation, or organizational policy. It requires controls that protect confidential information throughout its lifecycle: collection, processing, storage, transmission, and disposal. This criterion is distinct from Privacy; confidentiality applies to any confidential data (trade secrets, financial models, intellectual property), not just personal information.

Privacy is the most complex of the five criteria. It applies when the organization collects, uses, retains, discloses, or disposes of personal information. Privacy maps to the AICPA's privacy principles: Notice (informing individuals about data practices), Choice and Consent (providing options regarding data use), Collection (collecting only the personal information needed), Use/Retention/Disposal (limiting use to stated purposes and retaining data only as long as needed), Access (allowing individuals to review and update their personal information), Disclosure (restricting disclosure to authorized parties), Security (protecting personal information), and Quality (maintaining accurate, complete personal information). Organizations that process personal data on behalf of customers, such as HR platforms, healthcare technology companies, and consumer-facing applications, should evaluate whether Privacy is required. Including Privacy adds significant scope: the auditor examines not just technical controls but also the organization's data governance practices, privacy notices, consent mechanisms, data subject access request processes, and data retention schedules. The decision to include or exclude Privacy should be deliberate and documented, driven by the nature of the data handled and the commitments made to customers and data subjects.

04
Step 1: Scope
Select Criteria. Define System Boundaries. Identify Every Component Under Examination.

Scoping a SOC 2 engagement requires two decisions: which Trust Service Criteria apply, and what system boundaries define the examination. The criteria selection determines what the auditor evaluates. Security is mandatory. Each additional criterion adds scope, evidence requirements, and audit cost. The selection should reflect the organization's actual service commitments and the expectations of its customers. Including Availability when you offer no SLA commitments adds burden without value. Excluding Confidentiality when you handle customer trade secrets creates a gap that procurement teams will question. The system boundary defines which infrastructure, applications, people, procedures, and data the auditor examines. Every component that supports the service described in the report must be included: compute resources, databases, network infrastructure, identity providers, third-party integrations, operational processes, and the personnel who operate and maintain the system. Boundary decisions have cascading effects. Components excluded from scope receive no audit scrutiny, no evidence collection, and no assurance in the final report.

Common scoping challenges mirror those in other framework assessments. Overly broad boundaries include systems that have no bearing on the service under examination, inflating evidence collection burden and audit fees without adding meaningful assurance. Overly narrow boundaries exclude components that support the service, creating gaps that auditors may identify during fieldwork and that customers will question when they review the system description. Shared infrastructure creates particular complexity: identity providers, logging pipelines, monitoring systems, and network devices that serve both in-scope and out-of-scope systems must be addressed. Cloud deployments add the shared responsibility dimension: the organization must document which controls the cloud provider satisfies (and reference the provider's own SOC 2 report for those controls) and which remain the organization's responsibility. Third-party subservice organizations that process data on the organization's behalf may be included in scope (the inclusive method, where the auditor tests the subservice organization's controls directly) or excluded with a carve-out (the carve-out method, where the report notes the reliance without testing).

Redoubt Forge makes scoping approachable, not overwhelming. Rampart captures your system description, selected Trust Service Criteria, and boundary definitions in a structured format that maps directly to the SOC 2 system description requirements. Artificer guides the process by asking targeted questions: What service does this system provide? What data does it handle? What commitments have you made to customers regarding availability, confidentiality, or privacy? Artificer adapts its questions based on what Sentinel has already discovered about your environment. Sentinel runs continuous discovery across connected accounts and infrastructure, enumerating every resource, configuration, and data path. Garrison displays the discovered estate as a live inventory. When Sentinel identifies resources that support the described service but fall outside the declared boundary, the platform flags a potential scope gap. When Sentinel finds resources inside the boundary with no connection to the service, it flags those as candidates for exclusion. The result is a boundary definition backed by real discovery data, not a documentation exercise completed from memory.
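The boundary comparison described above reduces to a simple set difference. This is a minimal sketch of that logic, not the platform's actual API; the resource names, the `supports_service` field, and the `check_boundary` helper are all hypothetical.

```python
# Hypothetical sketch of the scope-gap check described above.
# Field names and resources are illustrative, not Sentinel's data model.
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    supports_service: bool  # did discovery link this resource to the described service?

def check_boundary(discovered: list[Resource], declared_boundary: set[str]):
    """Compare discovered infrastructure against the declared system boundary."""
    scope_gaps = [r.name for r in discovered
                  if r.supports_service and r.name not in declared_boundary]
    exclusion_candidates = [r.name for r in discovered
                            if not r.supports_service and r.name in declared_boundary]
    return scope_gaps, exclusion_candidates

discovered = [
    Resource("prod-db", True),
    Resource("shared-idp", True),       # supports the service but was never declared in scope
    Resource("marketing-site", False),  # declared in scope but unrelated to the service
]
gaps, candidates = check_boundary(discovered, {"prod-db", "marketing-site"})
print(gaps)        # → ['shared-idp']       flagged as a potential scope gap
print(candidates)  # → ['marketing-site']   flagged as a candidate for exclusion
```

Both lists matter: the first is the gap auditors find during fieldwork; the second is scope you are paying to audit for no assurance benefit.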

05
Step 2: Readiness
Gap Assessment Against Selected Criteria. Before You Engage the Auditor.

A readiness assessment evaluates your organization against the selected Trust Service Criteria before you engage a CPA firm. For each criterion and its underlying control points, the assessment must determine: is the control implemented? Is it designed correctly? What evidence exists to prove it? Is that evidence current? Readiness is the honest internal evaluation that determines whether the organization is prepared for a formal engagement or needs more remediation time. Organizations that skip readiness, or that allow optimism to inflate their self-assessment, risk entering a formal audit with fundamental gaps. The consequence is wasted engagement fees, extended audit timelines, and exceptions in the final report that erode the report's value. A SOC 2 engagement with a reputable CPA firm costs tens of thousands of dollars. Discovering that your change management process does not meet CC8.1 requirements during that engagement is the most expensive way to find out.

The core difficulty of readiness is distinguishing between control design and control operation. A control can be correctly designed on paper and still fail in practice. An access review policy may specify quarterly reviews, but if those reviews have not actually occurred, the control is designed but not operating. SOC 2 Type II specifically evaluates operating effectiveness over a sustained period, which means controls that exist only as documented procedures will not survive auditor scrutiny. Evidence gaps present a related challenge: a control may be both well-designed and consistently operated, yet lack the artifacts to prove it. Firewall rules may enforce network segmentation perfectly, but without configuration snapshots, change logs, and review records, the auditor has nothing to examine. The inverse is also common: organizations collect mountains of evidence for controls that were never designed to meet the applicable criterion. Readiness must answer three questions for every control point: does the control exist, does it operate as designed, and can you prove it?

The readiness assessment reveals exactly where gaps exist before the auditor arrives. Rampart categorizes gaps by type: missing controls (the control does not exist), design gaps (the control exists but is not designed to meet the criterion), operating gaps (the control is designed correctly but is not operating consistently), and evidence gaps (the control operates correctly but lacks sufficient evidence to demonstrate it). Each gap type requires a different remediation approach. Missing controls need implementation. Design gaps need re-engineering. Operating gaps need process reinforcement and monitoring. Evidence gaps need collection automation. Identifying which type of gap applies to each control point is critical for efficient remediation. Organizations that treat every gap as "not done yet" waste effort re-implementing controls that were designed correctly but lack evidence, or collecting evidence for controls that were never designed to meet the criterion in the first place. The readiness assessment provides the diagnostic precision that makes remediation targeted rather than wasteful.
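The four gap types form a simple decision ladder: each question is asked only if the previous one passed. A minimal sketch of that classification logic follows; the boolean fields and function name are illustrative assumptions, not Rampart's actual data model.

```python
# Illustrative decision ladder for the four gap types described above.
# Field names are assumptions, not the platform's actual schema.

def classify_gap(exists: bool, designed: bool, operating: bool, evidenced: bool) -> str:
    """Return the gap type for one control point, or 'satisfied'."""
    if not exists:
        return "missing control"   # remediation: implement the control
    if not designed:
        return "design gap"        # remediation: re-engineer the control
    if not operating:
        return "operating gap"     # remediation: reinforce process, add monitoring
    if not evidenced:
        return "evidence gap"      # remediation: automate evidence collection
    return "satisfied"

# A well-run quarterly access review with no artifacts is an evidence gap,
# not a missing control -- it needs collection automation, not re-implementation:
print(classify_gap(exists=True, designed=True, operating=True, evidenced=False))
# → evidence gap
```

The ordering is the point: treating an evidence gap as a missing control wastes a re-implementation cycle on a control that was already working.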

06
Step 3: Remediate
Close Gaps Before the Observation Period Begins.

Remediation must be completed before the observation period begins. This timing is critical and frequently misunderstood. Type II evaluates operating effectiveness over a sustained period. Controls implemented halfway through the observation period can only demonstrate effectiveness for the remaining months. An auditor examining a twelve-month period will note that a control was implemented in month six and can only attest to six months of operation. Some auditors will issue an exception for the incomplete period. Others will recommend shortening the observation window to cover only the period after remediation. Either outcome weakens the report. The strategic approach is to complete all remediation, establish all controls, and begin evidence collection before the observation period clock starts. This gives every control the full period to demonstrate continuous operation. Remediation is not just a technical exercise. It spans four domains: policy (documented procedures and standards), process (operational workflows that implement those policies), infrastructure (technical controls that enforce policies automatically), and training (personnel who understand and follow the procedures).
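The arithmetic behind the timing rule above is worth making concrete: a control can only demonstrate operation from its implementation date forward. This sketch counts attestable months at month granularity, inclusive of both endpoints; the dates and the helper are hypothetical simplifications.

```python
# Back-of-the-envelope check of the timing rule above. Month-granularity
# counting (inclusive of both endpoints) is a simplification for illustration.
from datetime import date

def months_of_operation(implemented: date, period_start: date, period_end: date) -> int:
    """Whole months of operating-effectiveness evidence a control can demonstrate."""
    effective_start = max(implemented, period_start)
    if effective_start > period_end:
        return 0
    return ((period_end.year - effective_start.year) * 12
            + (period_end.month - effective_start.month) + 1)

start, end = date(2024, 1, 1), date(2024, 12, 31)
print(months_of_operation(date(2023, 6, 1), start, end))  # → 12  control predates the period
print(months_of_operation(date(2024, 7, 1), start, end))  # → 6   only half the window is attestable
```

The second case is the one auditors flag: six attestable months in a twelve-month window means either an exception or a shortened period.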

Timing failures derail more SOC 2 remediation efforts than technical complexity. Organizations underestimate how long it takes to close gaps across all four domains simultaneously. A policy rewrite requires legal review, management approval, and communication to affected personnel. An infrastructure change requires development, testing, change approval, deployment, and validation. A training program requires content development, scheduling, delivery, and completion tracking. When these workstreams run in sequence rather than in parallel, the observation period start date slips repeatedly. Dependencies between controls compound the problem: you cannot demonstrate effective monitoring (CC7.2) without first establishing baseline configurations and change management processes (CC8.1). Incomplete controls create a different risk: partial-period operation invites either an exception for the months before implementation or a shortened observation window, and both outcomes lengthen the path to a clean Type II. Policy gaps are the most common blind spot. Teams focus on technical controls and discover during the observation period that supporting policies were never finalized, never approved, or never communicated to the workforce.

Redoubt Forge ranks every open gap by posture impact: how many control points does closing this gap satisfy, and how many frameworks beyond SOC 2 does it advance? A gap that affects CC6.1 (Logical and Physical Access Controls) and simultaneously maps to NIST 800-53 AC-2 and ISO 27001 A.8.2 outranks a gap that affects only a single SOC 2 control point, because closing the first gap moves the organization further toward readiness across its entire compliance portfolio. The platform recommends remediation paths based on your current assessment state, dependency chains between controls, and available resources. It identifies prerequisite relationships and sequences remediation accordingly, ensuring you resolve dependencies before the controls that depend on them. Armory provides hardened Terraform modules that satisfy SOC 2 control points from the first deployment: encryption modules for CC6.1, logging modules for CC7.2, network segmentation modules for CC6.6. These are deployable infrastructure code with hardened configurations built in. Deploy the module, and the control point is satisfied by design. The infrastructure IS the evidence. Sentinel monitors every remediation action and re-evaluates affected controls as changes take effect, updating your readiness percentage and posture across all mapped frameworks in real time.
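The ranking logic above can be sketched as a count over cross-framework mappings. The CC6.1 → AC-2 → A.8.2 edges come from the text; the gap names, the single-criterion second gap, and the flat per-control-point weighting are illustrative assumptions, not the platform's actual scoring.

```python
# Sketch of posture-impact ranking. The CC6.1 / AC-2 / A.8.2 mapping is from
# the text; gap names and the flat weighting are illustrative assumptions.

gaps = {
    "no-automated-deprovisioning": {
        "SOC2": ["CC6.1"], "NIST-800-53": ["AC-2"], "ISO-27001": ["A.8.2"],
    },
    "missing-incident-runbook": {
        "SOC2": ["CC7.4"],  # hypothetical gap touching a single SOC 2 control point
    },
}

def posture_impact(mapped: dict[str, list[str]]) -> int:
    """Count every control point the gap touches across all mapped frameworks."""
    return sum(len(points) for points in mapped.values())

ranked = sorted(gaps, key=lambda g: posture_impact(gaps[g]), reverse=True)
print(ranked[0])  # → no-automated-deprovisioning  (3 control points vs 1)
```

A real scheduler would also weight dependency chains and remediation cost, but the core principle holds: multi-framework gaps move the whole portfolio, so they sort first.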

07
Step 4: Type I
Point-in-Time Assessment of Control Design.

A Type I report evaluates the suitability of control design at a specific point in time. The auditor examines whether the controls described in the system description are designed to meet the applicable Trust Service Criteria as of the report date. Type I does not test operating effectiveness. It does not ask whether controls worked consistently over a period. It asks whether the right controls exist, whether they are designed correctly, and whether the system description accurately reflects the actual environment. Type I serves two primary purposes. First, it provides a formal assessment milestone for organizations preparing for their initial Type II. It validates that control design is sound before the organization commits to a sustained observation period. Second, it satisfies customers and partners who need assurance that controls exist, even without evidence of sustained operation. Many organizations pursue Type I as a stepping stone: achieve Type I to demonstrate design suitability, then transition to Type II by extending the assessment to cover an observation period.

The auditor's evaluation during a Type I engagement focuses on control descriptions, supporting documentation, and evidence that each control is designed to address the relevant criterion. For CC6.1, the auditor examines whether access control policies exist, whether they define authorization requirements, and whether the technical implementation matches the documented design. For CC8.1, the auditor examines whether change management procedures exist, whether they define approval workflows, and whether the infrastructure configuration reflects the described process. The auditor does not test whether these controls operated effectively over time. They test whether the design is suitable to meet the criterion if operated as described. This distinction matters for preparation: Type I evidence focuses on current-state documentation, configuration snapshots at the report date, and demonstration that controls exist and are configured correctly. It does not require months of operational logs, periodic review records, or evidence of sustained monitoring.

Alliance grants your auditor time-bound, read-only access to Rampart. The auditor navigates your controls, evidence, system description, and policy documentation independently without relying on your team to pull artifacts on demand during the engagement. They can view every control point, drill into the evidence chain for each, examine the scoring methodology, and download artifacts for their records. Every action the auditor takes within Alliance is logged: what they viewed, what they downloaded, when they accessed it, and from which network location. This creates a chain of custody for the engagement itself, proving that the auditor had access to the evidence they reference in their report. For Type I, Rampart presents a point-in-time snapshot of control design: current configurations, current policies, current system architecture, and current evidence. The auditor receives structured, navigable proof from running systems. Not a shared drive folder with ambiguous filenames. Not a binder assembled the week before. A verifiable evidence package organized by Trust Service Criterion.

08
Step 5: Type II
Operating Effectiveness Over the Observation Period.

A Type II report evaluates both the design and operating effectiveness of controls over a sustained observation period, typically three to twelve months. This is the report enterprise customers require. It answers the question that Type I cannot: did the controls actually work, consistently, over time? The auditor selects samples from throughout the observation period to test whether controls operated as described. For access controls (CC6.1), the auditor examines user provisioning and deprovisioning events across the full period, not just the current state. For change management (CC8.1), the auditor selects a sample of changes from different months and verifies that each followed the documented approval process. For monitoring (CC7.2), the auditor reviews alert logs, incident records, and response actions across the period to verify that anomalies were detected and addressed. The observation period is not a formality. It is the core of the engagement. Every month of the period must be represented in the evidence, and gaps in any month create the risk of exceptions in the report.

The evidence requirements for Type II are fundamentally different from Type I. Type I needs point-in-time proof that controls exist. Type II needs continuous proof that controls operated. This means logs: access logs showing authentication events across the period, change logs showing approval workflows for every deployment, monitoring logs showing alerts generated and responded to, review logs showing periodic access certifications conducted on schedule. It means records: incident response records for security events that occurred during the period, vendor review records for third-party assessments conducted on schedule, risk assessment records for periodic evaluations completed as documented. It means consistency: the access review that policy requires quarterly must have actually occurred quarterly, with documentation for each occurrence. The backup test that policy requires monthly must have evidence for every month. Gaps in periodic evidence create exceptions. The auditor cannot attest that a quarterly process operated effectively if only two of four quarters are documented.
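The cadence rule above is mechanical enough to check automatically: every required occurrence of a periodic control needs a documented artifact. This is a minimal sketch of that completeness check; the quarter labels and the evidence log are hypothetical.

```python
# Minimal completeness check for periodic evidence, per the cadence rule above.
# Quarter labels and the evidence set are hypothetical examples.

def missing_periods(required: list[str], documented: set[str]) -> list[str]:
    """Return required evidence periods that have no supporting artifact."""
    return [period for period in required if period not in documented]

quarters = ["2024-Q1", "2024-Q2", "2024-Q3", "2024-Q4"]
access_review_evidence = {"2024-Q1", "2024-Q3"}  # only two of four quarters documented

gaps = missing_periods(quarters, access_review_evidence)
print(gaps)  # → ['2024-Q2', '2024-Q4']  each missing quarter is a likely exception
```

Run the same check against every periodic control (monthly backup tests, quarterly access reviews, annual risk assessments) and the Type II exception risk becomes visible months before fieldwork.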

This is where Sentinel's continuous evidence collection directly satisfies Type II's continuous requirement. Sentinel does not collect evidence in quarterly bursts. It collects evidence continuously from every connected infrastructure source: configuration snapshots, access events, change records, monitoring alerts, and operational metrics. Every day of the observation period is covered. When the auditor requests evidence for CC6.1 access controls from month three, the evidence exists because Sentinel collected it in real time during month three. When the auditor samples change management records from month seven, the records exist because Sentinel captured every change event as it occurred. Rampart organizes this continuous evidence stream by Trust Service Criterion and control point, presenting the auditor with a complete, navigable evidence chain for the entire observation period. The auditor can filter by date range, by criterion, by control point, or by evidence type. The platform demonstrates exactly what Type II demands: sustained, continuous, verifiable control operation across every month of the period. No gaps. No missing months. No scramble to reconstruct evidence that should have been collected when it happened.

09
Step 6: Monitor
Between Audit Cycles. Maintain Readiness for the Next Period.

A SOC 2 Type II report covers a specific observation period. The report is issued, distributed to customers, and remains current for approximately twelve months until the next report is issued. Between audit cycles, the organization must maintain its security posture continuously. Controls that degrade between audits create gaps that surface at the start of the next observation period. The change management process that satisfied CC8.1 last year may have drifted as new team members bypass the documented workflow. The access review cadence that demonstrated CC6.1 compliance may have slipped from quarterly to semi-annual. Monitoring configurations that proved CC7.2 effectiveness may have been modified to reduce noise, inadvertently weakening detection capabilities. These degradations accumulate silently. Without continuous monitoring, the organization enters each new audit cycle uncertain of its actual posture, relying on assumptions that controls still operate as they did when last examined. The gap year between audits is where compliance erosion happens.

Evidence freshness is the specific mechanism by which inter-audit degradation becomes visible. Every evidence artifact has a useful lifespan. A configuration snapshot from last month demonstrates current state. A configuration snapshot from eight months ago demonstrates historical state that may no longer reflect reality. Access review records from last quarter demonstrate a current process. Access review records from four quarters ago demonstrate a process that may or may not still be functioning. Most organizations lack a systematic way to track which evidence is approaching staleness across dozens of control points simultaneously. The result is predictable: teams enter the new observation period confident in their posture, only to discover during the auditor's initial walkthrough that key evidence artifacts have expired, periodic reviews were skipped, and configuration changes made months ago were never evaluated for compliance impact. Manual tracking with spreadsheets and calendar reminders breaks down at scale. A single missed access review cycle invalidates CC6.1 evidence for the quarter. A logging configuration change that reduced retention below the required threshold goes unnoticed for months. Personnel turnover compounds the problem: the person who understood the evidence collection cadence leaves, and institutional knowledge leaves with them. Without automated freshness tracking and drift detection, the gap between actual posture and assumed posture widens silently until the next auditor arrives.

Rampart maintains scoring continuously between audit cycles, providing an accurate, real-time view of posture at any moment. The three-dimensional scoring (defense effectiveness, evidence coverage, evidence freshness) updates as Sentinel ingests new data, as evidence ages, and as infrastructure changes. Degradation trends become visible before they become audit findings. A control point that scored well during the last audit but has declining evidence freshness and a recent drift event surfaces in the dashboard weeks or months before the next auditor arrives. Artificer monitors posture trends across all dimensions and surfaces degradation patterns proactively: controls approaching evidence expiration, infrastructure changes that affected previously satisfied criteria, periodic processes approaching their scheduled execution date. The platform converges your systems toward the declared desired state continuously, without waiting for the next audit cycle to begin. When your next Type II observation period starts, it starts from a position of demonstrated continuous compliance with a complete evidence history. Not a cold start. Not a scramble. A continuation of the posture you have maintained since the last report was issued.
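To make the three-dimensional scoring concrete, here is one way such a score could be combined. The dimension names come from the text; the geometric-mean combination is an illustrative assumption, not Rampart's actual formula.

```python
# Sketch of a three-dimensional posture score. The combination rule is an
# assumption for illustration: a geometric mean is chosen so a collapse in
# any single dimension (e.g. all evidence expired) drags the overall score
# down sharply, which matches the failure mode described in the text.

def posture_score(defense: float, coverage: float, freshness: float) -> float:
    """Combine the three dimensions (each in 0..1) into one score."""
    for d in (defense, coverage, freshness):
        if not 0.0 <= d <= 1.0:
            raise ValueError("each dimension must be in [0, 1]")
    return (defense * coverage * freshness) ** (1 / 3)

# A control that tested well but whose evidence has mostly expired:
print(round(posture_score(0.95, 0.90, 0.20), 3))  # → 0.555
```

The design point is that an averaging rule would let strong test results mask expired evidence; a multiplicative rule cannot.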

10
Cross-Framework
One Security Posture. Every Derived Framework Computed Simultaneously.

The derivation chain between compliance frameworks is structural, not approximate. SOC 2 Trust Service Criteria map to NIST 800-53 controls through published cross-walks maintained by the AICPA. SOC 2 CC6.1 (Logical and Physical Access Controls) maps to NIST 800-53 AC-2 (Account Management). The same AC-2 control is the foundation for CMMC practice AC.L2-3.1.1, NIST 800-171 requirement 3.1.1, and FedRAMP Moderate's access control baseline. ISO 27001:2022 Annex A control A.8.2 (Privileged Access Rights) traces to the same underlying security requirement through NIST-published mappings via the Cybersecurity Framework. These relationships are deterministic and auditable. Work done for SOC 2 simultaneously satisfies controls in every framework that traces back to the same NIST lineage. An organization that achieves a clean SOC 2 Type II report has already satisfied a meaningful percentage of the controls required for ISO 27001 certification, FedRAMP authorization, and CMMC Level 2. The investment in SOC 2 compliance is not a single-use expenditure. It compounds across the entire compliance portfolio.

A concrete example demonstrates how the derivation chain works in practice. SOC 2 CC8.1 (Changes to Infrastructure and Software) requires that changes are authorized, tested, and approved before deployment. The underlying security capability maps to NIST 800-53 CM-3 (Configuration Change Control). In CMMC, the same capability satisfies CM.L2-3.4.3 (System Change Management). In FedRAMP Moderate, CM-3 applies with FedRAMP-specific parameter requirements for change documentation, approval evidence, and rollback procedures. In ISO 27001:2022, it maps to A.8.32 (Change Management). One SOC 2 control examined. One set of defenses implemented. One evidence chain collected. Five frameworks advanced. The cross-walk is deterministic at every link: from the SOC 2 criterion to the NIST 800-53 control to the target framework's specific requirement. This is not approximate alignment based on subjective interpretation. It is structural derivation through published, authoritative mappings maintained by the framework authorities themselves.
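The CC8.1 chain above can be expressed directly as data. The mapping table below is a toy excerpt built only from the pairs named in this section; real cross-walks (AICPA, NIST, ISO) contain thousands of entries plus framework-specific parameters.

```python
# Each framework requirement maps to its NIST 800-53 anchor control.
# This four-row table is a toy excerpt for illustration only.
TO_800_53 = {
    ("SOC2", "CC8.1"): "CM-3",
    ("CMMC", "CM.L2-3.4.3"): "CM-3",
    ("FedRAMP-Moderate", "CM-3"): "CM-3",
    ("ISO27001:2022", "A.8.32"): "CM-3",
}

def derived_requirements(framework: str, criterion: str) -> list[str]:
    """Resolve one criterion to every requirement sharing its 800-53 anchor."""
    anchor = TO_800_53[(framework, criterion)]
    return sorted(
        f"{fw} {req}"
        for (fw, req), ctrl in TO_800_53.items()
        if ctrl == anchor and (fw, req) != (framework, criterion)
    )

# Satisfying CC8.1 advances CMMC, FedRAMP Moderate, and ISO 27001 at once:
print(derived_requirements("SOC2", "CC8.1"))
```

Because every link is a lookup in a published table rather than a judgment call, the same resolution runs identically in both directions: from SOC 2 outward, or from any target framework back to the work already done.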

Rampart maintains the cross-reference engine that resolves these derivation chains through five strategies: native control mapping (direct criterion-to-control relationships published by the framework authority), NIST 800-53 derivation chain tracing (following the path from any framework back through 800-53 to any other framework that derives from it), NIST CSF 2.0 bridging (using the Cybersecurity Framework's function/category/subcategory structure as an intermediary between frameworks that lack direct mappings), published cross-walks from authoritative sources (AICPA for SOC 2, ISO for 27001, NIST for all NIST publications), and AI-suggested mappings that require human confirmation before activation. As you satisfy SOC 2 criteria, Rampart computes your readiness percentage for every other framework in the catalog in the background. The computation is not a summary estimate. It resolves each individual control relationship through the derivation chain and accounts for framework-specific parameter differences (FedRAMP may require the same control as SOC 2 but with a different review frequency or evidence retention period). When you activate a new framework assessment, it arrives pre-populated from your existing SOC 2 work. The marginal effort to add each subsequent framework decreases because the control overlap compounds through the derivation chain. One security posture. Every framework computed.
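The background readiness computation described above can be sketched as a coverage calculation over shared 800-53 anchors. The mapping excerpts and the `readiness` helper below are illustrative assumptions (a real engine would also apply framework-specific parameter differences, which this sketch omits).

```python
# Sketch: given which SOC 2 criteria are satisfied, resolve each CMMC
# requirement through its shared NIST 800-53 anchor and report coverage.
# Both mapping tables are tiny illustrative excerpts.

SOC2_TO_800_53 = {"CC6.1": {"AC-2"}, "CC7.2": {"SI-4"}, "CC8.1": {"CM-3"}}
CMMC_TO_800_53 = {
    "AC.L2-3.1.1": {"AC-2"},
    "CM.L2-3.4.3": {"CM-3"},
    "IR.L2-3.6.1": {"IR-4"},   # no SOC 2 work maps here yet
}

def readiness(satisfied_soc2: set[str]) -> float:
    """Fraction of CMMC requirements whose 800-53 anchors are already covered."""
    covered: set[str] = set()
    for criterion in satisfied_soc2:
        covered |= SOC2_TO_800_53[criterion]
    met = sum(1 for anchors in CMMC_TO_800_53.values() if anchors <= covered)
    return met / len(CMMC_TO_800_53)

print(f"{readiness({'CC6.1', 'CC8.1'}):.0%}")  # two of three requirements met
```

Per-control resolution like this, rather than a summary estimate, is what lets the remaining gap be enumerated requirement by requirement when a new framework assessment is activated.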

Something is being forged.

The full platform is under active development. Reach out to learn more or get early access.