The Risk Management Framework. Authorization Forged from Posture.
RMF Authorization Lifecycle
NIST SP 800-37 Rev 2. Seven steps: Prepare, Categorize, Select, Implement, Assess, Authorize, Monitor. The structured process for integrating security and risk management into the system development lifecycle. Mandatory for federal agencies under FISMA. Used by DoD, the intelligence community, and every organization that serves them. Event-sourced evidence from running infrastructure. Continuous authorization from continuous posture.
RMF Authorization
Authorization decisions backed by evidence from running systems. Not binders.
The Risk Management Framework defines the authorization lifecycle for every federal information system. Most organizations treat it as a documentation exercise: write the System Security Plan, collect evidence once, submit the package, wait for the Authorizing Official's signature. Redoubt Forge treats it as a continuous process. Start with actual defenses. The platform observes your security posture, maps controls to your categorization baseline, computes assessment scores from live data, and maintains the evidence chain that your Authorizing Official and assessors require.
The Risk Management Framework (RMF) is the structured process for integrating security, privacy, and cyber supply chain risk management into the system development lifecycle. Published by NIST as Special Publication 800-37 Rev 2, RMF defines seven steps that every federal information system must complete before receiving an authorization to operate. It is mandatory for all federal agencies under the Federal Information Security Modernization Act (FISMA) of 2014. The Department of Defense, the intelligence community, and every contractor or service provider operating systems on behalf of these organizations must follow RMF. The framework applies to general-purpose computing systems, industrial control systems, weapons platforms, cloud environments, and everything in between. RMF is not optional, not advisory, and not a recommendation. It is the law. FISMA requires agency heads to implement information security programs consistent with NIST standards and guidelines, and RMF is the operational expression of that requirement. Every system that processes, stores, or transmits federal information must be authorized through the RMF process before it can operate.
RMF is not a control set. This distinction is fundamental. RMF is a process that uses control sets. The primary control catalog for RMF is NIST 800-53 rev5, which contains over 1,000 controls organized into 20 families. RMF tells you how to select, implement, assess, and monitor those controls. NIST 800-53 tells you what the controls are. Confusing the two leads to organizations that implement controls without a governing process, or that follow a process without implementing the right controls. Other frameworks relate to RMF in specific ways. CMMC Level 2 uses NIST 800-171, which is itself derived from the NIST 800-53 Moderate baseline. FedRAMP adds FedRAMP-specific requirements on top of the RMF process and NIST 800-53 baselines. FISMA is the law that mandates the use of RMF. These are not competing frameworks. They are layers of the same architecture: FISMA mandates, RMF governs, NIST 800-53 defines, and derivative frameworks like FedRAMP and CMMC apply those controls to specific contexts.
Rev 2 of NIST 800-37, published in December 2018, introduced significant changes. The most notable is the addition of the Prepare step, which establishes organization-level and system-level prerequisites before categorization begins. Rev 2 also emphasized reciprocity: the principle that an authorization decision made by one agency or organization should be accepted by others when the security requirements are equivalent. This reduces the burden of re-authorization when systems move between agency boundaries or when shared services support multiple organizations. The revision replaced the older Certification and Accreditation (C&A) process that had governed federal systems since the early 2000s. C&A treated authorization as a one-time event; RMF treats it as a lifecycle. The shift from C&A to RMF reflected a recognition that point-in-time authorization cannot keep pace with the rate of change in modern computing environments. Systems change continuously. The authorization process must account for that change, not ignore it until re-authorization.
The traditional Authority to Operate (ATO) process takes 12 to 18 months from initiation to authorization decision. That timeline is not a scheduling problem. It is a structural consequence of treating authorization as a documentation project. Organizations assign a compliance team to write the System Security Plan (SSP). The team interviews system owners and engineers to understand what the system does, how it is configured, and what controls are in place. Those interviews happen over weeks. The resulting narratives describe the system as it existed during the interview period. Engineers then collect evidence: screenshots of configurations, exports of access control lists, network diagrams drawn from memory or outdated documentation tools. Evidence collection takes months because it depends on the availability of the engineers who operate the systems, and those engineers have operational responsibilities that take priority over compliance artifact production. By the time the SSP is written and the evidence package is assembled, the system has changed. New services have been deployed. IAM policies have been modified. Network rules have been updated. The authorization package describes a system that no longer exists in the form documented.
Point-in-time authorization is the core structural failure. The Authorizing Official (AO) signs an ATO based on a risk assessment that reflects the system's posture at a specific moment. The next day, the system begins to drift. Configuration changes accumulate. New vulnerabilities are discovered in deployed software. Personnel changes alter who has access and what they can do. The ATO letter remains valid, but the security posture it authorized no longer matches reality. FISMA requires continuous monitoring, but in practice many organizations treat the ATO as a three-year pass that requires minimal attention until re-authorization. The gap between the authorized posture and the actual posture widens every day. When the re-authorization cycle begins, the organization faces the same 12-to-18-month documentation effort. The SSP must be rewritten to reflect the current system. Evidence must be re-collected. The AO must make a new risk decision based on a new point-in-time snapshot that will also begin degrading immediately.
The "ATO factory" anti-pattern compounds these problems at scale. Large federal agencies and contractors operate dozens or hundreds of systems, each requiring its own authorization. Organizations stand up dedicated compliance teams whose sole function is producing authorization packages. These teams cycle through systems sequentially, spending months on each. By the time they finish the last system in the queue, the first system's authorization is approaching expiration and the cycle restarts. The compliance team becomes a bottleneck that constrains the organization's ability to deploy new systems, modify existing ones, or respond to emerging requirements. Mission owners learn to avoid the authorization process because engaging it means months of delay. Shadow IT proliferates as teams deploy systems outside the authorization boundary to avoid the bottleneck. The result is worse security, not better: an organization with a thorough RMF process that takes so long to execute that people route around it. The authorization process intended to manage risk becomes a source of risk because it incentivizes avoidance.
The Prepare step establishes the organizational and system-level context required for all subsequent RMF activities. At the organization level, this includes defining risk management roles (Authorizing Official, System Owner, Information System Security Officer, Security Control Assessor), conducting an organization-wide risk assessment, identifying common controls that apply across multiple systems, and establishing the organization's risk tolerance and authorization strategy. At the system level, preparation involves defining the authorization boundary, identifying stakeholders, performing an initial risk assessment specific to the system, and determining which common controls the system will inherit from the organization. The Prepare step was added in Rev 2 specifically because organizations were skipping directly to categorization without establishing these prerequisites. The consequence was repeated, redundant work: every system team independently defined roles, assessed organizational risk, and implemented controls that should have been defined once and inherited.
Common control inheritance is the primary value of thorough preparation. A common control is a security control that is implemented once and inherited by multiple systems. Centralized identity providers, organization-wide security awareness training programs, physical security controls for data centers, incident response procedures, and personnel security policies are all examples of common controls. When an organization identifies and documents these controls at the Prepare step, every system that inherits them starts the RMF process with a significant portion of its control baseline already satisfied. Organizations that skip preparation force every system team to independently address controls that should be common. Access control policies are written separately for each system. Security awareness training is documented independently in each SSP. Incident response procedures are described differently in every authorization package. The result is inconsistency, redundancy, and wasted effort. Preparation is not overhead. It is the investment that reduces the marginal cost of every subsequent authorization.
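The arithmetic of inheritance is simple set subtraction. A minimal sketch, with hypothetical control IDs and an assumed organization-level common control set:

```python
# Illustrative common-control inheritance: controls documented once at the
# organization level are subtracted from each system's remaining work.
# Control IDs and the common-control set are hypothetical examples.
COMMON_CONTROLS = {"AT-2", "IR-8", "PE-3", "PS-3"}  # inherited from the org

def remaining_work(baseline):
    """Split a system's control baseline into inherited vs. system-specific."""
    inherited = baseline & COMMON_CONTROLS
    return {
        "inherited": sorted(inherited),
        "system_specific": sorted(baseline - inherited),
    }

plan = remaining_work({"AC-2", "AT-2", "AU-6", "IR-8", "SC-7"})
# Inherited controls (AT-2, IR-8) arrive pre-satisfied; the system team
# focuses only on the system-specific remainder.
```

Every control promoted to the common set shrinks the system-specific remainder for all current and future systems, which is the compounding effect described above.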
Sentinel discovers the environment by connecting to your infrastructure accounts and enumerating every resource, configuration, network path, and service dependency. This discovery data provides the factual foundation for the Prepare step: what assets exist, how they are configured, and how they relate to each other. Rampart captures organization-level policies, common control definitions, and risk management role assignments. Artificer guides the preparation process by asking targeted questions about organizational structure, risk tolerance, authorization strategy, and common control candidates. Artificer identifies controls that appear in multiple system assessments and recommends promoting them to common controls. When a new system enters the RMF process, it inherits organization-level common controls from Rampart automatically. The preparation work compounds: the first system requires the most effort because it establishes the organizational foundation. Each subsequent system inherits that foundation and focuses only on system-specific controls. The marginal cost of authorization decreases with every system added.
System categorization determines the security baseline. It is governed by FIPS 199 (Standards for Security Categorization of Federal Information and Information Systems) and NIST SP 800-60 (Guide for Mapping Types of Information and Information Systems to Security Categories). The process requires identifying every information type the system processes, stores, or transmits, then assigning impact levels for three security objectives: Confidentiality, Integrity, and Availability. Each objective receives a rating of Low, Moderate, or High. The system's overall categorization is the high-water mark across all information types and all three objectives. A system that processes one information type at Moderate confidentiality and another at High integrity receives a categorization that reflects High for integrity. That categorization determines which NIST 800-53 baseline the system must implement: Low, Moderate, or High. The difference is substantial. The Low baseline includes approximately 130 controls. The Moderate baseline includes approximately 325 controls. The High baseline includes approximately 420 controls.
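The high-water mark rule can be sketched in a few lines. This is an illustrative computation, not platform code; the information types and impact levels are hypothetical:

```python
# Illustrative FIPS 199 high-water mark computation. Each information type
# carries confidentiality/integrity/availability impact levels; the system
# categorization takes the maximum per objective, and the baseline is driven
# by the highest impact across all three objectives.
LEVELS = {"LOW": 0, "MODERATE": 1, "HIGH": 2}

def categorize(info_types):
    """info_types: list of dicts mapping each objective to an impact level."""
    result = {}
    for objective in ("confidentiality", "integrity", "availability"):
        worst = max(info_types, key=lambda t: LEVELS[t[objective]])[objective]
        result[objective] = worst
    return result

def overall(categorization):
    # The applicable NIST 800-53 baseline follows the highest objective.
    return max(categorization.values(), key=lambda lvl: LEVELS[lvl])

system = categorize([
    {"confidentiality": "MODERATE", "integrity": "LOW", "availability": "LOW"},
    {"confidentiality": "LOW", "integrity": "HIGH", "availability": "MODERATE"},
])
# system is Moderate-C / High-I / Moderate-A; overall(system) selects the
# High baseline, exactly the high-water mark behavior described above.
```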
Categorization errors propagate through the entire RMF lifecycle. Over-categorization occurs when organizations default everything to High because it feels safer. The consequence is implementing and assessing hundreds of controls that are not warranted by the actual risk profile, consuming resources that could be directed toward genuine security improvements. A system that legitimately categorizes as Moderate but is treated as High must implement roughly 100 additional controls, each requiring implementation, documentation, evidence collection, and assessment. Under-categorization is more dangerous. It leaves the system with an inadequate control baseline, creating security gaps that the authorization process was designed to prevent. A system handling personally identifiable information that categorizes at Low instead of Moderate misses critical controls for encryption, access management, and audit logging. Common mistakes include failing to identify all information types (overlooking PII embedded in log files, for example), not considering information types from connected external systems, and categorizing based on the system's function rather than the data it handles.
Rampart structures the categorization process around FIPS 199 and NIST 800-60. Artificer guides system owners through categorization by asking targeted questions about information types: What data does this system process? Does it handle PII? Does it process financial information? What contracts or mission areas does it support? What external systems send data to or receive data from this system? Artificer adapts its questions based on previous answers and on what Sentinel has discovered about the system's data flows. When Sentinel identifies data patterns that suggest information types not captured in the categorization (network traffic to systems handling classified information, storage objects containing PII indicators, database schemas with financial data structures), the platform flags potential categorization gaps. The resulting categorization is documented with full rationale for each information type and impact level assignment. When the categorization is finalized, Rampart automatically derives the appropriate NIST 800-53 baseline and presents the full control set for the next step.
Control selection begins with the NIST 800-53 rev5 baseline determined by the system's categorization. The Low, Moderate, and High baselines are defined in NIST SP 800-53B and represent the minimum set of controls appropriate for each impact level. Selection is not a one-to-one adoption of the baseline. It requires tailoring: a deliberate process of adjusting the baseline to the system's specific context. Tailoring includes scoping considerations (removing controls that are not applicable to the system's technology or operational environment), selecting compensating controls (substituting equivalent protections when a baseline control cannot be implemented as specified), and assigning organization-defined parameters (filling in the variable values that many 800-53 controls leave to the implementing organization, such as "review access privileges [Assignment: organization-defined frequency]"). Tailoring decisions must be documented and justified. An assessor will evaluate whether each tailoring decision is reasonable and whether the resulting control set provides adequate protection for the system's categorization level.
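A tailoring decision is, at minimum, a control, an action, and a documented rationale. A minimal sketch of such a record, with assumed field names and hypothetical controls:

```python
# Illustrative tailoring record: every baseline adjustment carries a rationale
# an assessor can review. Field names and example decisions are assumptions,
# not a prescribed schema.
from dataclasses import dataclass, field

@dataclass
class TailoringDecision:
    control_id: str
    action: str        # "scope_out" | "compensate" | "set_parameter"
    rationale: str
    parameter: dict = field(default_factory=dict)

decisions = [
    TailoringDecision(
        "PE-3", "scope_out",
        "Physical access is a common control inherited from the facility provider."),
    TailoringDecision(
        "AC-2", "set_parameter",
        "Quarterly review matches the organization's risk tolerance.",
        {"review_frequency": "90 days"}),
]
```

The point of the structure is that no adjustment exists without its justification; a scoped-out control with an empty rationale is an assessment finding waiting to happen.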
Overlays extend or modify the tailored baseline to address specific operational contexts. DISA Security Requirements Guides (SRGs) add DoD-specific requirements for general-purpose operating systems, application security, network devices, web servers, databases, and container platforms. DISA Security Technical Implementation Guides (STIGs) provide granular, implementation-specific configuration requirements derived from SRGs. CIS Benchmarks offer consensus-based hardening guidance for operating systems, cloud platforms, containers, databases, and web servers. Organizations operating in DoD environments typically apply multiple overlays: the baseline from NIST 800-53, SRG requirements for their technology stack, and STIGs for each specific product. The overlay stack determines the complete set of security requirements the system must satisfy. Each overlay adds controls, modifies existing controls, or tightens parameters. The layering is additive; overlays never reduce the baseline.
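The additive layering can be sketched as a merge that only adds controls or overrides parameters, never removes. Control IDs and parameter values here are hypothetical, and a real overlay engine would also verify that each override is at least as strict as the value it replaces:

```python
# Illustrative overlay stacking: later overlays add controls or tighten
# parameters on top of the tailored baseline; nothing is ever removed.
def apply_overlays(baseline, overlays):
    """baseline: {control_id: params}; overlays: ordered list of the same shape."""
    effective = {cid: dict(params) for cid, params in baseline.items()}
    for overlay in overlays:
        for cid, params in overlay.items():
            merged = effective.setdefault(cid, {})  # new control: additive
            merged.update(params)                   # existing control: override params
    return effective

baseline = {"AC-2": {"review_frequency_days": 180}, "SC-7": {}}
stig = {"AC-2": {"review_frequency_days": 90}, "AC-12": {"idle_timeout_min": 15}}
result = apply_overlays(baseline, [stig])
# AC-2 tightened from 180 to 90 days, AC-12 added by the STIG, SC-7 untouched.
```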
Rampart derives the baseline automatically from the system's categorization, presenting the full NIST 800-53 rev5 control set with all applicable parameters. Artificer recommends tailoring decisions based on the system's technology profile, operational environment, and what Sentinel has discovered about the deployed infrastructure. When the system runs on a specific operating system, Artificer recommends the corresponding STIG overlay. When the system operates in a DoD environment, Artificer recommends applicable SRGs. Tailoring decisions are captured in Rampart with full rationale: why a control was scoped out, what compensating control replaces it, what value was assigned to each organization-defined parameter, and which overlay requires each addition. The resulting control set is the complete, tailored, overlay-augmented baseline that governs implementation, assessment, and monitoring for the life of the authorization. Every tailoring decision is traceable from the final control set back through the overlay stack to the original NIST 800-53 baseline.
Implementation translates the selected control set into operational security measures deployed across the system. Controls span four categories. Technical controls are implemented by the system itself: access control enforcement, encryption, audit logging, intrusion detection, session management, and network segmentation. Operational controls are implemented by people and processes: incident response procedures, change management workflows, contingency planning, physical security measures, and personnel security screening. Management controls govern the security program: risk assessment processes, security planning, program management, and authorization procedures. Common controls inherited from the organization satisfy a portion of the baseline without system-specific implementation. The implementation step is where most of the actual security work happens. Writing an SSP narrative that says "the organization limits information system access to authorized users" is not implementation. Configuring the identity provider, deploying role-based access control policies, implementing multi-factor authentication, and establishing access review processes is implementation.
Documentation of the implementation is as important as the implementation itself. For each control, the System Security Plan must describe how the control is implemented, what technology or process satisfies the requirement, what the specific configuration parameters are, who is responsible for operating and maintaining the control, and what evidence demonstrates that the control is functioning as intended. Incomplete or generic documentation is a common assessment finding. An SSP that says "encryption is used to protect data in transit and at rest" without specifying the encryption algorithms, key lengths, key management procedures, certificate authorities, and specific system components where encryption is applied will not satisfy an assessor. The documentation must be specific enough that an assessor can verify the described implementation against the actual system. When the documentation says AES-256 encryption is applied to all storage volumes, the assessor will check every storage volume. Discrepancies between documentation and reality are findings.
Armory provides hardened infrastructure-as-code modules that implement technical controls from the first deployment. Encryption modules that satisfy SC-family controls by configuring storage-level and transit-level encryption with key management policies meeting NIST requirements. Logging modules that satisfy AU-family controls by deploying centralized log collection with tamper-evident storage, retention policies, and alerting on audit processing failures. Network segmentation modules that satisfy SC-7 (Boundary Protection) by establishing network boundaries with enforced traffic inspection and deny-by-default ingress rules. Vanguard scans the implementation against STIG and CIS Benchmark requirements, identifying configuration deviations before the assessment phase begins. Sentinel monitors the implementation continuously, detecting when deployed controls drift from their documented configuration. The infrastructure IS the evidence. Deploy the module, and the control is satisfied by design. Sentinel observes the deployed state and collects evidence that the control remains operational. Rampart captures implementation descriptions for each control, with Artificer generating SSP narratives from the observed infrastructure state rather than from manual interviews.
Assessment determines whether selected controls are implemented correctly, operating as intended, and producing the desired outcome with respect to meeting security and privacy requirements. The assessment begins with a Security Assessment Plan (SAP) that defines the assessment scope, methodology, assessment procedures for each control, and the schedule. The SAP specifies which controls will be assessed through examination (reviewing documentation, configurations, and artifacts), interview (discussing implementation with responsible personnel), and testing (actively exercising controls to verify their behavior). Most controls require a combination of all three methods. An access control that is documented in the SSP (examination), explained by the system administrator (interview), and verified through attempted unauthorized access (testing) receives a more thorough evaluation than one assessed through documentation review alone. The SAP is not a formality. It establishes the rigor and coverage of the assessment, and the resulting Security Assessment Report (SAR) is only as credible as the plan that governed the assessment.
The critical question for each control is not "is it implemented?" but "is it operating effectively?" A firewall rule that blocks unauthorized traffic is implemented. A firewall rule that blocks unauthorized traffic, generates alerts when violations are attempted, and feeds those alerts into an incident response workflow that produces documented responses is operating effectively. The distinction matters because controls that are implemented but not operating effectively create a false sense of security. They appear in the SSP, they show up in configuration exports, but they do not actually protect the system. Assessment must verify the complete chain: the control exists, it is configured correctly, it is producing the intended security behavior, and the organization can demonstrate that behavior through evidence. When a control is found to be not operating effectively, the assessor documents a finding in the SAR. Findings describe the gap, its potential impact, and the assessor's recommendation. The collection of findings in the SAR forms the basis for the authorization decision.
Rampart computes per-control assessment scores across three dimensions: defense effectiveness (is the control working in the environment), evidence coverage (what artifacts prove it), and evidence freshness (how current is the proof). These dimensions map directly to the assessment methods defined in NIST SP 800-53A: examination, interview, and testing. Rampart generates assessment packages organized by control family: all Access Control (AC) controls with their implementation narratives, evidence cross-references, and scoring status in one view. All Audit and Accountability (AU) controls in another. Artificer generates assessment narratives from observed infrastructure state, referencing specific components, configurations, and evidence artifacts rather than generic descriptions. When assessment reveals findings, Rampart captures them with severity, affected controls, and recommended remediation. The assessment is not a point-in-time event in the platform. It is a continuous computation that updates as new evidence arrives, as configurations change, and as the system evolves.
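One way to combine the three dimensions into a single per-control score, as an illustrative sketch only; the weights and the linear freshness decay are assumptions, not Rampart's actual scoring model:

```python
# Illustrative per-control score across the three dimensions described above:
# defense effectiveness, evidence coverage, and evidence freshness.
from datetime import datetime, timedelta, timezone

def freshness(collected_at, max_age_days=30):
    """Linear decay from 1.0 (just collected) to 0.0 (at or past max age)."""
    age = (datetime.now(timezone.utc) - collected_at).days
    return max(0.0, 1.0 - age / max_age_days)

def control_score(effectiveness, evidence):
    """effectiveness: 0..1; evidence: list of (satisfied_fraction, collected_at)."""
    if not evidence:
        return 0.0  # no proof, no score, regardless of claimed effectiveness
    coverage = sum(frac for frac, _ in evidence) / len(evidence)
    fresh = sum(freshness(ts) for _, ts in evidence) / len(evidence)
    return round(effectiveness * 0.5 + coverage * 0.3 + fresh * 0.2, 3)

now = datetime.now(timezone.utc)
score = control_score(0.9, [(1.0, now), (0.8, now - timedelta(days=15))])
```

The structural point survives any choice of weights: a control with strong effectiveness but stale or thin evidence scores lower than one whose proof is current, which is what makes the score a continuous computation rather than a point-in-time grade.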
Authorization is a risk decision made by the Authorizing Official (AO): a senior leader with the authority to accept residual risk on behalf of the organization. The AO reviews the authorization package, which includes the System Security Plan, the Security Assessment Report, and the Plan of Action and Milestones (POA&M) for any open findings. Based on this review, the AO issues one of three decisions. An Authorization to Operate (ATO) means the system is authorized to operate with the identified residual risk accepted. A Denial of Authorization to Operate (DATO) means the risk is unacceptable and the system must not operate until remediation brings the risk to an acceptable level. An Interim Authorization to Test (IATT) grants limited authorization for testing or evaluation purposes under specific constraints and time limits. The ATO is not a rubber stamp. It is a personal acceptance of risk by a named individual. The AO is accountable for that decision and its consequences.
The AO needs three things to make an informed authorization decision. First: a clear picture of the system's security posture, including which controls are satisfied, which are partially satisfied, and which have open findings. Second: evidence that satisfied controls are actually operating effectively, not just documented. Third: a credible POA&M for residual risk that identifies each open finding, the planned remediation, the responsible party, the resources allocated, and the target completion date. The POA&M is not a wish list. It is a commitment to close identified gaps within specific timeframes. An AO who authorizes a system with 15 open findings and a POA&M that lists "TBD" for remediation actions and completion dates is not managing risk. Organizations that present vague POA&Ms force the AO into an impossible position: accept undefined risk or deny authorization and delay the mission. The quality of the authorization package determines whether the AO can make a defensible decision.
The platform delivers the complete authorization package from live assessment data. Rampart produces the SSP with implementation narratives generated by Artificer from observed infrastructure state. The SAR is compiled from Rampart's continuous assessment scores, with findings documented, severity-rated, and linked to affected controls. The POA&M is maintained in Rampart with remediation plans, assigned owners, resource allocations, and target dates for each open finding. Alliance grants assessors and the AO time-bound, read-only access to the complete authorization package. The AO navigates controls, evidence chains, findings, and POA&M status independently. Every action within Alliance is logged, creating a chain of custody for the authorization review itself. OSCAL export produces machine-readable authorization packages in the format specified by NIST for organizations that accept structured data submissions. The AO receives verifiable proof from running systems, organized by control family, with immutable evidence chains and complete provenance. Not a binder assembled the week before the authorization decision.
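OSCAL packages are machine-readable JSON, XML, or YAML. The following is a heavily abridged sketch of the general shape of an OSCAL-style SSP export, not a schema-valid document; the NIST OSCAL models define the many required fields and identifiers omitted here:

```python
# Heavily abridged sketch of an OSCAL-style SSP export serialized from
# observed state. NOT schema-valid OSCAL; for illustration of the shape only.
import json
import uuid

ssp = {
    "system-security-plan": {
        "uuid": str(uuid.uuid4()),
        "metadata": {"title": "Example System SSP", "version": "1.0"},
        "control-implementation": {
            "implemented-requirements": [
                {
                    # Narrative generated from observed infrastructure state,
                    # not from interviews.
                    "control-id": "ac-2",
                    "description": "Account management enforced via the centralized IdP.",
                },
            ],
        },
    },
}
package = json.dumps(ssp, indent=2)  # the machine-readable submission artifact
```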
The Monitor step is where RMF transitions from a project to a lifecycle. Post-authorization, the organization must continuously assess a subset of controls on an ongoing basis, maintain the currency of the authorization package as the system changes, report security status to the AO and other stakeholders, and manage changes to the system through a process that evaluates security impact. NIST SP 800-137 (Information Security Continuous Monitoring for Federal Information Systems and Organizations) provides the guidance for establishing a continuous monitoring program. The program must define monitoring frequencies for different control families, establish metrics and thresholds that trigger re-assessment or re-authorization, and maintain evidence freshness across the entire control baseline. Continuous monitoring is not continuous scanning. Scanning is one input. Monitoring encompasses configuration management, vulnerability management, incident response, change control, and ongoing assessment of control effectiveness. It is the mechanism that keeps the authorization decision relevant as the system evolves.
The gap between point-in-time ATO and continuous reality is where most organizations fail the Monitor step. Authorization packages are filed and not updated. SSP narratives describe the system as it was authorized, not as it currently operates. Evidence goes stale because no process exists to refresh it. Configuration changes accumulate without security impact analysis. Personnel changes alter access privileges without updating access control documentation. New services are deployed within the authorization boundary without updating the system description. Vulnerability scans produce findings that are not mapped to affected controls or tracked through remediation. The authorization package becomes a historical artifact rather than a living document. When the next assessment cycle arrives, or when an incident triggers an ad hoc review, the organization discovers that the authorized posture and the actual posture have diverged significantly. Re-authorization becomes a full restart rather than an incremental update, and the 12-to-18-month cycle begins again.
Sentinel maintains continuous posture monitoring across every connected infrastructure source. When a configuration changes on a monitored resource, Sentinel evaluates the security impact immediately: which controls does this resource support, and does the new configuration still satisfy them? Drift detection fires in real time. Evidence freshness automation prevents gaps from forming between assessment cycles: when evidence approaches its expiration threshold, Sentinel re-collects from continuous sources automatically. For evidence that requires human action (annual policy reviews, periodic access certifications, management authorization renewals), the platform escalates through notifications with increasing urgency. Rampart tracks ongoing authorization status, maintaining the SSP, SAR, and POA&M as living documents that reflect the current system state. When changes trigger a threshold that warrants AO review, Rampart surfaces the change with its security impact analysis. Artificer monitors posture trends and surfaces degradation before it becomes an assessment finding. The authorization package stays current. The next assessment starts from demonstrated continuous compliance, not a cold start.
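The freshness sweep described above can be sketched as follows; the thresholds, the Evidence shape, and the three action types are assumptions for illustration, not the platform's actual logic:

```python
# Illustrative freshness sweep: automated evidence is re-collected as it nears
# expiration; evidence requiring human action escalates with increasing urgency.
from dataclasses import dataclass

@dataclass
class Evidence:
    control_id: str
    automated: bool     # can the platform re-collect this without a human?
    age_days: int
    max_age_days: int

def sweep(items, warn_at=0.8):
    """Return (action, control_id) pairs for evidence nearing or past expiry."""
    actions = []
    for e in items:
        used = e.age_days / e.max_age_days
        if used < warn_at:
            continue                                   # still fresh enough
        if e.automated:
            actions.append(("recollect", e.control_id))  # refresh from source
        elif used < 1.0:
            actions.append(("notify", e.control_id))     # human action due soon
        else:
            actions.append(("escalate", e.control_id))   # expired, urgent
    return actions

actions = sweep([
    Evidence("AU-6", automated=True, age_days=25, max_age_days=30),
    Evidence("AC-2", automated=False, age_days=340, max_age_days=365),
    Evidence("AT-2", automated=False, age_days=400, max_age_days=365),
])
```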
RMF sits at the center of the federal compliance ecosystem because it is the process that governs how every other framework is applied. NIST 800-53 rev5 is the control catalog that RMF uses. It defines the controls; RMF defines how to select, implement, assess, authorize, and monitor them. FedRAMP is the RMF process applied to cloud service providers, with additional FedRAMP-specific requirements for continuous monitoring, incident response timelines, and evidence submission to the FedRAMP PMO. A FedRAMP authorization IS an RMF authorization with additional constraints. CMMC uses a different assessment methodology (C3PAO instead of government assessors) but the controls trace back to NIST 800-171, which derives from the NIST 800-53 Moderate baseline. An organization that completes the RMF process for a Moderate system has already satisfied the control foundation for CMMC Level 2. FISMA is the law that mandates the entire structure: it requires federal agencies to implement information security programs, and RMF is the operational process that satisfies that legal requirement.
Reciprocity is the principle that an authorization decision made by one organization should be accepted by others when the security requirements are equivalent. Rev 2 of NIST 800-37 emphasized reciprocity as a mechanism to reduce redundant authorization effort. In practice, reciprocity means that a cloud service provider with a FedRAMP ATO should not need to undergo a separate, equivalent authorization for each agency that uses it. An organization that achieves an ATO from one DoD component should be able to present that authorization to another component without starting from scratch. Reciprocity reduces the aggregate cost of authorization across the federal enterprise. It also incentivizes thorough, well-documented authorizations: a package that is clear, complete, and evidence-backed is more likely to be accepted by a reciprocal organization than one that requires extensive clarification. The quality of the original authorization package determines whether reciprocity delivers its intended efficiency.
Rampart maintains the cross-reference engine that resolves relationships between RMF and every connected framework through five mapping strategies. Native control mapping uses direct relationships published by the framework authority. NIST 800-53 derivation chain tracing follows the path from any framework back through 800-53 to any other framework that derives from it. NIST CSF 2.0 bridging uses the Cybersecurity Framework's function, category, and subcategory structure as an intermediary between frameworks that lack direct mappings. Published cross-walks from authoritative sources (AICPA for SOC 2, ISO for ISO 27001, NIST for all NIST publications) provide verified relationships. Artificer-suggested mappings identify potential relationships that require human confirmation before activation. As you satisfy controls through the RMF process, Rampart computes your readiness percentage for FedRAMP, CMMC, SOC 2, ISO 27001, and every other framework in the catalog. The computation resolves each individual control relationship through the derivation chain and accounts for framework-specific parameter differences. One RMF authorization. Every derived framework advanced. The marginal effort to add each subsequent framework decreases because the control overlap compounds through the shared NIST 800-53 foundation.
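Derivation-chain tracing is, at bottom, a graph walk over published mappings with NIST 800-53 (and CSF) as the hubs. An illustrative sketch with hypothetical mapping edges, not the platform's actual mapping data:

```python
# Illustrative derivation-chain resolution: frameworks connect through
# published mappings; resolving 800-171 -> ISO 27001 traverses the shared
# NIST 800-53 and CSF hubs. Edges below are hypothetical examples.
from collections import deque

# mapping: control -> set of equivalent/derived controls
MAPPINGS = {
    "800-171:3.1.1": {"800-53:AC-2", "800-53:AC-3"},
    "800-53:AC-2": {"CSF:PR.AA-01"},
    "CSF:PR.AA-01": {"ISO27001:A.5.16"},
}

def resolve(source, target_prefix):
    """Breadth-first walk from a source control to controls in a target framework."""
    seen, queue, hits = {source}, deque([source]), set()
    while queue:
        node = queue.popleft()
        if node.startswith(target_prefix):
            hits.add(node)       # reached the target framework; stop this path
            continue
        for nxt in MAPPINGS.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return hits

resolve("800-171:3.1.1", "ISO27001:")  # traverses AC-2 -> PR.AA-01 -> A.5.16
```

Readiness percentages fall out of the same walk: resolve every satisfied control into the target framework, then compare the hit set against that framework's full catalog.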
Something is being forged.
The full platform is under active development. Reach out to learn more or get early access.