The 18-Month ATO. A Process Failure, Not a Security Requirement.
Authorization to Operate
The Risk Management Framework defines seven steps. None of them requires 18 months. The timeline balloons because evidence goes stale during the process, forcing teams to re-collect what they already gathered. Living authorization packages built from continuous evidence reduce ATO timelines from years to months.
ATO Bottleneck
Authorization timelines are a process problem. Not a security problem.
Federal systems require an Authorization to Operate before they can process government data. The RMF process that produces that authorization is well defined: prepare, categorize, select, implement, assess, authorize, monitor. Organizations that follow this process with manual evidence collection and static documentation consistently measure ATO timelines in years, not months. The bottleneck is not security rigor. It is the gap between how fast infrastructure changes and how slowly documentation keeps up.
The Risk Management Framework (RMF) as defined in NIST 800-37 rev2 prescribes seven steps. Prepare the organization and the system to manage security and privacy risk, establishing roles, risk tolerance, and common controls. Categorize the information system based on the impact of a breach. Select a baseline of security controls from NIST 800-53 rev5 appropriate to that categorization. Implement the selected controls across the system's infrastructure, applications, and operational processes. Assess the implementation to verify each control operates as intended. Authorize the system by having a senior official accept the residual risk documented in the assessment. Monitor the system continuously to ensure the authorized security posture persists. Each step has clear inputs and outputs. None of them inherently requires months of calendar time. The categorization step takes days when the system's data types and mission impact are well understood. Control selection is deterministic once the categorization is established. Implementation timelines depend on the system's maturity, but the assessment and authorization steps should be measured in weeks, not quarters.
The timeline expands because of what happens between the steps. Evidence assembly is the primary bottleneck. For a moderate-impact system, the NIST 800-53 rev5 baseline includes over 300 controls. Each control requires evidence that the control is implemented, operating effectively, and producing the intended security outcome. That evidence must be collected from the running system, organized by control family, cross-referenced to the System Security Plan narrative, and packaged for the assessor. In a manual process, engineers take screenshots of configurations, export log samples, compile access control lists, and write narrative descriptions of how each control is satisfied. This collection cycle takes months. By the time the last control family is documented, the evidence collected for the first family is already stale. The infrastructure has changed. New resources have been deployed. Security group rules have been modified. Access control policies have been updated. The evidence package describes a system that no longer matches the running environment.
Scheduling compounds the delay. Independent assessors have limited availability. Authorizing Officials schedule authorization decisions around competing priorities. The assessment phase itself spans weeks as assessors review documentation, interview system administrators, and validate evidence against the running environment. When the assessor identifies discrepancies between the documented controls and the actual implementation, the assessment pauses while the system owner remediates gaps and re-collects evidence. Each remediation cycle adds weeks. Each re-collection cycle risks introducing new staleness into previously verified evidence. The calendar stretches from the planned six months to twelve, then to eighteen. The authorization decision, when it finally arrives, certifies a security posture that may have changed multiple times since the assessment began. The ATO is granted for a snapshot that existed briefly during the assessment window. It may not reflect the system's current state on the day the authorization letter is signed.
Evidence decay is the structural flaw in manual authorization processes. A configuration snapshot collected on January 15 documents the system's state on that date. By February 15, the infrastructure has changed. New compute instances have been provisioned. Storage encryption settings have been modified to accommodate a new data classification requirement. Network access control lists have been updated to support a new integration. IAM roles have been created for a new operations team member. None of these changes invalidated the January evidence at the time they occurred. But collectively, they mean the January evidence no longer describes the February system. The assessor reviewing the package in March is evaluating artifacts that describe a system from two months ago. When they compare those artifacts to the current environment during validation interviews, discrepancies surface. The assessment stalls while the system owner reconciles the documentation with reality.
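The decay described above can be expressed as a simple freshness check. A minimal sketch, assuming a hypothetical EvidenceArtifact type with a per-artifact freshness threshold; the class, its field names, and the 30-day window are illustrative, not any platform's actual model:

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass(frozen=True)
class EvidenceArtifact:
    control_id: str
    collected_at: datetime
    max_age: timedelta  # freshness threshold for this evidence type

    def is_fresh(self, as_of: datetime) -> bool:
        # evidence is current only while its age stays under the threshold
        return as_of - self.collected_at < self.max_age

# the January 15 snapshot from the text, with an assumed 30-day threshold
snapshot = EvidenceArtifact("AC-2", datetime(2025, 1, 15), timedelta(days=30))

print(snapshot.is_fresh(datetime(2025, 2, 1)))   # True: within the window
print(snapshot.is_fresh(datetime(2025, 3, 15)))  # False: stale by assessment time
```

The point of the sketch is that freshness is a function of the query date, not a property stamped on the artifact at collection time: the same artifact passes in February and fails in March with no change to the artifact itself.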
The ATO factory emerged as organizations tried to scale this broken process. Dedicated compliance teams rotate through systems, applying the same manual evidence collection methodology to each one in sequence. The factory model treats authorization as a production line: scope system A, collect evidence for system A, submit system A for assessment, move to system B. The fundamental problem persists at scale. Evidence collected for system A goes stale while the team works on system B. When the assessor returns findings on system A, the compliance team is deep in system B's evidence collection. Context switching between systems introduces errors. Engineers who were interviewed about system A's access controls three months ago may not remember the specific configurations they described. The factory produces volume, but each unit suffers the same quality degradation. Twelve-month timelines per system multiply across a portfolio of ten systems. The organization finds itself in a perpetual authorization cycle where no system is ever fully current and every assessment begins with re-collection.
The cost of this cycle extends beyond labor hours. Contract performance periods begin regardless of authorization status. Systems that cannot obtain their ATO before the contract start date operate under interim authorizations, waivers, or risk acceptance memoranda that carry their own oversight burden. Mission-critical systems that need rapid deployment face a choice between waiting eighteen months for authorization or operating under elevated risk with reduced oversight. Neither outcome serves the mission. The authorization process was designed to manage risk, not to create it. When the process itself becomes the risk, because it cannot keep pace with the systems it governs, the framework is not at fault. The implementation methodology is at fault. The RMF does not prescribe manual evidence collection, static documentation, or quarterly review cycles. Those are implementation choices. Different implementation choices produce different timelines.
The System Security Plan is the authoritative document that describes a system's security architecture, control implementations, and authorization boundary. For a moderate-impact system assessed against the NIST 800-53 rev5 baseline, the SSP typically exceeds 300 pages. Each control family requires narrative descriptions of how the organization satisfies each control, what technical mechanisms enforce the control, what evidence demonstrates the control's effectiveness, and what residual risks remain. These narratives must be specific to the system, not generic boilerplate copied from a template. An assessor who reads the same access control narrative across five different systems in an organization's portfolio will flag the repetition as evidence that the descriptions do not reflect actual implementations. Each system has unique access control mechanisms, unique user populations, unique data flows, and unique technical architectures. The narratives must reflect those distinctions.
Manual narrative authorship is where documentation timelines expand most dramatically. A security engineer who understands the technical implementation of AC-2 (Account Management) for a specific system must translate that understanding into a written narrative that an assessor can evaluate. The engineer describes the identity provider configuration, the role-based access control model, the account provisioning workflow, the access review frequency, and the technical enforcement mechanisms. This narrative takes hours to write correctly for a single control. Multiply that across 300+ controls and the documentation effort consumes months of engineering time. The engineers writing these narratives are the same engineers maintaining the systems. Every hour spent writing documentation is an hour not spent on security operations, incident response, vulnerability management, or system enhancement. The documentation burden competes directly with the operational work that produces the security posture the documentation is supposed to describe. By the time the last narrative is written, the first narratives describe an implementation that has already changed. The SSP is outdated before the assessor receives it.
Artificer generates SSP narratives as parameterized computation from live Rampart assessment data. This is not template-based generation where boilerplate text is populated with variable substitution. Each narrativeGenerator in Artificer declares its requirements (which Rampart data it needs), its patterns (the narrative structure for this control family), and its quality criteria (what constitutes a sufficient narrative for this control type). Artificer consumes 9 dynamic context blocks that include the system description, the authorization boundary, the current control implementation status, the evidence chain, the organizational policies, the infrastructure topology from Garrison, and the scan results from Vanguard. The narrative for AC-2 is computed from the actual identity provider configuration, the actual role assignments, the actual access review records, and the actual provisioning workflow observed by Sentinel. When the implementation changes, the narrative updates. When new evidence is collected, the narrative reflects it. The SSP is a living computation, not a static document. Assessors receive narratives that describe the system as it exists today, backed by the evidence chain that proves each claim.
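A declarative generator of this shape can be sketched as follows. The NarrativeGenerator class, its field names, and the AC-2 example values are illustrative assumptions, not Artificer's actual API; only the requirements/patterns/quality-criteria structure comes from the description above:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class NarrativeGenerator:
    control_id: str
    requires: list                   # which live data fields the narrative is computed from
    pattern: str                     # narrative structure for this control family
    quality: Callable[[str], bool]   # sufficiency check on the rendered text

    def render(self, context: dict) -> str:
        # refuse to render from incomplete live data rather than emit boilerplate
        missing = [k for k in self.requires if k not in context]
        if missing:
            raise ValueError(f"missing live data for {self.control_id}: {missing}")
        text = self.pattern.format(**context)
        if not self.quality(text):
            raise ValueError(f"narrative for {self.control_id} failed quality criteria")
        return text

# illustrative AC-2 generator; a real narrative would draw on many more fields
ac2 = NarrativeGenerator(
    control_id="AC-2",
    requires=["idp", "review_cadence"],
    pattern="Accounts are managed in {idp}; access reviews run {review_cadence}.",
    quality=lambda t: "{" not in t and len(t) > 20,
)

narrative = ac2.render({"idp": "the enterprise IdP", "review_cadence": "quarterly"})
```

Because the narrative is computed from the context rather than stored as text, re-running the generator against updated live data is what keeps the SSP current.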
An evidence package for a moderate-impact system must demonstrate that each of the 300+ selected controls is implemented and operating effectively. Each control requires specific evidence types: configuration artifacts proving the control is deployed, operational records proving the control is functioning, and test results proving the control produces the intended security outcome. For AC-2 (Account Management), the package must include the account management policy, evidence of the provisioning workflow in operation, access review records showing periodic reviews were conducted and acted upon, deprovisioning records showing timely account removal, and audit logs demonstrating enforcement of account restrictions. That is one control. The full package spans every control family: Access Control; Awareness and Training; Audit and Accountability; Assessment, Authorization, and Monitoring; Configuration Management; Contingency Planning; Identification and Authentication; Incident Response; Maintenance; Media Protection; Physical and Environmental Protection; Planning; Personnel Security; Risk Assessment; System and Services Acquisition; System and Communications Protection; System and Information Integrity; and Supply Chain Risk Management.
Manual assembly takes months because every evidence artifact requires human action. An engineer logs into a console, navigates to the relevant configuration page, takes a screenshot, saves it with a filename that identifies the control it supports, and uploads it to the evidence repository. The compliance team reviews the artifact, confirms it addresses the control requirement, and cross-references it in the evidence matrix. For a single control, this process takes 30 minutes to several hours depending on the complexity of the evidence required. Multiply that across 300+ controls and account for the coordination overhead of multiple engineers contributing artifacts across different timelines, and the assembly phase stretches to three to six months. During that period, every artifact already collected is aging. Evidence freshness is not a binary state. It degrades continuously from the moment of collection. An artifact collected on day one of a six-month assembly cycle is six months old when the package is submitted. The assessor will question whether that artifact still reflects the current environment.
Sentinel replaces manual evidence assembly with declarative evidence requirements resolved against discovered infrastructure. Each control in Rampart declares what evidence it needs: the evidence type, the source system, the freshness threshold, and the sufficiency criteria. Sentinel's unified collection engine discovers the connected infrastructure through Garrison and resolves each evidence requirement against the actual resources in scope. For AC-2, Sentinel identifies the identity provider, subscribes to its account lifecycle event stream, and collects provisioning, review, and deprovisioning events as they occur. Each evidence event is stored as an immutable record with a SHA-256 integrity hash and OpenTelemetry trace ID linking it to the source system, collection method, and exact timestamp. Evidence sufficiency is computed declaratively: Rampart evaluates whether the collected evidence meets the freshness threshold, covers the required evidence types, and satisfies the sufficiency criteria for each control. When evidence approaches its expiration threshold, Sentinel's evidence expiration lifecycle triggers re-collection automatically. The evidence package is always current because evidence flows continuously from the running infrastructure, not from periodic human collection cycles.
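The declarative model above can be sketched as a requirement resolved against collected records. The EvidenceRequirement and EvidenceRecord types, their fields, and the example values are assumptions for illustration, not Sentinel's actual schema; only the SHA-256 hashing and the freshness/sufficiency concepts come from the text:

```python
import hashlib
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass(frozen=True)
class EvidenceRequirement:
    control_id: str
    evidence_type: str   # e.g. "account-provisioning-event" (illustrative)
    source: str          # the system the evidence must come from
    max_age: timedelta   # freshness threshold
    min_count: int       # sufficiency criterion: records required

@dataclass(frozen=True)
class EvidenceRecord:
    evidence_type: str
    source: str
    collected_at: datetime
    payload: bytes

    def integrity_hash(self) -> str:
        # immutable record identified by a SHA-256 digest of its payload
        return hashlib.sha256(self.payload).hexdigest()

def is_sufficient(req: EvidenceRequirement, records, as_of: datetime) -> bool:
    # declarative check: right type, right source, fresh enough, enough of them
    matching = [
        r for r in records
        if r.evidence_type == req.evidence_type
        and r.source == req.source
        and as_of - r.collected_at < req.max_age
    ]
    return len(matching) >= req.min_count

req = EvidenceRequirement("AC-2", "account-provisioning-event", "idp",
                          timedelta(days=30), min_count=2)
now = datetime(2025, 3, 1)
records = [
    EvidenceRecord("account-provisioning-event", "idp", datetime(2025, 2, 20), b"created user a"),
    EvidenceRecord("account-provisioning-event", "idp", datetime(2025, 2, 25), b"created user b"),
    EvidenceRecord("account-provisioning-event", "idp", datetime(2024, 12, 1), b"stale event"),
]
print(is_sufficient(req, records, now))  # True: two fresh records meet the minimum
```

Note what expiration-triggered re-collection buys in this model: the same requirement evaluated a month later fails (both fresh records have aged past the threshold), which is exactly the condition that would trigger automatic re-collection rather than a human noticing during assessment.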
Preparing for an assessor requires more than collecting evidence. The evidence must be organized by control family, cross-referenced to the SSP narrative for each control, indexed so the assessor can navigate from any control to its supporting artifacts, and packaged in a format the assessor can review independently. Personnel must be identified for each control family interview. Operational processes must be documented for observation. The authorization boundary must be current and accurately reflected in the system description. POA&M items must be documented with remediation plans and timelines. The quality and completeness of this preparation determines the assessment's pace. Assessors who receive well-organized evidence packages complete their review in weeks. Assessors who receive disorganized artifacts spend weeks requesting clarification before the substantive review begins.
Disorganized evidence packages are the most common cause of extended assessment timelines. Assessors who receive artifacts scattered across shared drives, email threads, and ticket systems spend days reconstructing which evidence supports which control. Narratives that describe intended implementations rather than observed reality create immediate credibility problems: when the assessor interviews the system administrator and the answers do not match the written narrative, every subsequent narrative is treated with skepticism. Missing cross-references between controls and their supporting evidence force the assessor to request clarification for each gap, turning a three-week assessment into a six-week assessment. The cost of that delay falls entirely on the organization. Every hour the assessor spends hunting for evidence is an hour not spent validating the actual security posture. Extended assessments also increase the risk of evidence staleness during the assessment window itself, compounding the very problem the organization was trying to avoid.
Alliance grants the assessor time-bound, read-only access to Rampart. The assessor navigates every control, drills into the evidence chain for each, examines the scoring methodology, and downloads artifacts for their own records independently, without relying on the organization's team to pull artifacts on demand. Every action the assessor takes within Alliance is logged: what they viewed, what they downloaded, when they accessed it, and from which network location. This creates a chain of custody for the assessment itself. Evidence is organized by control family with full provenance: each artifact carries its SHA-256 integrity hash, the source system that produced it, the collection timestamp, and the Sentinel collection policy that governed its acquisition. OSCAL export produces machine-readable assessment packages in the format specified by NIST, enabling assessors who accept structured data submissions to ingest the evidence programmatically. The assessor receives verifiable proof from running systems, organized by control family, with immutable evidence chains and complete provenance metadata. Not a shared drive folder with ambiguous filenames.
Continuous authorization replaces the point-in-time snapshot with a living authorization package that reflects the system's current security posture at all times. The concept is not new. NIST 800-37 rev2 explicitly describes ongoing authorization as an alternative to the traditional three-year reauthorization cycle. FedRAMP has formalized continuous monitoring requirements that feed into ongoing authorization decisions. The Department of Defense's RMF implementation guidance acknowledges that continuous monitoring data should inform authorization decisions on an ongoing basis rather than exclusively at periodic reauthorization boundaries. The framework supports continuous authorization. The challenge has been implementing it. Manual processes cannot produce continuous evidence. Quarterly collection cycles cannot track daily infrastructure changes. Static SSP documents cannot reflect dynamic architectures.
The point-in-time paradox is structural. An ATO authorizes a system to operate based on the risk posture observed during the assessment. The assessment evaluates a specific configuration at a specific point in time. The authorization letter is signed days or weeks after the assessment concludes. Between the assessment and the signature, the system continues to operate. Patches are applied. Configurations are updated. Users are provisioned and deprovisioned. The system authorized on paper may differ from the system running in production on the day the authorization takes effect. A three-year ATO means the authorization decision made in 2024 governs the system through 2027. The system in 2027 will bear little resemblance to the system assessed in 2024. Cloud infrastructure will have been re-architected. Application code will have been rewritten. The operating system versions documented in the SSP will have reached end of life and been replaced. The paradox deepens with every change.
Rampart provides temporal posture queries: PostureFunction(system, framework, timestamp) returns the exact security posture of any system against any framework at any point in time. This is not a cached snapshot. It is a computed projection from the event-sourced evidence store. Every compliance event is recorded as an immutable record with its SHA-256 integrity hash and OpenTelemetry trace ID. The Authorizing Official can query the system's posture today, compare it to the posture at the time of the last authorization decision, and see exactly what changed, when it changed, and how each change affected control satisfaction. Sentinel runs continuous monitoring against the full control baseline, detecting configuration drift, evidence expiration, and posture degradation in real time. Garrison displays the live estate: every resource, every configuration, every relationship within the authorization boundary. When Sentinel detects a change that affects a control, Rampart re-evaluates that control immediately. The authorization package is never stale because it is never static. It is a continuous projection of the system's observed security posture.
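The temporal query can be illustrated as a projection over an append-only event log: replay events in order and keep the latest state at or before the requested timestamp. The ComplianceEvent type and the posture function below are a simplified sketch of the event-sourcing idea, not Rampart's implementation:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass(frozen=True)
class ComplianceEvent:
    control_id: str
    satisfied: bool
    at: datetime

def posture(events, timestamp):
    # project per-control satisfaction at a point in time: replay the log
    # in order, ignoring events after the timestamp
    state = {}
    for e in sorted(events, key=lambda e: e.at):
        if e.at <= timestamp:
            state[e.control_id] = e.satisfied
    return state

log = [
    ComplianceEvent("AC-2", True,  datetime(2025, 1, 10)),
    ComplianceEvent("SC-7", True,  datetime(2025, 1, 12)),
    ComplianceEvent("AC-2", False, datetime(2025, 2, 3)),  # drift detected
    ComplianceEvent("AC-2", True,  datetime(2025, 2, 5)),  # drift remediated
]

# posture during the drift window vs. posture after remediation
assert posture(log, datetime(2025, 2, 4)) == {"AC-2": False, "SC-7": True}
assert posture(log, datetime(2025, 3, 1)) == {"AC-2": True, "SC-7": True}
```

Because the log is immutable and the projection is pure, the posture at any past timestamp is reproducible on demand, which is what lets an Authorizing Official compare today's posture to the posture at the last authorization decision.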
The standard authorization cycle grants an ATO for three years, after which the system must be reauthorized. In theory, continuous monitoring during the authorization period should maintain awareness of the system's evolving risk posture so that reauthorization builds on established knowledge rather than starting from scratch. In practice, most organizations experience reauthorization as a cold start. The SSP was written three years ago and describes an architecture that no longer exists. The evidence package was assembled for the previous assessment and has been aging in a document repository. The infrastructure has changed so extensively that the previous authorization boundary may no longer be accurate. New systems have been integrated. Old systems have been decommissioned. Network topologies have been restructured. The compliance team that prepared the original assessment may have turned over. The reauthorization engagement requires re-discovering the system, re-documenting the architecture, re-collecting all evidence, and re-assessing every control. The three-year cycle does not save work. It defers work.
The cold-start problem is compounded by the annual assessment requirement that many agencies mandate between full reauthorization cycles. These annual assessments are supposed to verify that the security posture documented in the ATO package still reflects the running system. When the SSP has not been maintained, the annual assessment becomes a mini-reauthorization: the assessor discovers discrepancies between the documentation and the current environment, findings are generated, remediation is required, and the evidence package must be updated. Each annual assessment consumes months of preparation time because the documentation drifted from reality throughout the preceding year. The organization spends more total effort on periodic re-assessment than it would on continuous maintenance. The cycle repeats: document, drift, re-document, drift, re-document. Each iteration starts from a larger documentation gap than the previous one.
PostureFunction(system, framework, timestamp) provides continuous readiness that eliminates the cold start entirely. Because Rampart maintains the assessment as a continuous computation from live evidence, the reauthorization engagement begins with a current, verified posture assessment rather than a three-year-old document. The assessor can query the system's posture trajectory over the entire authorization period: how many controls were satisfied at each point in time, when degradations occurred and how they were remediated, and what the current posture is today. Citadel's action queue pre-computes the score impact of every open finding using the formula (score_impact x urgency) / estimated_effort, ensuring that the highest-value remediations are addressed first during the lead-up to reauthorization. The three-year cycle starts from demonstrated continuous compliance rather than from a cold re-discovery. Sentinel's continuous monitoring provides the evidentiary basis for ongoing authorization: every control has been monitored, every drift has been detected and addressed, and every evidence artifact carries its full provenance chain from collection through the current moment.
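The prioritization formula quoted above can be sketched directly. Only the formula itself comes from the text; the finding names and numeric values are invented for illustration:

```python
def remediation_priority(score_impact: float, urgency: float,
                         estimated_effort: float) -> float:
    # the formula from the text: (score_impact x urgency) / estimated_effort
    return (score_impact * urgency) / estimated_effort

# (finding, score_impact, urgency, estimated_effort) -- illustrative values
findings = [
    ("unencrypted bucket",    8.0, 0.9, 2.0),   # priority 3.6
    ("stale access review",   3.0, 0.5, 1.0),   # priority 1.5
    ("missing log retention", 5.0, 0.4, 8.0),   # priority 0.25
]

# highest-value remediation first
queue = sorted(findings, key=lambda f: remediation_priority(*f[1:]), reverse=True)
```

Dividing by estimated effort is what makes this a value ordering rather than a severity ordering: a high-impact finding that takes a day can outrank a slightly higher-impact finding that takes a month.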
The 18-month ATO timeline decomposes into identifiable delays, and each delay has a structural cause that continuous evidence eliminates. Evidence collection accounts for the largest share: three to six months in a manual process, reduced to zero when evidence flows continuously from the running system. SSP authorship accounts for another two to four months: narrative descriptions written by engineers who are simultaneously maintaining the systems they describe. When the SSP is a living document updated automatically as the system evolves, the initial authorship effort decreases and the maintenance burden disappears. Evidence re-collection after assessor findings accounts for one to three months per remediation cycle: re-collecting stale artifacts, re-verifying configurations, re-interviewing engineers. When evidence is continuous and current, there is no re-collection cycle. Assessor scheduling and review time remains constant regardless of methodology, but assessors who receive structured, navigable evidence packages with clear control-to-evidence mappings complete their reviews faster.
The structural improvement is not incremental. It is categorical. The manual process requires sequential phases that cannot overlap: collect evidence, then write narratives, then review the package, then submit for assessment, then remediate findings, then re-collect, then resubmit. Each phase waits for the previous one to complete. The continuous model eliminates the sequential dependency because evidence collection, narrative maintenance, and posture assessment all happen concurrently as ongoing processes rather than discrete phases. The system owner is always assessment-ready because the authorization package is always current. Engaging the assessor does not require a preparation sprint. The package is ready today because it was ready yesterday and it will be ready tomorrow. The assessor's review begins immediately upon engagement because the evidence does not need to be assembled. It already exists in a structured, navigable format with complete provenance metadata and cryptographic integrity verification.
Redoubt Forge compresses the authorization timeline by eliminating the phases that consume calendar time without producing security outcomes. Sentinel's continuous discovery maintains the authorization boundary in real time, with its unified collection engine resolving declarative evidence requirements against the live infrastructure. Garrison's passive inventory eliminates the manual asset enumeration phase. Rampart's per-control scoring across defense effectiveness, evidence coverage, and evidence freshness gives the system owner a precise readiness assessment at any moment, with PostureFunction(system, framework, timestamp) providing posture at any point in the authorization lifecycle. Artificer generates and maintains control narratives as parameterized computation from live assessment data, with each narrativeGenerator declaring its requirements, patterns, and quality criteria across 9 dynamic context blocks. When the assessor engages, Alliance provides time-bound, read-only access to the complete authorization package with every control, every evidence chain, every narrative, and every finding navigable independently. Every event carries its SHA-256 integrity hash and OpenTelemetry trace. The living ATO package replaces the 18-month preparation sprint with a continuously maintained authorization state. From years to months.
Something is being forged.
The full platform is under active development. Reach out to learn more or get early access.