COSAiS. AI Security Controls Forged from 800-53.
COSAiS Overlay (Future)
The Community of Standards for AI Security is developing NIST 800-53 control overlays that address risks unique to AI systems. Model integrity, training data governance, inference pipeline security, and AI supply chain controls mapped directly onto the control catalog through ADD and MODIFY operations. COSAiS is not yet published. Organizations that build their NIST 800-53 and AI RMF posture now accumulate evidence and controls that will directly satisfy COSAiS requirements when they arrive.
COSAiS Overlay
AI-specific security controls. Mapped to the catalog you already assess.
Traditional NIST 800-53 baselines secure information systems. They do not address risks introduced by machine learning models, training pipelines, or autonomous inference engines. COSAiS extends the 800-53 control catalog with AI-specific requirements using the standard overlay model. The result is AI governance that integrates with your existing compliance posture, not a parallel program with separate evidence requirements.
The Community of Standards for AI Security (COSAiS) is a collaborative initiative developing security control overlays that map AI-specific requirements directly onto the NIST 800-53 control catalog. Unlike broad AI governance frameworks that establish principles and risk categories, COSAiS operates at the control level: each overlay entry specifies a concrete ADD or MODIFY operation against a specific 800-53 control, with defined parameters, assessment objectives, and evidence requirements. The working group includes participants from government agencies, defense organizations, industry, and academia who share a common recognition that AI systems deployed within regulated environments need security controls expressed in the language those environments already use. COSAiS does not invent a new compliance structure. It extends the structure that CMMC, FedRAMP, NIST 800-171, and dozens of other frameworks already derive from.
The initiative exists because AI systems introduce risk categories that the current 800-53 catalog was not designed to address. A machine learning model is not a traditional software artifact. Its behavior derives from training data and learned parameters, not deterministic code. Adversarial inputs can cause misclassification without exploiting any traditional vulnerability. Training data poisoning can compromise model integrity without breaching any network perimeter. Model theft through inference API probing can extract intellectual property without unauthorized access to storage. These attack vectors do not map cleanly to existing 800-53 controls. Access control (AC family) does not address model registry permissions. Audit logging (AU family) does not specify inference event capture. System integrity (SI family) does not cover adversarial robustness testing. COSAiS closes these gaps by extending existing controls with AI-specific parameters and introducing new controls where no existing control addresses the risk.
COSAiS is a companion to the NIST AI Risk Management Framework (AI 100-1) and the Generative AI Profile (AI 600-1). Where AI 100-1 establishes the Govern, Map, Measure, and Manage functions for AI risk management, and AI 600-1 identifies risks specific to generative AI systems, COSAiS translates those risk categories into the 800-53 control language that compliance programs consume. An organization assessing its AI posture against AI 100-1 identifies risks. An organization implementing COSAiS overlays implements the specific controls that mitigate those risks within the 800-53 framework it already maintains. The three documents form a coherent stack: AI 100-1 provides the risk management structure, AI 600-1 profiles generative AI risks, and COSAiS maps the resulting security requirements to actionable controls in the NIST catalog. Organizations that invest in any layer of this stack build readiness for the others.
The NIST overlay model provides two operations for extending a security baseline: ADD and MODIFY. An ADD operation introduces a control or control enhancement that does not appear in the base framework selection. A MODIFY operation changes the parameters, implementation guidance, or assessment objectives of a control already present in the baseline. COSAiS uses both operations extensively. For example, the base NIST 800-53 Moderate selection includes AC-2 (Account Management), which requires organizations to manage information system accounts. A COSAiS overlay would MODIFY AC-2 to require account management for model registries, training pipeline orchestration systems, and inference endpoint administration. The control remains AC-2. The overlay specifies what "account management" means when the system includes AI components. Similarly, COSAiS is expected to ADD entirely new controls for model provenance tracking, adversarial robustness testing, and training data lineage that have no equivalent in the current 800-53 catalog.
COSAiS is currently in development and has not yet been published as a final standard. The working group is defining which 800-53 controls require AI-specific modifications and what new controls must be introduced to address risks with no current analog. Expected coverage spans multiple control families: Supply Chain Risk Management (SR family) extended for model supply chain integrity, Access Control (AC family) extended for training pipeline and model registry permissions, System and Information Integrity (SI family) extended for adversarial input detection and model drift monitoring, and Audit and Accountability (AU family) extended for inference event logging and decision traceability. When published, COSAiS will provide the most direct integration path for organizations that already manage compliance through the 800-53 control catalog. The overlay format means adoption does not require learning a new framework structure or establishing a separate evidence collection program.
Overlay composition follows a deterministic stacking order defined by NIST SP 800-53B. The base framework selection establishes the foundation. Privacy overlays add controls for personally identifiable information. Sector overlays add domain-specific requirements. COSAiS then adds and modifies controls for AI-specific risks. When multiple overlays modify the same control, the most restrictive parameter applies. If the base framework requires access reviews every 90 days, and COSAiS requires access reviews for model registries every 30 days, the 30-day requirement governs for AI components while the 90-day requirement continues to apply to non-AI systems. This composability is structural, not aspirational. Organizations that already operate with multiple overlays (privacy, CNSSI 1253, CIS benchmarks) will integrate COSAiS using the same composition mechanics. The AI governance requirements slot into the existing overlay stack without conflict or duplication.
AI model security addresses risks that have no parallel in traditional information system security. A machine learning model is a mathematical artifact derived from training data. Its behavior cannot be fully predicted from inspection of its parameters, and its attack surface differs fundamentally from conventional software. Model integrity concerns whether the model in production is the same model that was validated and approved for deployment. Without cryptographic verification of model weights and architecture files, an attacker who gains write access to a model registry can substitute a compromised model that behaves normally on standard inputs but produces attacker-controlled outputs on specific trigger patterns. Traditional 800-53 controls for software integrity (SI-7) address code and configuration. They do not address the integrity of multi-gigabyte binary weight files that encode learned behavior. COSAiS is expected to extend SI-7 with model-specific integrity verification requirements.
Adversarial robustness addresses the susceptibility of AI models to inputs designed to cause misclassification, evasion, or extraction. An image classifier can be fooled by perturbations invisible to humans. A natural language model can be manipulated through carefully constructed prompts that bypass safety filters. A malware detection model can be evaded by adversarial modifications to malicious payloads. These attacks exploit the statistical nature of model decision boundaries, not traditional software vulnerabilities. Existing 800-53 controls for input validation (SI-10) and information system monitoring (SI-4) do not account for adversarial machine learning techniques. COSAiS is expected to introduce controls that require adversarial testing as part of model validation, monitoring for adversarial input patterns in production, and documentation of known model limitations and failure modes. These controls fill a gap that cannot be addressed by extending traditional security testing alone.
Model access control extends traditional access management to AI-specific resources. Model registries contain trained weights that represent significant intellectual property and operational capability. Training pipelines process sensitive data through compute-intensive workflows that require elevated permissions. Inference endpoints accept inputs and return outputs that may contain or reveal sensitive information. Fine-tuning interfaces allow modification of model behavior by users with appropriate access. Each of these resources requires access control policies that traditional AC-family controls do not specify. Who can read model weights? Who can modify a training pipeline configuration? Who can deploy a new model version to production? Who can access inference logs that contain user inputs? COSAiS is expected to MODIFY AC-2, AC-3, AC-6, and related controls to require explicit access management for these AI-specific resources, with role definitions, least-privilege enforcement, and access review cadences appropriate to the sensitivity of each resource type.
Training data provenance is the foundation of AI model trustworthiness. A model's behavior is a direct consequence of the data it was trained on. If the provenance of that data is unknown, the trustworthiness of the model's outputs cannot be established. Organizations deploying AI systems within regulated environments need to document where training data originated, what transformations were applied, what quality controls governed its selection, and what review processes verified its appropriateness for the intended use case. Traditional 800-53 controls for media protection (MP family) and information handling address data storage and transmission. They do not address the lineage chain from raw data collection through preprocessing, augmentation, and feature engineering to the final training dataset. COSAiS is expected to ADD controls that require documented training data provenance, including source identification, collection methodology, licensing and usage rights verification, and chain-of-custody records from acquisition through model training.
Data poisoning is an attack against the training pipeline itself. An adversary who can inject, modify, or selectively remove training examples can influence model behavior in ways that persist through deployment and are difficult to detect through standard testing. A poisoned model may perform normally on evaluation benchmarks while producing attacker-controlled outputs on inputs that contain a specific trigger pattern. This attack vector is particularly concerning for organizations that use third-party datasets, crowdsourced labels, or web-scraped training corpora where data integrity cannot be independently verified. Existing 800-53 controls for information system integrity (SI family) address unauthorized modifications to system components. They do not address statistical manipulation of training data that produces models with embedded backdoors. COSAiS is expected to introduce controls that require training data integrity verification, anomaly detection in training pipelines, and validation procedures that test for poisoning indicators before model deployment.
Bias in training data creates models that produce discriminatory, inaccurate, or harmful outputs for specific populations or input categories. This risk extends beyond fairness concerns into operational security: a biased threat detection model may systematically fail to identify certain attack patterns. A biased identity verification system may produce unacceptable error rates for specific demographic groups, creating both security vulnerabilities and regulatory exposure. Training data governance must include procedures for evaluating dataset representativeness, identifying and mitigating sources of systematic bias, documenting known limitations, and establishing monitoring for biased outputs in production. COSAiS is expected to address bias through controls that require documentation of dataset composition, evaluation against defined fairness metrics, and ongoing monitoring of model outputs for indicators of systematic bias. These controls connect training data governance to the broader AI supply chain: organizations that consume third-party models or datasets inherit the bias characteristics of those components and need controls to evaluate and manage that inherited risk.
The inference pipeline is the runtime attack surface of an AI system. Every request to an AI model passes through a chain of components: input preprocessing, model execution, output post-processing, and response delivery. Each component introduces security considerations that traditional application security controls do not fully address. Input validation for AI systems must account for adversarial inputs designed to exploit model decision boundaries, prompt injection attacks that manipulate model behavior through crafted text, and data format attacks that target preprocessing vulnerabilities. Standard 800-53 controls for input validation (SI-10) specify that information systems check the validity of inputs. They do not specify validation strategies for inputs that are syntactically valid but semantically adversarial. COSAiS is expected to MODIFY SI-10 to require AI-specific input validation that addresses adversarial perturbations, prompt injection patterns, and inputs designed to extract training data or model parameters through carefully constructed queries.
Output filtering addresses the risk that AI models produce harmful, inaccurate, or sensitive content that should not reach the end user or downstream system. A generative model may produce content that violates organizational policies, reveals information memorized from training data, or contains instructions for harmful activities. A classification model may produce high-confidence predictions on out-of-distribution inputs that should be flagged for human review rather than acted upon automatically. Output filtering must enforce policies appropriate to the deployment context: content safety filters for user-facing applications, confidence thresholds for decision-support systems, and redaction mechanisms for outputs that may contain sensitive information derived from training data. Traditional 800-53 controls for information output handling (SI-12) and media sanitization (MP-6) do not address the probabilistic nature of AI outputs or the need for content-aware filtering at inference time. COSAiS is expected to introduce controls for output policy enforcement, confidence-based routing, and automated detection of outputs that violate defined content boundaries.
Resource management for AI inference addresses computational requirements that differ significantly from traditional application workloads. AI models, particularly large language models and deep neural networks, consume substantial GPU memory, compute cycles, and network bandwidth during inference. An adversary who can submit computationally expensive queries (long context windows, complex generation tasks, repeated high-resolution image processing) can exhaust inference resources and deny service to legitimate users. This denial-of-service vector is specific to AI workloads and is not addressed by traditional rate limiting alone because the computational cost per request varies dramatically based on input characteristics. COSAiS is expected to address resource management through controls that require per-request resource budgeting, anomaly detection for computationally expensive query patterns, isolation between inference workloads of different sensitivity levels, and capacity planning that accounts for the burst characteristics of AI inference traffic. These controls protect the availability and performance of AI systems in production environments.
When COSAiS controls are published, scanning AI system components will require evaluation across technology layers that standard security scanning does not cover. Model serving endpoints will need assessment for authentication enforcement, input validation, rate limiting, and injection resistance. Application code that integrates AI models will require analysis for prompt injection vulnerabilities, insecure deserialization of model outputs, insufficient output sanitization, and missing error handling for model failures. Container images packaging inference services will need evaluation against STIG and CIS benchmark configurations with additional checks for model file integrity, dependency chain security, and GPU driver vulnerabilities. Infrastructure-as-code definitions for training pipelines and inference infrastructure will require scanning for misconfigurations: overly permissive IAM roles on training compute, unencrypted model artifact storage, public-facing inference endpoints without authentication, and missing network segmentation between training and production environments. Each scan result must map to the specific COSAiS control it satisfies to produce assessor-ready evidence.
Continuous monitoring of AI systems for COSAiS compliance will present challenges that differ from traditional infrastructure monitoring. AI systems change faster than conventional workloads: model serving endpoint configurations shift as teams tune performance, model versions are updated in production registries, and pipeline orchestration settings evolve with each training cycle. Each change carries potential compliance impact. Does the new configuration still satisfy access control and monitoring requirements? Was the model version update reviewed against model management controls? Evidence from AI-specific infrastructure components goes stale rapidly. Inference latency and error rates indicating model degradation, access logs for model registries and training data stores, configuration snapshots of pipeline orchestration systems, and resource utilization patterns all require frequent collection. Manual or periodic evidence gathering will leave gaps that assessors identify. Model behavior monitoring may require daily evidence refresh, while training data governance documentation may accept quarterly review cycles. Organizations that build this monitoring infrastructure now establish the operational foundation that COSAiS evidence collection will demand.
The evidence chain for COSAiS requirements will follow the same immutable provenance model used across all overlays in the platform. Every evidence artifact carries a SHA-256 integrity hash, collection timestamp, source system identifier, and the specific control requirement it satisfies. Rampart will map collected evidence to COSAiS controls automatically: a Vanguard scan result showing that a model serving endpoint enforces mutual TLS will map to COSAiS modifications of SC-8 (Transmission Confidentiality and Integrity) with AI-specific parameters. A Sentinel configuration snapshot showing inference event logging will map to COSAiS modifications of AU-2 (Event Logging) with AI event categories. Garrison maintains the passive inventory of all AI system components: model registries, training compute resources, inference endpoints, and data stores containing training datasets. This inventory feeds Sentinel's monitoring scope and ensures every AI component within the system boundary is covered by the evidence collection program. Organizations building this inventory now establish the foundation that COSAiS evidence collection will require.
COSAiS does not exist in isolation. It extends the NIST 800-53 controls that serve as the foundation for CMMC, FedRAMP, NIST 800-171, and every other framework derived from the NIST control catalog. When a COSAiS overlay MODIFYs AC-2 (Account Management) to require access management for model registries and training pipeline credentials, that modification applies within the same AC-2 control that your existing assessments already evaluate. Evidence collected to satisfy the AI-modified AC-2 simultaneously strengthens your base AC-2 posture. When COSAiS ADDs a new control for model integrity verification, the implementation of that control involves capabilities (cryptographic hashing, access-controlled storage, change detection) that reinforce existing controls in the System and Communications Protection (SC) and System and Information Integrity (SI) families. The relationship is additive and reinforcing. COSAiS work does not create a separate compliance silo. It deepens the security posture your existing frameworks already measure.
COSAiS maps directly to the AI RMF functions that AI 100-1 and AI 600-1 define. The Govern function maps to COSAiS controls for AI governance policies, roles, and accountability structures. The Map function maps to controls for AI system characterization, risk identification, and context documentation. The Measure function maps to controls for AI performance monitoring, bias evaluation, and robustness testing. The Manage function maps to controls for risk treatment, incident response for AI failures, and continuous improvement processes. This alignment means that organizations assessing their posture against the AI RMF are identifying the same risks that COSAiS controls will mitigate. Work done under AI 100-1 translates directly to COSAiS readiness. Evidence collected for AI 600-1 generative AI risk categories maps to the COSAiS controls that will address those same categories through the 800-53 structure. The investment compounds across the entire AI governance stack.
The cross-framework benefits extend to CMMC, FedRAMP, and future EU AI Act obligations. A defense contractor building CMMC Level 2 compliance who also deploys AI systems will eventually need COSAiS overlays on their 800-53 baseline. Every 800-53 control satisfied today carries forward when the COSAiS overlay is activated. A cloud service provider pursuing FedRAMP authorization who integrates AI capabilities will need to demonstrate AI-specific security controls to their authorizing official. COSAiS provides those controls in the 800-53 format that FedRAMP already consumes. Organizations preparing for EU AI Act compliance will find that COSAiS controls address many of the same technical requirements: risk management, data governance, logging, transparency, and human oversight. Rampart is designed to integrate COSAiS overlays as soon as they are published. The overlay composition engine already supports ADD and MODIFY operations on any 800-53 baseline. Citadel will display the cross-framework impact of each COSAiS gap, prioritizing remediation actions that deliver the greatest benefit across your entire framework portfolio. The preparation you do now compounds when COSAiS is finalized.
Something is being forged.
The full platform is under active development. Reach out to learn more or get early access.