What Contractors Should Know About Artificial Intelligence in Defense Contracting


AI is now embedded in core defense mission systems, acquisition planning, and contract administration. The legal, compliance, and contractual risks that follow are fast-growing and consequential — capable of derailing performance, generating False Claims Act (FCA) exposure, or disqualifying proposals.

As the Department of Defense (DoD) increases its reliance on AI-enabled capabilities, contractors should understand that “AI contracting” is not a separate category of procurement. Instead, it is a convergence of existing defense procurement rules — cybersecurity, data rights, export controls, supply chain security, and ethics — applied to a technology that is notoriously difficult to define, audit, and control. DoD’s AI and Data Acceleration (ADA) initiative, the Chief Digital and Artificial Intelligence Office’s (CDAO) Responsible AI governance frameworks, and the updated DoD Directive 3000.09 on autonomous weapons are already shaping contract requirements — whether or not specific AI clauses appear in a given solicitation.

Importantly, the risk profile differs significantly depending on whether a contractor is developing AI systems for DoD or using AI tools internally in the performance of non-AI contracts. Both categories face exposure, but the applicable rules, documentation obligations, and disclosure risks are materially distinct. This article addresses both.

AI Is Becoming a Procurement Requirement — Not a Differentiator

Historically, contractors marketed AI as a competitive advantage. Increasingly, DoD solicitations treat AI-related functionality as table stakes, particularly in areas such as Intelligence, Surveillance, and Reconnaissance (ISR); predictive maintenance; autonomous platforms; decision support tools; and cyber defense.

But as AI becomes routine, contracting officers and evaluators are asking more sophisticated questions, including:

  • What data will the model be trained on?
  • Is the model explainable, auditable, or testable?
  • How is bias managed and documented?
  • How will the contractor protect training data and outputs?
  • Can the contractor guarantee the system will not “hallucinate” in mission-critical contexts?
  • What happens when the AI model changes post-award — through retraining, fine-tuning, or version updates — and do such changes constitute an out-of-scope modification, a data rights event, or simply acceptable software maintenance?

These questions are not merely technical. They directly implicate contract performance risk, warranty exposure, and compliance obligations.

AI Raises Unique Risks Under Traditional Procurement Rules

Even though AI is relatively new, the legal frameworks governing defense procurement are not. Contractors should assume that existing rules will be applied aggressively to AI development and deployment — often in ways that create traps for the unwary.

False Claims Act Exposure from Overpromising AI Capabilities

AI marketing often relies on broad statements about performance, automation, accuracy, and autonomy. In the commercial world, puffery is common. In defense contracting, however, capability representations in proposals, white papers, and progress reports can become the basis for FCA allegations — especially when payment depends on performance milestones.

Common FCA risk areas include:

  • Overstating model accuracy or readiness levels;
  • Implying the model has been validated in environments where it has not;
  • Failing to disclose known limitations or reliability issues;
  • Misrepresenting the origin or quality of training data; and
  • Billing for “AI development” work that is largely manual.

The last item warrants emphasis. A contractor that uses a commercially licensed foundation model, represents it as “purpose-built” or “proprietary,” and bills for development hours it did not expend faces live FCA exposure. This is not a hypothetical — it is a pattern DOJ has pursued under the Civil Cyber-Fraud Initiative, which has been extended in practice to cover AI-related misrepresentations in government contracts. DOJ’s FCA enforcement posture in the AI space is active and expanding.

Contractors should treat AI-related proposal language as high-risk and subject it to the same rigor applied to cost or pricing representations.

Data Rights Battles Are Increasing

AI systems are built on data. But in DoD contracting, ownership and licensing of data is rarely straightforward.

Key questions include:

  • Who owns the training data?
  • Who owns the fine-tuned model weights?
  • Does the government receive government purpose rights, unlimited rights, or limited rights?
  • Is the AI model “developed exclusively with government funds” or mixed funding?
  • Is the contractor relying on third-party datasets subject to restrictive licenses?

These issues can become existential for contractors whose business models depend on reusing models or training pipelines across customers. If data rights are mishandled, contractors may find that they have unintentionally given the government broad rights — or, conversely, that they cannot legally deliver what the contract requires.

The DFARS 252.227-7013 and 252.227-7014 framework was not designed for AI. The statutory categories of “computer software” and “technical data” apply with difficulty — and sometimes not at all — to model weights, embeddings, and outputs generated through reinforcement learning from human feedback. Contractors should not assume that their existing data rights posture maps cleanly onto an AI development effort.

Two traps deserve particular attention. First, contractors that use the same base model across both commercial and government work face “mixed funding” ambiguity that can inadvertently result in unlimited rights assertions by the government — even where the contractor’s investment in the base model is substantial. Second, the government may assert that fine-tuned model weights were “developed under contract” even where the underlying base model was entirely commercial, on the theory that the fine-tuning process was government-funded.

There is no clear regulatory answer to either question, which is precisely why early, affirmative data rights planning is essential.

Cybersecurity and Controlled Unclassified Information (CUI)

Most defense contractors are already familiar with the DFARS 252.204-7012 cybersecurity obligations and the NIST SP 800-171 requirements tied to CUI. AI raises the stakes because it multiplies both the volume and the sensitivity of the data a contractor stores and processes.

AI programs frequently involve:

  • Ingesting operational datasets;
  • Storing large quantities of CUI;
  • Generating derived datasets and outputs that may themselves be sensitive; and
  • Collaborating with subcontractors and cloud providers.

This creates an expanded attack surface and additional compliance risk. Contractors should anticipate increased scrutiny of their cloud architecture, access controls, incident reporting practices, and supply chain vetting — particularly where model training occurs outside secure government environments.

Two specific issues deserve attention. First, CMMC 2.0 Level 2 and Level 3 requirements now apply to many AI development environments where training data contains CUI. Contractors building or fine-tuning models in commercial cloud environments — including AWS GovCloud or Azure Government — should understand that the cloud enclave itself may need to be scoped within their CMMC assessment boundary, not simply the endpoint systems accessing it.

Second, AI-generated outputs present a CUI propagation problem that few contractors have operationalized. Where a model is trained on CUI, its outputs — including summarizations, pattern extractions, and derived analyses — may themselves constitute CUI requiring the same marking, handling, and protection obligations as the underlying data. The absence of policy guidance on this point does not eliminate the obligation.
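
Few contractors have tooling for this. As a purely illustrative sketch, assuming a hypothetical internal pipeline in which each record carries its CUI category markings as metadata, a conservative propagation rule can be enforced in code:

```python
from dataclasses import dataclass, field

@dataclass
class Record:
    """A document, dataset row, or model output with handling metadata."""
    content: str
    cui_categories: frozenset[str] = field(default_factory=frozenset)

    @property
    def is_cui(self) -> bool:
        return bool(self.cui_categories)

def derive_output(inputs: list[Record], generated_text: str) -> Record:
    """Attach the union of all input CUI markings to a derived output.

    Conservative default: if any input carried a CUI category, the derived
    summary, extraction, or analysis inherits that marking until an
    authorized holder determines otherwise.
    """
    inherited = frozenset().union(*(r.cui_categories for r in inputs))
    return Record(content=generated_text, cui_categories=inherited)

# Example: a summary derived from one document marked CUI//SP-CTI stays marked.
source = Record("engine vibration telemetry ...", frozenset({"SP-CTI"}))
summary = derive_output([source], "Fleet-wide vibration trend summary ...")
assert summary.is_cui  # handling obligations follow the derived data
```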

Looking ahead, Section 1513 of the FY2026 NDAA now requires DoD to develop a comprehensive cybersecurity and physical security framework specifically for AI/ML technologies, and to amend the DFARS to mandate contractor compliance with that framework once finalized. The framework must address workforce risks, supply chain risks, adversarial attacks, data poisoning, and unauthorized access — and Congress has directed that it augment, not replace, the existing CMMC program.

The trajectory is clear: A CMMC-equivalent regime for AI security is being built. The origin story is identical — CMMC began with a provision in the FY2020 NDAA and took years to roll out. Contractors that waited until CMMC came into force found themselves in expensive remediation. The same mistake, made again with Section 1513, will be no less costly.

Export Controls and ITAR/EAR Complications

Many AI-enabled defense systems involve software, technical data, and dual-use technology that may be regulated under ITAR or the EAR. The legal risk is not limited to shipping hardware overseas; it includes cloud access, model sharing, foreign national participation, and even remote collaboration.

AI also complicates export compliance because models can embed sensitive capabilities learned from controlled training data. Contractors should not assume that characterizing a system as “just software” will avoid export restrictions. If the AI is integrated into defense articles or trained on controlled technical data, it may trigger ITAR compliance obligations.

Three specific issues are frequently overlooked. First, contractors should not discount ECCN 4E001 under the EAR and the Bureau of Industry and Security’s ongoing AI/ML-related rulemaking, which is expanding the scope of controlled technology to include certain AI software and model architectures.

Second, the “deemed export” risk for foreign nationals participating in AI development — particularly model training runs involving controlled technical data — is frequently underweighted in hiring and subcontracting decisions. A foreign national employee who otherwise holds appropriate clearances may still implicate deemed export obligations depending on the nature of the training data or architecture involved.

Third, the BIS AI Diffusion Rule, which restricts access to advanced AI computing infrastructure and model weights by foreign entities, has direct supply chain implications for contractors that rely on offshore compute resources or foreign teaming partners for AI development work.

Supply Chain and “Black Box” Vendor Risk

Defense contractors increasingly rely on third-party AI tools, commercial APIs, pretrained models, and external datasets. This reliance introduces supply chain risk that extends beyond the traditional concern of counterfeit parts.

For example, contractors may unknowingly incorporate:

  • Open-source models trained on copyrighted or restricted datasets;
  • Foreign-developed code or dependencies that create national security concerns;
  • Vendor APIs that store or reuse government data; or
  • Models that cannot be audited or reproduced.

If a contractor cannot explain how a model was trained, cannot validate its integrity, or cannot confirm where data is stored, that contractor may be unable to meet government requirements — particularly in classified or high-assurance environments.

Contractors in certain sectors should also be aware that NDAA Section 818 supply chain provisions and DoD’s Trusted Capital Marketplace program extend affirmative AI vendor vetting obligations beyond discretionary due diligence.

Additionally, training data poisoning — the deliberate corruption of training datasets by adversarial actors — is not merely a technical risk. If a contractor’s AI system produces adversely skewed operational outputs due to a compromised upstream dataset, the contractor may face performance disputes, government cost disallowance, and potentially FCA exposure if the contamination was knowable and undisclosed.
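
One inexpensive control is a provenance manifest: record a cryptographic hash of each dataset at intake, then refuse to train on any file that no longer matches, so that upstream tampering surfaces as a hard failure rather than as skewed outputs. The sketch below is illustrative only; the manifest format and file names are assumptions, not any DoD-prescribed mechanism:

```python
import hashlib
import json
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a file through SHA-256 so large datasets stay memory-safe."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_manifest(manifest_path: Path) -> None:
    """Compare every dataset against the hash recorded when it was vetted.

    The manifest, e.g. {"data/train.jsonl": "ab12..."}, is the provenance
    record written at intake; any mismatch is treated as possible poisoning
    and halts the training run rather than proceeding silently.
    """
    manifest = json.loads(manifest_path.read_text())
    for rel_path, expected in manifest.items():
        actual = sha256_of(Path(rel_path))
        if actual != expected:
            raise RuntimeError(
                f"{rel_path}: hash mismatch (expected {expected[:12]}..., "
                f"got {actual[:12]}...) -- dataset may have been altered"
            )

# verify_manifest(Path("training_manifest.json"))  # run before every training job
```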

One additional supply chain risk is now a matter of enacted law, not just best practice. Section 1532 of the FY2026 National Defense Authorization Act prohibits contractors from using “Covered AI” during performance of contracts with DoD or the U.S. Intelligence Community. While the prohibition was motivated by concerns about DeepSeek and its parent company High-Flyer, the statutory definition of “Covered AI” is materially broader: It reaches any AI developed by a company domiciled in, or subject to ownership, control, or influence by, a “covered nation” — currently defined to include China, Russia, North Korea, and Iran — or by any entity on the Commerce Department’s Consolidated Screening List.

Checking whether a vendor’s logo says “DeepSeek” is not adequate diligence. Contractors must trace the ownership and funding structure of every AI tool used in contract performance, including commercial APIs, open-source foundation models, and cloud-based inference services, to confirm they fall outside the prohibition. This is a compliance obligation that runs to subcontractors and suppliers at every tier.
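
The tracing exercise can be given a working structure. The sketch below is a first-pass screening filter, not the statutory test; the record fields, country codes, and screening-list flag are illustrative assumptions, and any hit should route to counsel rather than to an automated determination:

```python
from dataclasses import dataclass

# Covered nations as currently defined (per the statute as described above);
# in practice the screening flag would come from querying the Commerce
# Department's Consolidated Screening List.
COVERED_NATIONS = {"CN", "RU", "KP", "IR"}

@dataclass
class AIToolRecord:
    name: str
    developer: str
    developer_domicile: str      # ISO country code of the developing entity
    owner_domiciles: list[str]   # traced ownership/funding chain, every tier
    on_screening_list: bool      # result of a Consolidated Screening List query

def needs_covered_ai_review(tool: AIToolRecord) -> bool:
    """Flag a tool for legal review if any 'Covered AI' indicator appears.

    Ownership, control, or influence by a covered-nation entity anywhere
    in the traced chain is enough to require escalation.
    """
    if tool.developer_domicile in COVERED_NATIONS:
        return True
    if any(d in COVERED_NATIONS for d in tool.owner_domiciles):
        return True
    return tool.on_screening_list

# Example: a U.S.-based vendor whose upstream funder is domiciled in a
# covered nation still gets flagged for review.
api = AIToolRecord(
    name="summarizer-api",
    developer="ExampleCo",
    developer_domicile="US",
    owner_domiciles=["US", "CN"],
    on_screening_list=False,
)
assert needs_covered_ai_review(api)
```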

AI Ethics Is Becoming a Contract Performance Issue

DoD has made clear that “responsible AI” is not just aspirational. Contractors should anticipate solicitations requiring AI governance plans, bias testing, traceability, human oversight, and documented risk controls.

These are not future requirements. DoD Directive 3000.09 (revised), the DoD Responsible AI Strategy and Implementation Pathway, and CDAO’s AI Test and Evaluation Framework are active governance instruments already shaping solicitation requirements and post-award oversight. Contractors developing systems with any degree of automated decision-making should be conversant with them.

One point of particular importance: “Human-in-the-loop” is not a uniform compliance standard. DoD formally distinguishes among human-in-the-loop, human-on-the-loop, and human-out-of-the-loop systems, and the applicable oversight standard varies by mission type, weapons category, and approval authority. Contractors should not assume that a single human oversight posture will satisfy requirements across all contract types, and should push for clarity in proposal language and contract terms on exactly which standard applies.

Even if not expressly stated, AI ethics issues can become performance issues when systems produce discriminatory outcomes, unreliable targeting recommendations, or unexplained operational failures. In the defense context, these failures can trigger investigations, contract disputes, negative past performance reviews, termination risk, and suspension/debarment concerns.

Proposals Must Address AI With Precision

Contractors often treat AI as a marketing centerpiece, but proposal teams frequently struggle to describe AI systems in a way that is technically accurate and legally defensible.

Best practices include:

  • Clearly defining what “AI” means in the proposed solution;
  • Avoiding vague claims about autonomy or decision-making authority;
  • Documenting testing and validation methods;
  • Identifying what data will be used and what rights the contractor has to use it;
  • Describing cybersecurity controls tied to model development and deployment; and
  • Identifying human-in-the-loop safeguards where appropriate.

Overbroad AI claims may score well initially, but they can become devastating later during performance, when the government expects delivery of capabilities that were never feasible.

The Regulatory Infrastructure Is Being Assembled Now

The absence of a single comprehensive AI clause across defense procurements should not be mistaken for an absence of regulatory momentum. OMB Memoranda M-25-21 and M-25-22 (April 2025) — which superseded the Biden-era M-24-10 and M-24-18 — already require covered agencies to address data ownership and IP rights in AI procurements, prohibit vendors from using non-public government data to train AI models for any purpose outside the specific contract, and mandate documentation supporting transparency and explainability.

These requirements flow to contractors through agency implementations and are operative now for contracts awarded or renewed after October 1, 2025. The FAR Council’s AI-related rulemaking is active. DISA and NSA have issued AI security guidance that is being incorporated into contract requirements by reference. Contractors should expect requirements addressing:

  • Disclosure of AI usage in performance;
  • Restrictions on certain AI tools or foreign models;
  • Auditability and transparency requirements;
  • Training data provenance and documentation; and
  • Obligations to prevent unauthorized use of government data.

AI governance requirements are not converging toward cybersecurity compliance — they are following the same pattern. Contractors that waited until CMMC to build their cybersecurity infrastructure spent years in remediation. Those that wait for a single unified AI clause will face the same outcome. The infrastructure is being assembled now. Contractors should be building to it.

Practical Steps Contractors Should Take Now

Contractors working in defense AI should take proactive compliance steps now, before disputes or audits arise. These include:

  • Building internal policies on AI development, testing, and deployment;
  • Implementing contract review processes that flag AI-specific risk;
  • Tightening subcontractor and vendor due diligence for AI tools;
  • Developing defensible documentation practices for model training and validation;
  • Ensuring proposal claims can be supported with evidence;
  • Evaluating whether current cybersecurity posture is sufficient for AI-heavy environments;
  • Conducting an AI inventory of current contract performance to identify where AI is being used — including internally for proposal generation, cost estimating, and program management — and assessing whether any disclosure obligations attach;
  • Reviewing teaming and subcontract agreements for AI-specific IP allocation and data rights provisions since most standard templates and flowdown clauses do not address these issues, and the absence of negotiated terms typically favors the government in a dispute; and
  • Establishing a model change management protocol — covering versioning, retraining triggers, and drift monitoring — with documentation sufficient to demonstrate, in any future audit or dispute, that model updates during contract performance remained within scope and did not affect represented capabilities. A minimal sketch of such a change log follows this list.
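
On that last point, one illustrative shape for an auditable, append-only change log is sketched below; the field names, file name, and metrics are assumptions for the sketch, not a contractual or regulatory format:

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

LOG = Path("model_change_log.jsonl")  # append-only audit record (illustrative name)

def record_model_change(version: str, trigger: str, weights_path: Path,
                        eval_results: dict[str, float],
                        in_scope_rationale: str) -> None:
    """Append one immutable entry per model update during performance.

    Captures what changed (version, weights hash), why it changed (the
    retraining trigger, e.g. a scheduled refresh or a drift alarm), how it
    performed against represented capabilities, and the contemporaneous
    rationale for treating the update as in-scope maintenance.
    """
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "version": version,
        "trigger": trigger,
        "weights_sha256": hashlib.sha256(weights_path.read_bytes()).hexdigest(),
        "eval_results": eval_results,
        "in_scope_rationale": in_scope_rationale,
    }
    with LOG.open("a") as f:
        f.write(json.dumps(entry) + "\n")

# Example usage (illustrative values):
# record_model_change(
#     version="2.3.1",
#     trigger="quarterly retraining; drift monitor within thresholds",
#     weights_path=Path("weights/v2.3.1.bin"),
#     eval_results={"accuracy": 0.942, "false_positive_rate": 0.031},
#     in_scope_rationale="Same task and data rights posture; no change to represented capabilities.",
# )
```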

Conclusion

AI is reshaping defense contracting at a pace faster than many acquisition rules can adapt. But contractors should not mistake the absence of uniform AI clauses for a lack of enforcement risk. The government is already applying traditional procurement tools — cybersecurity requirements, data rights clauses, performance standards, and fraud statutes — to AI systems in ways that create significant exposure.

Defense contractors that treat AI as both a technical and legal discipline will be best positioned to win work, perform successfully, and avoid the costly disputes and investigations that can follow when AI capabilities are oversold or poorly controlled.
