Regulatory Framework

US AI Regulatory Landscape

Navigate the evolving US AI governance framework across federal agencies, executive orders, and state-level regulations. Stay compliant while maintaining competitive velocity.

Federal AI Governance

The US federal approach to AI regulation spans multiple agencies with overlapping jurisdictions. Here's what you need to know.

Executive Order 14110: Safe, Secure, and Trustworthy Development and Use of AI (2023)

Establishes government-wide AI safety standards, requires agencies to implement AI governance frameworks, and directs NIST to develop AI risk management standards.

Key Requirements: AI risk assessments, transparency measures, workforce training, and compliance reporting

Applies to: Federal agencies and contractors handling sensitive data or critical systems

FTC AI Enforcement & Guidelines

The FTC actively enforces consumer protection laws against deceptive AI practices, including false claims about AI capabilities and inadequate data security.

Key Requirements: Truthful advertising, data minimization, security safeguards, and bias testing

Applies to: Consumer-facing AI products and services

NIST AI Risk Management Framework (AI RMF)

Voluntary framework for managing AI risks across the system lifecycle. Increasingly referenced in federal contracts and becoming the de facto standard for enterprise AI governance.

Key Requirements: Risk mapping, mitigation strategies, performance monitoring, and documentation

Applies to: Organizations seeking federal contracts or managing high-risk AI systems
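The AI RMF organizes risk work around four functions: Govern, Map, Measure, and Manage. One lightweight way to satisfy the documentation requirement above is a risk register keyed to those functions. The sketch below is illustrative only; the field names and sample entry are our assumptions, not a schema prescribed by NIST.

```python
# Minimal sketch of an AI risk register aligned to the NIST AI RMF's four
# functions (Govern, Map, Measure, Manage). Field names and the sample
# entry are hypothetical, not a NIST-prescribed format.
from dataclasses import dataclass

@dataclass
class RiskEntry:
    risk_id: str
    description: str   # Map: the identified risk, in deployment context
    metric: str        # Measure: how the risk is tracked over time
    mitigation: str    # Manage: the planned response
    owner: str         # Govern: the accountable role
    status: str = "open"

register = [
    RiskEntry(
        risk_id="R-001",
        description="Training data may under-represent some user groups",
        metric="Subgroup error-rate gap on the evaluation set",
        mitigation="Rebalance training data; add subgroup monitoring",
        owner="Model governance lead",
    ),
]

def open_risks(entries):
    """Return the IDs of risks still awaiting mitigation sign-off."""
    return [e.risk_id for e in entries if e.status == "open"]
```

Keeping one entry per identified risk, with an explicit metric and owner, gives auditors and contracting officers a single artifact that maps to all four RMF functions.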

EEOC AI & Discrimination Guidance

Enforces Title VII and ADA requirements for AI systems used in hiring, promotion, and employment decisions. Employers remain liable for discriminatory AI outcomes even when the tool comes from a third-party vendor.

Key Requirements: Bias audits, validation studies, transparency, and human oversight

Applies to: HR technology, recruitment AI, and employment decision systems
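A common starting point for the bias audits listed above is the four-fifths (80%) rule from the EEOC's Uniform Guidelines: a selection rate for any group below 80% of the highest group's rate is generally treated as evidence of adverse impact. The following sketch computes selection rates and impact ratios; the group labels and counts are hypothetical, and a real audit would also consider statistical significance and sample size.

```python
# Illustrative adverse-impact check based on the EEOC four-fifths rule.
# Group names and applicant counts below are hypothetical examples.

def selection_rates(outcomes):
    """outcomes: {group: (selected, total_applicants)} -> {group: rate}"""
    return {g: sel / total for g, (sel, total) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Flag any group whose selection rate falls below `threshold`
    (default 80%) of the highest group's selection rate."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {
        g: {"rate": r, "impact_ratio": r / best, "flag": r / best < threshold}
        for g, r in rates.items()
    }

if __name__ == "__main__":
    hypothetical = {"group_a": (48, 100), "group_b": (30, 100)}
    for group, result in four_fifths_check(hypothetical).items():
        print(group, result)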

State-Level AI Regulations

States are moving quickly to regulate AI. California, New York, and Texas are among the most active.

California

SB 942 (California AI Transparency Act)

Requires covered generative AI providers to label AI-generated content and offer free AI-detection tools.

AB 701 (AI Transparency)

Mandates disclosure when AI is used in hiring, housing, or credit decisions.

Status: Active/Proposed

New York

Local Law 144 (Bias Audit Law)

Requires annual independent bias audits of automated employment decision tools used by NYC employers and employment agencies, plus notice to candidates.

Proposed AI Bill of Rights

Framework for algorithmic transparency, accountability, and human review rights.

Status: Active/Proposed

Texas

HB 4 (Texas Data Privacy and Security Act)

Comprehensive consumer privacy law with opt-out rights for profiling used in decisions that produce legal or similarly significant effects.

Data Privacy & AI Security

Emerging focus on data protection standards for AI systems handling consumer data.

Status: Emerging

Industry-Specific AI Requirements

Healthcare (HIPAA + FDA)

  • FDA guidance on AI/ML-based medical devices
  • HIPAA compliance for patient data in AI systems
  • Validation and testing requirements

Financial Services (SEC + CFPB)

  • SEC guidance on AI in investment management
  • CFPB fair lending requirements for credit AI
  • Model risk management frameworks

Government Contractors (NIST + DoD)

  • NIST AI RMF compliance for federal contracts
  • DoD AI governance and security requirements
  • CMMC compliance for defense contractors

Consumer Technology (FTC + State Laws)

  • FTC enforcement against deceptive AI claims
  • State privacy laws (CCPA, CPA, VCDPA)
  • Transparency and opt-out requirements

Ready to Navigate US AI Compliance?

Download our state-by-state compliance guide and get a personalized assessment of your regulatory obligations.