Q3 Regulatory Update

JULY 2025 - SEPTEMBER 2025

INTRODUCTION

Lawmakers and regulators across the Executive Branch, Capitol Hill, federal regulatory agencies, and state legislatures have moved from signaling to execution. Q3 is defined by a far more assertive federal strategy: the release of America’s AI Action Plan by the White House Office of Science and Technology Policy (OSTP), presidential executive orders (EOs), committee hearings, and state experiments have now materialized into a full-spectrum regulatory posture.

Six months ago, AI governance felt like a race to define the rules. Today, it’s a full-contact scramble to deploy them. The U.S. federal government is no longer simply reacting to AI’s acceleration; it’s asserting itself as a strategic actor, setting the tone for how AI innovation will be governed, financed, tested, and scaled.

This is shaping up to be the breakpoint year in AI regulation: bold new executive initiatives, intensified and contested legislative activity, accelerating state‑level experimentation, and more proactive agency enforcement.

At the same time, global governance frameworks are hardening into precedent. The European Union’s AI Act, the world’s first comprehensive horizontal framework for AI regulation, moved deeper into phased implementation as obligations for general-purpose AI models entered into application in August 2025.


Q3 2025 AT A GLANCE

EXECUTIVE BRANCH

OSTP’s America’s AI Action Plan codifies priorities like regulatory sandboxes, NIST‑run evaluations, streamlined NEPA pathways for data centers, and tighter export‑control enforcement, and the President signed three AI‑related EOs this quarter alone. We’ll unpack what these mean for builders, providers, and deployers.

CAPITOL HILL

The Senate teed up hearings on the Action Plan and fresh bills, including the proposed SANDBOX Act to waive or tailor rules for AI pilots, alongside a revived push for a federal moratorium on state and local AI rules, both signaling a preemption-and-experimentation debate that sharpened after Q2.

FEDERAL REGULATORY AGENCIES

Agencies leaned further into AI: the Federal Trade Commission advanced rules and enforcement around impersonation/deepfakes; the Federal Communications Commission confirmed AI‑generated voice calls fall under the TCPA and continued related proceedings. We’ll track how these threads intersect with model deployment and telecom compliance.

STATE LEGISLATIVE

Momentum broadened: all 50 states plus D.C. and territories introduced AI legislation in 2025, and over three dozen states have enacted around 100 measures. We’ll detail the most operationally relevant provisions later in this report.

Our Quarterly Regulatory Updates offer industry leaders a clear, structured view into this evolving landscape, tracking major executive branch directives, agency compliance mandates, emerging federal legislation, and bold state-level actions. As the regulatory map continues to fragment and reform simultaneously, staying informed, engaged, and strategic isn’t just important; it’s essential to staying competitive.

EXECUTIVE BRANCH

FROM POLICY TO PLAYBOOK: THE WHITE HOUSE ROLLS OUT U.S. LEADERSHIP IN AI

WHITE HOUSE AI ACTION PLAN

On July 23, 2025, the White House Office of Science and Technology Policy (OSTP) unveiled the long-anticipated America’s AI Action Plan, laying out over 90 federal actions to steer the nation’s approach to artificial intelligence. This move came after the failure of a Senate-backed legislative proposal that would have imposed a 10-year moratorium on state and local AI regulation. In the absence of sweeping congressional action, OSTP has stepped in with a bold, executive-led blueprint that touches nearly every corner of the federal AI landscape.

The AI Action Plan is structured around three strategic pillars: Innovation, Infrastructure, and International Security and Diplomacy. Together, they signal a shift in how the federal government plans to coordinate AI development, adoption, and risk mitigation.

  • Innovation: The Plan calls for aggressive AI adoption across priority sectors like healthcare, agriculture, energy, and defense. It champions open-source and open-weight AI models, opposes overregulation, and ties responsible deployment to core American values such as free speech and nonpartisanship in algorithmic systems. Workforce upskilling and rapid retraining initiatives are also central to this innovation agenda.

  • Infrastructure: Recognizing AI’s massive energy and compute demands, the plan proposes permitting reform for data centers and energy infrastructure, expanded semiconductor manufacturing (via the CHIPS Act), and secure facilities tailored for intelligence and defense applications. It also calls for modernizing the U.S. electric grid and embedding cybersecurity throughout the AI stack.

  • International Security & Diplomacy: The Plan positions the U.S. to export its AI standards, infrastructure, and values globally. It prioritizes export controls, supply chain resilience, and engagement in multilateral bodies to counter adversarial influence. Security assessments for frontier models and investments in AI biosecurity are among the most consequential proposals in this pillar.

Read our analysis on the 2025 White House AI Action Plan here.

Why It Matters: While the Action Plan delivers long-awaited clarity on the federal government’s priorities, it still operates within the bounds of executive authority. Without binding legislation, many key issues like algorithmic fairness, civil rights protections, and transparency mandates will remain the domain of regulatory agencies and state governments.

OSTP REQUEST FOR INFORMATION

The Administration’s AI Action Plan directed OSTP to “launch a [RFI] from businesses and the public at large about current Federal regulations that hinder AI innovation or adoption, and work with relevant Federal agencies to take appropriate action.” This RFI advances that directive by focusing on identifying the regulatory and procedural barriers that unnecessarily slow safe, beneficial AI deployment. This effort actively invites the private sector and public stakeholders to shape the next generation of federal AI policy. Public Comments are due by October 27, 2025.

Read our analysis on the OSTP RFI here.

Why It Matters: AI is rapidly transforming sectors, but much of the federal regulatory infrastructure was built for a pre-AI era designed around human actors, static products, and traditional oversight models. OSTP is seeking specific feedback on how these outdated frameworks may be creating bottlenecks for AI innovation. This includes regulations that assume human decision-making, require traceable documentation by individuals, mandate direct oversight that may be infeasible for adaptive or autonomous systems, or fail to account for modern data practices and continuously learning technologies.

EXECUTIVE ORDERS

In a defining moment for the Administration’s AI policy agenda, President Trump signed three new Executive Orders (EOs) that form the operational backbone of the White House’s broader AI Action Plan, released in July. Announced during his first major AI address since returning to office, delivered at the Winning the AI Race Summit co-hosted by the All-In Podcast and the Hill & Valley Forum, the new orders signal a powerful escalation from high-level vision to centralized execution.

Together, the EOs reflect the Administration’s threefold approach to artificial intelligence: global competitiveness, domestic buildout, and ideological oversight.

Read our analysis on all three AI focused EOs here.

EO 14320: PROMOTING THE EXPORT OF THE AMERICAN AI TECHNOLOGY STACK

This EO directs the Department of Commerce and OSTP to launch an American AI Exports Program, a framework to support the international deployment of end-to-end U.S.-developed AI systems (hardware, models, software, cybersecurity, and vertical applications). AI exports are now treated as strategic assets intended to advance U.S. influence in global markets while embedding American standards abroad.

Why it Matters: Opens new channels for U.S. AI vendors to secure federal backing for international expansion, especially in allied and emerging markets.

EO 14319: PREVENTING WOKE AI IN THE FEDERAL GOVERNMENT

This EO imposes “Unbiased AI Principles” on federally procured large language models, prohibiting what the Administration describes as DEI-driven or ideologically biased outputs. It requires agencies to adopt only models that reflect the new federal standards of neutrality, with agency heads accountable for enforcement.

Why it Matters: Introduces ideological compliance into federal procurement. For AI vendors, success in public-sector markets may hinge on alignment with this Administration’s neutrality criteria, affecting training data, fine-tuning, and documentation.

EO 14318: ACCELERATING FEDERAL PERMITTING OF DATA CENTER INFRASTRUCTURE

Aimed at overcoming energy and siting bottlenecks, this EO mandates streamlined permitting for AI data centers and supporting power and broadband infrastructure. Federal agencies, including Commerce, Interior, Energy, and Defense, are tasked with prioritizing and expediting reviews of “qualified” data center projects, including on federal lands, before the end of 2025.

Why it Matters: Treats AI compute as critical infrastructure. Infrastructure developers, telecom providers, and hyperscalers may benefit from faster project approvals, particularly on federal land.


CAPITOL HILL

NAVIGATING THE DUAL MANDATE: BALANCING AI INNOVATION AND GUARDRAILS

Momentum on Capitol Hill continued to build in Q3 2025 as lawmakers in both chambers introduced a wave of AI-related legislation aimed at establishing guardrails, promoting innovation, and clarifying agency roles. While no comprehensive AI framework has yet advanced beyond committee, the steady introduction of sector-specific bills, from the AI Accountability and Personal Data Protection Act to the Federal AI Regulatory Sandbox Act, reflects a Congress still in the “definition phase” of AI governance.

The legislative environment remains highly fragmented, with multiple committees asserting jurisdiction and competing visions emerging around liability, safety, innovation incentives, and federal-state preemption. This quarter underscored both the bipartisan recognition of AI’s transformative potential and the structural challenges Congress faces in translating that urgency into coherent, harmonized legislation.

U.S. HOUSE OF REPRESENTATIVES

  • Referred to the House Committee on Financial Services: Establishes “AI Innovation Labs” at federal financial regulators so supervised firms can pilot AI tools under structured safeguards.

  • Referred to the House Committee on Energy & Commerce: Directs HHS to run a grant program to support research on uses of generative AI in health care.

  • Referred to the House Committee on Science, Space, and Technology: Tasks DOE (via a National Lab) to study how AI and data-center siting affect U.S. energy supply and infrastructure needs.

  • Referred to the House Committee on Oversight and Government Reform: Expresses the House’s view that CMS should not use AI to determine Medicare coverage decisions.

  • Referred to multiple House committees (Science, Judiciary, Energy & Commerce): A broad AI-governance proposal covering oversight, standards, coordination, and liability.

U.S. SENATE

  • Referred to the Senate Committee on the Judiciary: Creates a federal tort and private right of action for misuse of individuals’ data (including for training or outputs of generative AI) without express prior consent.

  • Referred to the Senate Committee on Commerce, Science & Transportation: Requires NIST to develop a framework for detecting, removing, and reporting child pornography in datasets used to train AI systems.

  • Referred to the Senate Committee on Commerce, Science & Transportation: Directs NIST to develop voluntary guidelines and specifications for internal and external assurance of AI systems (validation/evaluation for trustworthy AI).

  • Referred to the Senate Committee on Health, Education, Labor & Pensions: Amends the Elementary and Secondary Education Act to include AI and emerging-tech standards within state academic standards.

  • Referred to the Senate Committee on Commerce, Science & Transportation: Establishes an OSTP-administered federal regulatory sandbox program for AI, including waiver/variance procedures and congressional oversight.

  • Referred to the Senate Committee on the Judiciary: Applies product-liability–style standards to advanced AI products, enabling actions against developers for defects or failure to warn.

  • Referred to the Senate Committee on Commerce, Science & Transportation: Directs the Department of Energy (DOE) to establish an Advanced Artificial Intelligence Evaluation Program for AI risk evaluation.

Why it Matters: Across both chambers, this quarter’s legislative activity reveals three emerging themes shaping the federal conversation on AI regulation. First, lawmakers are emphasizing accountability and liability, seeking to define responsibility for AI-driven harms and data misuse through proposals like the AI Accountability and Personal Data Protection Act and the AI LEAD Act. Second, there is growing bipartisan support for structured innovation pathways, such as the Federal AI Regulatory Sandbox Act and House measures creating AI innovation labs within financial regulators, signaling interest in controlled experimentation over restrictive rulemaking. Third, legislators continue to grapple with jurisdictional overlap across committees, as questions of privacy, safety, intellectual property, and agency coordination blur traditional policy silos. Together, these developments point to a maturing but still fragmented federal approach, one that remains more diagnostic than prescriptive as Congress prepares for a potential comprehensive AI framework in 2026.

Legislogiq anticipates that the fourth quarter will mark a shift from concept to consolidation, as congressional committees begin refining overlapping proposals into more cohesive frameworks. With mounting pressure from the AI Action Plan and growing agency activity, we expect legislative focus to turn toward harmonizing federal standards, clarifying liability regimes, and establishing structured mechanisms, like regulatory sandboxes and public–private institutes, that balance innovation with accountability.


FEDERAL REGULATORY AGENCIES

FROM OVERSIGHT TO OPERATIONS: FEDERAL AGENCIES MOVE TO IMPLEMENT THE AI ACTION PLAN

Across the Federal Trade Commission (FTC), the Federal Communications Commission (FCC), and the U.S. Food and Drug Administration (FDA), Q3 2025 shows a pattern of “preparation and oversight” rather than sweeping regulation. The agencies appear aligned in moving cautiously, gathering evidence, engaging stakeholders, and positioning frameworks rather than rushing new AI-specific regulations. That suggests industry should continue preparing for evolving expectations around transparency, monitoring, and cross-agency coordination. This echoes the Trump Administration’s overarching theme of deregulation to foster AI innovation and U.S. leadership.

FEDERAL TRADE COMMISSION

The FTC is rapidly positioning itself as a pivotal player in the emerging AI regulatory landscape, using its broad enforcement authority under Section 5(a) of the FTC Act to shape how businesses implement artificial intelligence in consumer-facing applications. The Commission maintains that the same legal principles concerning deception and unfairness apply to modern, advanced technological products such as AI.

The FTC’s current posture is characterized by consumer-protection-oriented enforcement, information gathering, and marketing/deception scrutiny rather than broad new AI-specific regulation. For example, the FTC issued Section 6(b) orders to several consumer-facing AI chatbot firms and acted against companies making misleading AI claims. These actions reflect a pattern of relying on the FTC’s existing statutory authorities, Section 5 (unfair or deceptive acts) and Section 6(b) (information-gathering orders), rather than launching new regimes tailored uniquely to AI.

…[W]e at the FTC are committed to giving AI the ability to develop and reach its potential. At the same time, when it comes to AI, the Commission will work to meet our mandate to ensure competitive markets and protect consumers from fraudulent and other unlawful conduct.
— FTC Commissioner Melissa Holyoak

The broader regulatory environment emphasizes how AI may exacerbate existing consumer risk vectors rather than treating AI as something categorically new. The FTC’s AI industry page signals increasing attention, but no new formal rulemaking specifically dedicated to AI emerged within the period; the Commission is favoring a data-gathering, information-order approach over standalone AI rules.

IMPACT OF RECENT FTC REGULATIONS

  • Section 6(b) Orders: Launched an inquiry into AI chatbots acting as companions, ordering several consumer-facing AI chatbot firms to provide detailed data on how those systems are built, tested, and monitored.

  • Final Order against Workado, LLC: Approved a final order against the company, which had misrepresented the accuracy of its artificial intelligence content-detection product.

FEDERAL COMMUNICATIONS COMMISSION

At the FCC, the AI regulatory climate reflects a dual emphasis on innovation facilitation and national security oversight rather than AI-specific product regulation. The agency has signaled an increased role in reviewing state AI regulations for potential preemption under the Communications Act of 1934, following America’s AI Action Plan’s directive that it evaluate whether state measures interfere with federal obligations.

Effectively, if a state or local law is prohibiting the deployment of this ‘modern infrastructure’, then the FCC has authorities to step in there. It is not a sweeping set of authority that would address all forms of AI...
— FCC Chairman Brendan Carr

Simultaneously, the FCC continues its broader “Delete, Delete, Delete” regulatory-streamlining initiative, which fits a theme of lowering perceived barriers to infrastructure and tech deployment, including data-center and AI-infrastructure build-out. In the AI context, while rulemaking specific to AI systems is not yet predominant, there is clear movement on telecommunications-adjacent AI issues, such as AI-generated robocalls and robotexts, and structural jurisdictional positioning whereby the FCC is staking a claim in the AI domain. For industry, this means attention should focus on how communications infrastructure, the state/local regulatory interplay, and AI-enabled communications tools intersect with FCC jurisdiction.

The FCC is positioning itself as a gatekeeper for state-level AI regulation (preemption risk) and continuing internal deregulatory moves, though it has not yet announced specific AI-system oriented rulemakings in this quarter.

IMPACT OF RECENT FCC REGULATIONS

  • WC Docket No. 25-253, Notice of Inquiry (NOI), Build America: Eliminating Barriers to Wireline Deployments: The NOI advances the Commission’s Build America Agenda by launching an inquiry into state and local statutes, regulations, and legal requirements that prohibit or have the effect of prohibiting the provision of wireline telecommunications services in violation of section 253 of the Communications Act. Comments due November 17, 2025. Reply Comments due December 17, 2025.

U.S. FOOD AND DRUG ADMINISTRATION

The FDA continues integrating AI into existing frameworks rather than creating entirely separate AI regulations. Its approach to Software as a Medical Device (SaMD) continues to mature toward dynamic oversight, moving from one-time approvals to total lifecycle management. This transition is especially relevant for AI-enabled wearables, which continuously collect, process, and interpret patient data. For manufacturers and innovators in medical devices, the trend is toward accumulating regulatory clarity around AI enablement, lifecycle controls, and performance monitoring, in preparation for more formal expectations in the future.

The AI revolution has arrived, and we are already using these new technologies to manage health care data more efficiently and securely.
— Health and Human Services (HHS) Secretary Robert F. Kennedy Jr.

Notable actions include final guidance on Predetermined Change Control Plans (PCCPs) for AI-enabled device software functions, allowing iterative updates without separate submissions, and a Request for Comment (RFC), issued September 30, 2025, on how to evaluate AI-enabled device performance in real-world settings. Read our analysis on the FDA’s RFC here.

The FDA is moving forward with regulatory guidance and stakeholder engagement for AI in medical devices, reflecting a phase of refinement and surveillance rather than large-scale regulatory overhaul.

IMPACT OF RECENT FDA REGULATIONS

FDA-2025-N-4203, Request for Public Comment (RFC), Measuring and Evaluating Artificial Intelligence-enabled Medical Device Performance in the Real-World: The RFC seeks comment from interested parties on current, practical approaches to measuring and evaluating the performance of AI-enabled medical devices, including strategies for identifying and managing performance drift, such as detecting changes in model inputs and outputs (see the drift-monitoring sketch after these summaries). Comments due December 1, 2025.

FDA-2022-D-2628, Final Guidance, Marketing Submission Recommendations for a Predetermined Change Control Plan for Artificial Intelligence-Enabled Device Software Functions: Final guidance provides recommendations for PCCPs tailored to AI-enabled devices. The recommendations in this guidance are intended to support iterative improvement through modifications to AI-enabled devices while continuing to provide a reasonable assurance of device safety and effectiveness.
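To make the drift-detection concept concrete, here is a minimal, hypothetical monitoring sketch in Python. It is not an FDA-endorsed method; it simply illustrates one common approach, flagging drift when recent production inputs diverge statistically from a validation-time baseline. The feature layout, the 0.01 significance threshold, and the choice of a two-sample Kolmogorov–Smirnov test are all illustrative assumptions.

```python
"""Minimal input-drift monitoring sketch for a deployed AI-enabled device.

Illustrative only: thresholds, features, and test choice are assumptions,
not regulatory requirements.
"""
import numpy as np
from scipy.stats import ks_2samp


def detect_input_drift(baseline: np.ndarray,
                       recent: np.ndarray,
                       alpha: float = 0.01) -> bool:
    """Flag drift when any feature's recent distribution differs from baseline.

    Runs a two-sample Kolmogorov-Smirnov test per feature column; a p-value
    below `alpha` on any feature triggers a drift flag.
    """
    drifted = False
    for col in range(baseline.shape[1]):
        stat, p_value = ks_2samp(baseline[:, col], recent[:, col])
        if p_value < alpha:
            print(f"feature {col}: KS={stat:.3f}, p={p_value:.4f} -> drift")
            drifted = True
    return drifted


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    baseline = rng.normal(0.0, 1.0, size=(5000, 3))  # validation-time inputs
    recent = baseline.copy()
    recent[:, 1] = rng.normal(0.5, 1.2, size=5000)   # simulated shift in one feature
    print("drift detected:", detect_input_drift(baseline, recent))
```

In practice, manufacturers would pair input-side checks like this with output and outcome monitoring, consistent with the RFC’s framing of “changes in input and output.”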


STATE-LEVEL LEGISLATION

DIVERGENT AI RULES CREATE A REGULATORY MAZE

California has proven that we can establish regulations to protect our communities while also ensuring that the growing AI industry continues to thrive. This legislation strikes that balance.
— California Governor Gavin Newsom

State legislatures remained highly active on AI policy throughout the summer, reinforcing the decentralized nature of U.S. AI governance and highlighting the widening gap between federal deliberation and state-level experimentation. California continued to dominate the legislative landscape with the advancement of SB 53, the “Transparency in Frontier Artificial Intelligence Act,” setting a precedent for disclosure, safety, and compute-threshold requirements on high-risk model developers. Other states, including New York, Texas, and Illinois, introduced or amended bills focusing on algorithmic accountability, AI use in employment and education, and procurement transparency.

A clear pattern emerged: while early bills focused narrowly on bias or data privacy, newer measures are expanding into governance frameworks, incident-reporting, and sector-specific standards. States are also increasingly borrowing from one another’s legislative language, accelerating convergence around model risk, public disclosure, and safety reporting norms. Collectively, these developments illustrate a maturing state policy environment where innovation hubs are shaping de facto national standards in the absence of a unifying federal statute, underscoring the need for companies operating across jurisdictions to monitor and harmonize compliance strategies proactively.

MAJOR STATE LEGISLATION PASSED

  • Signed Sept. 29, 2025, this bill establishes disclosure and risk-mitigation requirements for large “frontier” AI developers, including publication of safety frameworks and incident reporting.

  • Signed Sept. 28, 2025, this bill requires disclosure when generative AI is used to provide or assist with clinical information to patients.

  • Signed Jul. 7, 2025, this bill creates an “offense of digital forgery,” criminalizing AI-generated deepfakes/voice clones used to defraud or injure.

  • Signed Aug 4, 2025, this bill prohibits use of AI to provide therapy/psychotherapy or make therapeutic decisions; permits limited administrative-support uses.

  • Signed Aug 28, 2025, this bill enacts a targeted amendment to Colorado’s AI Act by delaying principal effective dates to June 30, 2026.

  • Passed in 2024, this bill imposes obligations on AI developers and deployers to mitigate algorithmic discrimination in high-risk AI systems, making it one of the first U.S. laws to directly address algorithmic bias.

Legislogiq anticipates that the fourth quarter will mark an inflection point in the relationship between state experimentation and federal harmonization. As states like California and Colorado move from framework adoption to early implementation planning, the White House and Congress are under intensifying pressure to reconcile the growing patchwork of AI governance regimes.


GLOBAL POLICY

EUROPE’S AI REGULATORY FRAMEWORK TAKES SHAPE: EU AI ACT

The European Union’s landmark AI Act, which entered into force in August 2024, advanced through its phased implementation this quarter as obligations for general-purpose AI models became applicable on August 2, 2025, cementing Europe’s role as the world’s first region to codify a comprehensive, cross-sector AI governance regime. The Act introduces a tiered, risk-based approach that categorizes AI systems as unacceptable, high, limited, or minimal risk, imposing proportionate compliance obligations across the AI lifecycle.

AI needs competition, but AI also needs collaboration, and AI needs the confidence of the people, and has to be safe.
— EU Commission President Ursula von der Leyen

Developers of high-risk systems (those used in employment, healthcare, education, or critical infrastructure) must complete conformity assessments, maintain detailed technical documentation, and ensure mechanisms for human oversight and post-market monitoring. Meanwhile, general-purpose and foundation model providers must disclose compute usage, training-data summaries, and safety-testing methodologies to demonstrate responsible development and deployment.
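For engineering teams translating these disclosure obligations into practice, a machine-readable record can serve as the backbone of technical documentation. Below is a hypothetical, simplified sketch in Python; the field names are our own illustrative assumptions, not the Act’s official schema, mapping the items named above (compute usage, training-data summaries, safety-testing methodology) to a serializable structure.

```python
"""Hypothetical disclosure record for a general-purpose model provider.

Field names are illustrative assumptions mapped loosely to the Act's
transparency items; they are not an official EU schema.
"""
from dataclasses import dataclass, asdict
import json


@dataclass
class ModelDisclosure:
    model_name: str
    provider: str
    training_compute_flop: float       # estimated total training compute
    training_data_summary: str         # public summary of data sources
    safety_testing_methods: list[str]  # e.g., red-teaming, benchmark evals
    risk_tier: str                     # "minimal" | "limited" | "high" | "unacceptable"


record = ModelDisclosure(
    model_name="example-model-v1",
    provider="Example AI Co.",
    training_compute_flop=3.1e24,
    training_data_summary="Filtered web text and licensed corpora (summary published).",
    safety_testing_methods=["internal red-teaming", "third-party evaluation"],
    risk_tier="limited",
)

# Serialize for publication, regulator submission, or internal audit trails.
print(json.dumps(asdict(record), indent=2))
```

A structured record like this makes it easier to keep public disclosures, regulator submissions, and internal audit trails consistent as requirements evolve.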

KEY REGULATORY MILESTONES AND TIMELINES

  • July 2024: Publication of the final consolidated text in the EU Official Journal; the Act entered into force on August 1, 2024.

  • February 2025: Prohibitions on unacceptable-risk AI practices became applicable.

  • August 2025: Transparency and safety obligations for general-purpose AI (GPAI) model providers became applicable, and Member States were required to designate national market surveillance authorities, with the European AI Office coordinating enforcement and cross-border oversight.

  • 2026–2027: Gradual enforcement continues, with most high-risk system obligations applying from August 2026 and the remainder phasing in through 2027.

THE GPAI CODE OF PRACTICE

Ahead of full enforcement of the AI Act, the European Commission rolled out the General-Purpose AI (GPAI) Code of Practice, a voluntary framework developed with input from major model developers including OpenAI, Anthropic, Google DeepMind, Microsoft, Meta, xAI, and Cohere. The Code serves as a transitional instrument guiding companies in risk assessment, red-teaming, and model transparency ahead of binding enforcement of the Act’s GPAI obligations. It emphasizes public documentation of model capabilities and limitations, dataset provenance, and system-safety testing.

The European AI Office, established within the European Commission in 2024, will monitor adherence to the Code as a soft-law mechanism to prepare industry for the AI Act’s binding requirements. Together, the AI Act and the GPAI Code of Practice establish the EU’s twin-track model, combining regulatory enforcement with cooperative compliance, to accelerate readiness and harmonization across jurisdictions.

INTERNATIONAL IMPLICATIONS

The EU AI Act’s phased entry into application has immediate consequences for U.S. AI developers operating internationally. Any U.S.-based company offering AI systems to EU users, or embedding AI into devices sold in the EU, must now evaluate whether its systems qualify as “high-risk” or “general-purpose” under European definitions. Industry observers note that Europe’s enforcement mechanisms, led by the AI Office, may serve as a de facto global template, particularly for companies seeking cross-border interoperability of governance frameworks.

Why It Matters: The EU AI Act and the GPAI Code of Practice together signal the formalization of global AI governance norms centered on risk management, explainability, and accountability. For industry, early alignment with EU expectations, particularly around technical documentation, red-teaming, and public transparency, will not only streamline future EU compliance but also provide a head start as U.S. federal standards take shape.

For American companies, the EU AI Act represents not only a compliance requirement but also a strategic opportunity: those who align early with its documentation, risk, and governance expectations will be best positioned for the eventual U.S. federal framework.


The Road Ahead: Legislogiq’s Outlook

Executive Branch: Expect follow-through on the AI Action Plan via targeted RFIs/RFCs and interagency guidance on safety evaluation, government procurement guardrails, and cross-agency data standards. Agencies will increasingly align their AI oversight playbooks with shared concepts: risk tiers, post-deployment monitoring, and disclosure.

Capitol Hill: Momentum shifts from idea-generation to consolidation. Look for committee staff to stitch together overlapping proposals on liability, transparency, and federal–state preemption into a narrower set of “marker bills.”

Federal Agencies:

  • FCC: Activity around AI-generated communications (robocalls/robotexts, political ad disclosures) and preemption posture where state rules touch communications. Watch for NPRMs/clarifications that indirectly set operational expectations for AI-enabled telecom services and data-center build-outs.

  • FTC: Continued use of 6(b) orders, consent decrees, and advertising/deception enforcement for AI claims; especially in child-facing, health, and fintech contexts. Expect practical guidance on substantiation, disclosures, and safety testing protocols.

  • FDA: Iterative guidance on AI-enabled device lifecycle controls (PCCPs), real-world performance monitoring, and validation/assurance expectations; growing emphasis on drift detection, postmarket surveillance, and transparency in submissions.

The frameworks are on the table, but the real work now lies in coordination between federal agencies, states, and global partners to turn principles into practice. This is a strategic opening for industry to help define how responsible AI innovation is governed, deployed, and trusted.
— Legislogiq's Farhan Chughtai

States: California’s SB 53 will catalyze “copy-and-customize” bills elsewhere; Colorado’s pacing will inform implementation timelines nationally. Anticipate more deepfake/criminal-use statutes, sectoral rules (health, insurance, education), and procurement standards. Preemption will surface in multi-state coalitions, raising pressure on Congress to define a federal floor.

International: With the EU AI Act and GPAI Code of Practice setting a procedural gold standard for safety, transparency, and accountability, the U.S. now faces mounting pressure to articulate its own coherent regulatory posture, balancing innovation-first policies with credible risk oversight.


What Should You Do Now?

  1. ENGAGE EARLY, NOT AFTER THE RULES DROP

    Participate in open comment periods at the FTC, FDA, and FCC, as well as upcoming OSTP and NIST consultations under the White House AI Action Plan. Early participation will shape compliance before formal rules are finalized.

  2. BUILD A MULTI-JURISDICTIONAL COMPLIANCE BASELINE

    Harmonize documentation, transparency, and audit practices across state, federal, and international requirements. Adopting EU-style recordkeeping will ensure you’re ahead of future U.S. mandates and interoperable with global markets.

  3. PREPARE FOR STATE ENFORCEMENT DIVERGENCE

    Develop flexible compliance playbooks for states implementing or enforcing new AI laws, particularly California, Colorado, and New York, as they roll out risk reporting, safety documentation, and developer-deployer accountability frameworks.

  4. OPERATIONALIZE RESPONSIBLE AI PRACTICES

    Implement internal AI governance frameworks that codify principles of safety, fairness, and explainability. Assign cross-functional responsibility across compliance, engineering, and legal teams for model monitoring, bias testing, and post-deployment oversight (a minimal bias-check sketch follows this list).

  5. POSITION AS A POLICY PARTNER, NOT A PASSIVE OBSERVER

    Policymakers are actively seeking credible private-sector examples of responsible innovation. Share your compliance models, pilot results, or governance frameworks through whitepapers, consortiums, and industry coalitions, especially where Legislogiq can help amplify your perspective.
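As a concrete starting point for the bias testing recommended in item 4, below is a minimal, hypothetical sketch. The metric (demographic parity gap), the 0.10 tolerance, and the group labels are illustrative assumptions, not legal or regulatory thresholds; a real program would select metrics appropriate to the use case and applicable law.

```python
"""Minimal bias-check sketch: demographic parity gap across groups.

Illustrative only: metric choice, threshold, and group labels are
assumptions, not regulatory requirements.
"""
import numpy as np


def demographic_parity_gap(predictions: np.ndarray, groups: np.ndarray) -> float:
    """Largest difference in positive-prediction rate between any two groups."""
    rates = [predictions[groups == g].mean() for g in np.unique(groups)]
    return float(max(rates) - min(rates))


if __name__ == "__main__":
    rng = np.random.default_rng(42)
    preds = rng.integers(0, 2, size=1_000)       # binary model decisions
    groups = rng.choice(["A", "B"], size=1_000)  # protected attribute (illustrative)

    gap = demographic_parity_gap(preds, groups)
    print(f"demographic parity gap: {gap:.3f}")
    if gap > 0.10:  # internal tolerance (assumption)
        print("Gap exceeds tolerance: escalate to model-risk review.")
```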

Farhan Chughtai

Farhan Chughtai is a policy strategist and regulatory expert with over 12 years of experience at the intersection of technology, connectivity, and artificial intelligence. He has led federal and state government affairs initiatives across 24 states, managed multibillion-dollar broadband and AI policy portfolios, and built coalitions spanning Fortune 500 companies, startups, and government agencies.

As Head of Policy & Advocacy at Legislogiq, Farhan oversees national advocacy and compliance strategy, helping emerging tech companies and organizations navigate complex regulatory environments and shape the next generation of AI governance and digital infrastructure policy.

https://www.linkedin.com/in/fchughtai/