White House’s OSTP Seeks Public Input on Outdated Regulations Hindering AI Innovation

On September 25, 2025, the White House’s Office of Science and Technology Policy (OSTP) issued a Request for Information (RFI) seeking public input on how existing federal regulations may hinder responsible AI innovation or deployment. This action, part of the broader deregulatory shift under President Trump’s Executive Order 14179, underscores the Administration’s priority to streamline AI governance and eliminate outdated rules that may constrain U.S. competitiveness.

The Administration’s AI Action Plan directed OSTP to “launch a [RFI] from businesses and the public at large about current Federal regulations that hinder AI innovation or adoption, and work with relevant Federal agencies to take appropriate action.” This RFI advances that directive by focusing on identifying the regulatory and procedural barriers that unnecessarily slow safe, beneficial AI deployment. This effort actively invites the private sector and public stakeholders to shape the next generation of federal AI policy.

Public Comments are due by October 27, 2025.

Why This Matters

AI is rapidly transforming sectors as diverse as healthcare, finance, energy, manufacturing, and defense. But much of the federal regulatory infrastructure was built for a pre-AI era, designed around human actors, static products, and traditional oversight models.

OSTP is seeking specific feedback on how these outdated frameworks may be creating bottlenecks for AI innovation. This includes regulations that assume human decision-making, require traceable documentation by individuals, mandate direct oversight that may be infeasible for adaptive or autonomous systems, or fail to account for modern data practices and continuously learning technologies.

Most U.S. regulatory frameworks were built before modern AI, relying on assumptions that systems are human-operated. These outdated assumptions affect several areas:

  • Decision-Making & Explainability: Regulations expect decisions to be traceable to a human.

  • Liability & Accountability: Legal responsibility assumes identifiable human actors.

  • Human Oversight: Presumes human intervention in key operational processes.

  • Data Practices: Data policies don’t reflect AI’s scale or dynamic use of data.

  • Testing & Certification: Standards were made for static products, not adaptive AI systems.

These gaps play out differently across sectors. OSTP notes, for example, that in “healthcare, regulations for medical devices, telehealth, and patient privacy were designed around human clinicians and discrete medical device updates. It may create challenges to apply the same policy framework for overseeing continuously updating AI diagnostic tools and ensuring explainable clinical recommendations.”

Policy frameworks that assume human operators or fail to account for technological progress can, when applied to AI-enabled or AI-augmented systems, hinder the development, deployment, and adoption of AI across sectors.

What OSTP Is Asking For

The RFI outlines five core categories of regulatory friction:

  1. Regulatory Mismatches – Where human-centered rules no longer reflect how AI systems operate (e.g., mandatory human-in-the-loop supervision).

  2. Structural Incompatibility – Where legal frameworks are unfit for AI because they assume human actors or prohibit automated processes.

  3. Lack of Regulatory Clarity – Where outdated or vague rules may technically apply to AI systems but lack interpretive guidance, delaying adoption.

  4. Direct Hindrance – Where regulations explicitly restrict AI use in ways that block safe and legitimate deployment.

  5. Organizational Barriers – Where agencies have flexibility but lack the staff, expertise, or internal structure to apply it effectively.

Call to Action

OSTP is not looking for generalities; it’s requesting sector-specific examples, legal citations, and practical recommendations. Commenters are encouraged to identify specific barriers that delay or constrain AI activities, cite the relevant statutes or regulations (including CFR or U.S.C. citations where applicable), and propose remedies such as waivers, pilot programs, interpretive guidance, or legislative updates. Input on organizational or cultural barriers to AI adoption inside federal agencies is also welcome. Comments grounded in data and real-world examples can directly influence how future AI frameworks are developed.

This RFI is relevant for a wide range of stakeholders across industries and government, including:

  • AI startups and foundation model developers

  • Healthtech, diagnostics, and medtech innovators

  • Telecom and broadband providers

  • Financial services firms using AI for underwriting or fraud prevention

  • Autonomous transportation and logistics companies

  • Federal contractors and agencies deploying AI internally

  • Academia and research institutions

  • Civil society organizations advocating for equity and oversight

Providing input to the OSTP RFI is about unlocking pathways for innovation. This is a chance to push for regulatory reforms that acknowledge how AI tools function in practice. Well-crafted responses can help advance solutions like safe harbors or pilot programs that enable AI-powered diagnostics to be deployed in controlled settings.

If your operations, products, or compliance posture intersect with federal policy and AI, this RFI is your opportunity to influence how the U.S. modernizes its rulebook across key agencies like the FDA, FTC, and FCC.

Shaping AI Regulation

This is a moment to lead, not just comply.
— Legislogiq's Farhan Chughtai

At Legislogiq, we help companies move from policy observers to policy shapers. The OSTP’s RFI presents a rare opportunity for businesses to directly influence how the federal government updates outdated regulations that impact AI development, deployment, and adoption. But crafting an effective public comment isn’t just about flagging friction; it requires regulatory fluency, deep sector insight, and the ability to frame your position in a way that aligns with OSTP’s policy goals.

We support startups and enterprises alike by:

  • Identifying regulatory barriers that are impeding your AI development or deployment

  • Mapping relevant federal rules and statutes (CFR, U.S.C., agency guidance) that need to be modernized

  • Drafting persuasive RFI comments aligned with OSTP’s goals

  • Framing your AI use case as a model for responsible and innovative deployment

  • Building coalitions or co-filing with aligned partners to amplify your message

  • Positioning your comment for long-term agency engagement with OSTP, FDA, OMB, NTIA, and others

We work closely with clients across AI, broadband, fintech, healthtech, and other regulated sectors to identify specific regulatory barriers affecting their operations and draft strategically persuasive submissions. This isn’t just about cutting through red tape. It’s about making sure your voice helps write the next chapter of U.S. AI policy and AI governance.

At Legislogiq, we help organizations navigate the fast-moving world of AI regulation with clarity, creativity, and foresight. Whether you’re exploring policy compliance, advocacy, or looking to redefine your AI strategy, our team is here to help. From messaging guidance to partnership opportunities, contact us and someone from our team will connect with you.

Farhan Chughtai

Farhan Chughtai is a policy strategist and regulatory expert with over 12 years of experience at the intersection of technology, connectivity, and artificial intelligence. He has led federal and state government affairs initiatives across 24 states, managed multibillion-dollar broadband and AI policy portfolios, and built coalitions spanning Fortune 500 companies, startups, and government agencies.

As Head of Policy & Advocacy at Legislogiq, Farhan oversees national advocacy and compliance strategy, helping emerging tech companies and organizations navigate complex regulatory environments and shape the next generation of AI governance and digital infrastructure policy.

https://www.linkedin.com/in/fchughtai/