At first glance, current U.S. AI policy appears to favor deregulation. Political leaders such as JD Vance and several members of Congress have promoted a hands-off approach, even considering a decade-long ban on state-level AI laws. The Trump administration’s new “AI action plan” similarly warns against stifling emerging technologies with excessive bureaucracy.
However, while the federal government avoids regulating consumer-facing AI tools like chatbots or image generators, it is highly interventionist when it comes to the core infrastructure behind AI. Both the Trump and Biden administrations have taken strong action on AI chips, a critical resource for advanced systems. Biden restricted their export to strategic rivals like China, and Trump pursued partnerships with countries such as the UAE.
Early AI regulations in the EU focused on application-level risks such as bias, surveillance, and environmental harm. A second wave, led by the U.S. and China, shifted toward national security priorities: preserving military advantages and preventing the misuse of AI for nuclear proliferation or disinformation. Now a third approach is emerging that combines social and security concerns. Research suggests this model is more effective because it consolidates fragmented regulatory efforts and reduces redundancy.
Challenging the idea that the U.S. is “hands-off” requires looking at the full AI stack. While Washington avoids heavy rules on consumer-facing AI tools, it tightly controls foundational elements like AI chips. This pattern contradicts claims of deregulation and shows that U.S. policy is less an absence of regulation than a strategic relocation of it. True global AI governance depends on acknowledging that governments already regulate AI through export controls, trade policy, and national security measures.