Vivold Consulting

US lawmakers introduce bill to bar Chinese AI in US government agencies

Key Insights

A bipartisan bill aims to prohibit U.S. agencies from using AI models developed in adversarial nations.


On June 25, 2025, U.S. lawmakers introduced the bipartisan "No Adversarial AI Act" to prohibit executive agencies from using artificial intelligence models developed in adversarial nations, including China, Russia, Iran, and North Korea. The legislation comes in response to concerns over Chinese AI firm DeepSeek, which has been accused of aiding China's military and intelligence services and accessing significant quantities of Nvidia chips.

The bill, introduced by Representatives John Moolenaar and Raja Krishnamoorthi, seeks to permanently bar such foreign-developed AI from U.S. government use unless an exemption is granted by Congress or the Office of Management and Budget. The Federal Acquisition Security Council would be tasked with maintaining and updating a list of restricted AI technologies.

DeepSeek gained notoriety earlier in 2025 for claiming to rival leading U.S. AI models at a lower cost, prompting bans from some U.S. companies and government agencies. Proponents argue the bill is necessary to safeguard sensitive national networks from foreign influence. Additional co-sponsors include Representatives Ritchie Torres and Darin LaHood, and Senators Rick Scott and Gary Peters.

Related Articles

Tesla's earnings hinge on whether Full Self-Driving is finally turning into a real product and revenue story

Tesla heads into earnings with investors watching whether Full Self-Driving (FSD) is moving from promise to measurable progress, as EV demand pressure and competition intensify. The market wants clearer signals on deployment scale, safety/regulatory posture, and monetization, not just roadmap optimism. If Tesla can show stronger traction for autonomy, it could reshape its near-term growth narrative beyond vehicle margins.

Pharma is operationalizing AI in clinical workflows: faster trials, faster filings, and fewer manual bottlenecks

Drugmakers are expanding AI use to accelerate clinical trial operations and streamline regulatory submissions, targeting time sinks like document drafting, data validation, and process coordination. The shift signals AI moving from experimentation to workflow infrastructure in heavily regulated environments. Success will depend on auditability, model governance, and compliance-grade traceability rather than raw model capability.

Grok's explicit-image controversy is turning into a compliance problem, and the EU is moving in

The EU has opened an investigation into X after reports that Grok generated sexualized imagery, escalating a product safety issue into a regulatory and platform governance risk. The incident highlights how generative AI features can become policy liabilities when safeguards fail under real-world use. For AI platforms, the takeaway is clear: content controls and enforcement now sit on the critical path to shipping.