Chinese authorities have restricted state-owned enterprises and government agencies from running OpenClaw AI applications on office computers, moving to address potential security risks as the agentic AI tool gained rapid adoption across the country. In recent days, those bodies, including China's largest banks, received notices warning against installing OpenClaw software on office devices. Several agencies were instructed to notify superiors if employees had already installed the apps, triggering security checks and possible removal. Some employees and families of military personnel were banned from installing the software on office or personal devices connected to company networks; other notices required prior approval rather than an outright ban. OpenClaw is an open-source AI agent that automates daily tasks including email processing, schedule management, and travel booking. Beijing's concern centers on the platform's requirement for unusually broad access to private data and its ability to communicate externally, which could expose government computers to attack. The restrictions contrast with enthusiasm elsewhere in China's tech sector: Shenzhen's Longgang district separately announced subsidies to support OpenClaw industry development, reflecting conflicting regional approaches to the technology, and Tencent and Zhipu have launched AI agents built on OpenClaw.
NVIDIA is developing an enterprise AI agent platform called NemoClaw, according to sources cited by WIRED and reported by The Next Web. The company has been pitching the platform to major enterprise software players including Salesforce, Cisco, Google, Adobe, and CrowdStrike ahead of CEO Jensen Huang's upcoming keynote address. NemoClaw would extend NVIDIA's dominance from AI hardware into the software layer that enterprises use to deploy AI agents. The move represents a significant strategic expansion. NVIDIA has spent years establishing itself as the essential hardware provider for AI training and inference. An enterprise agent platform would give the company a stake in how businesses actually deploy AI workers — not just the chips they run on. The timing aligns with a broader industry push toward agentic AI, where software agents perform complex, multi-step tasks autonomously. If NVIDIA can position NemoClaw as the standard platform for building and running these agents, it would create a new recurring revenue stream alongside its GPU sales. Details on pricing, availability, and technical specifications have not been disclosed publicly.
Kevin Mandia, founder of cybersecurity firm Mandiant, has raised $190 million for his new company Armadin, which builds autonomous AI agents designed to detect and respond to cybersecurity threats without human intervention. The startup develops software agents that can learn from attack patterns and respond to threats in real time, removing the bottleneck of waiting for human security analysts to triage and act on alerts. The funding round positions Armadin as one of the most well-capitalized entrants in the growing AI-for-cybersecurity space. Mandia brings significant credibility to the effort. Mandiant, which he founded in 2004, became the go-to incident response firm for major breaches and was acquired by Google in 2022 for $5.4 billion. His track record gives Armadin an immediate advantage in selling to enterprise security teams already familiar with his work. The raise comes as cybersecurity firms increasingly incorporate AI agents into their products, with the promise of reducing response times from hours to seconds. CrowdStrike, Palo Alto Networks, and others have all announced agentic security features in recent months.
A federal judge has issued an injunction blocking Perplexity's Comet browser from placing orders on Amazon through its AI shopping agents. US District Judge Maxine Chesney ruled that Amazon provided "strong evidence" that Perplexity's agentic shopping feature accessed user accounts "without authorization" from the retailer. The ruling comes after Amazon sued Perplexity in November, alleging the AI startup repeatedly ignored requests to stop letting its agents purchase products on behalf of customers. Amazon accused Perplexity of "intruding" into its marketplace and user accounts through Comet's autonomous shopping capabilities. The company had previously sent cease-and-desist letters that went unheeded. The case sets an early legal precedent for AI agents operating in e-commerce — specifically whether an AI startup can build tools that autonomously interact with another company's platform without permission. As AI agents become more capable of performing transactions, the ruling signals courts are willing to draw boundaries around unauthorized automated access to commercial platforms.
Anthropic has filed suit against the Trump administration in US District Court for the Northern District of California, challenging the government's decision to blacklist its technology across all federal agencies. The AI company says it was designated a "Supply-Chain Risk to National Security" after it refused to allow its Claude AI models to be used for autonomous lethal warfare and mass surveillance of Americans. Anthropic argues the blacklisting violates its First Amendment rights and that the Department of War had previously agreed to the same safety conditions. The White House has labeled Anthropic "radical left, woke" and is reportedly preparing additional executive orders targeting the company. WIRED reports the administration won't rule out further action. In a court filing, Anthropic told the judge it could lose billions of dollars in revenue this year and urged quick action on its request for an injunction. The company also noted that Secretary of War Pete Hegseth directed that no military contractor, supplier, or partner may conduct commercial activity with Anthropic. The case tests whether the federal government can retaliate against a private AI company for exercising safety guardrails on its own technology.
A leaked internal memo reveals that a top US Senate administrator has authorized aides to use ChatGPT, Google Gemini, and Microsoft Copilot for official Senate work, including preparing briefings for lawmakers. The authorization marks a significant institutional shift. The US Senate, one of the most security-conscious government bodies, is now formally embracing commercial AI tools for day-to-day legislative operations. Aides can use these tools to draft documents, research policy, and prepare the briefing materials that inform how senators vote. The move comes as AI adoption accelerates across the federal government, though the Senate had previously been more cautious than executive branch agencies. The leaked memo, first reported by the New York Times, suggests the practical benefits of AI tools have overcome institutional resistance. The authorization covers the three dominant commercial AI platforms — OpenAI's ChatGPT, Google's Gemini, and Microsoft's Copilot — signaling that the Senate is not picking a single vendor but allowing aides to choose the tool that fits their workflow.
Sequoia Capital partner Ravi Gupta and former Meta Chief Revenue Officer John Hegeman are raising more than $1 billion to launch a holding company that will acquire at least one existing firm and integrate new technology into its operations. The venture represents an unusual model in tech — rather than building from scratch or making passive investments, Gupta and Hegeman plan to buy established companies and upgrade them with AI and modern tech infrastructure. The holding company structure suggests they're looking at operational businesses, not startups. Gupta brings deep venture capital experience from Sequoia, one of Silicon Valley's most influential firms, while Hegeman ran Meta's massive advertising revenue machine. Together, the pair combines capital allocation expertise with hands-on experience scaling tech-driven revenue operations at the largest scale. The fundraise is being reported by Bloomberg, with sources indicating the pair is actively seeking capital commitments from institutional investors.
Anthropic has filed a lawsuit against the US government after being blacklisted from federal contracts, telling a judge the company stands to lose billions if the decision stands. The White House responded by calling the AI safety company "radical left, woke." The legal fight marks a sharp escalation in tensions between the Trump administration and one of the leading AI companies. Anthropic, the maker of Claude, has positioned itself as the safety-focused alternative to OpenAI, but that positioning now appears to be working against it in Washington. The lawsuit argues that excluding Anthropic from government AI procurement would cause irreparable financial harm to the company while limiting the government's access to competitive AI tools. The case could set a precedent for how the federal government selects AI vendors and whether political considerations can override technical merit in procurement decisions. Multiple sources confirmed the filing, with Bloomberg reporting the billions-at-stake framing and Ars Technica detailing the White House's characterization of the company.
Meta has acquired Moltbook, the Reddit-style simulated social network populated entirely by AI agents that went viral in recent weeks. The company will hire Moltbook creator Matt Schlicht and his business partner Ben Parr to work within Meta Superintelligence Labs. Financial terms were not disclosed. A Meta spokesperson cited the founders' "approach to connecting agents through an always-on directory" as a key draw, calling it "a novel step in a rapidly developing space." The company said it looks forward to "bringing innovative, secure agentic experiences to everyone." Moltbook gained attention as an experiment in autonomous AI interaction — a platform where AI agents could post, comment, and form communities without human participants. The concept tapped into growing interest in agentic AI systems that operate independently rather than responding to direct user prompts. The acquisition aligns with Meta's broader push into AI agents through its Superintelligence Labs division. As the agentic AI space heats up, companies are racing to build infrastructure for agent-to-agent communication and coordination. The deal is an acqui-hire, bringing Moltbook's talent and technology under Meta's roof as the company explores how social networking concepts might translate to AI-native environments.
Oracle reported fiscal Q3 revenue of $17.19 billion, up 22% year-over-year and ahead of the $16.91 billion analyst consensus. Cloud revenue led the results, climbing 44% to $8.9 billion and narrowly beating the $8.85 billion estimate. Shares of Oracle jumped more than 8% in after-hours trading following the report. The results reflect Oracle's ongoing transformation from a legacy database vendor into a cloud infrastructure competitor. The company has been aggressively expanding its cloud capacity to meet demand from enterprises deploying AI workloads, and its Oracle Cloud Infrastructure (OCI) business has emerged as a credible alternative to AWS, Azure, and Google Cloud for AI training and inference. Oracle's cloud growth rate now outpaces its larger rivals, though from a smaller base. The company's remaining performance obligations — a measure of contracted future revenue — have been a closely watched metric as investors gauge the durability of its cloud transition. The earnings beat comes at a time when enterprise cloud spending remains strong despite broader economic uncertainty, with AI-related infrastructure investment continuing to drive demand across the sector.
Advanced Machine Intelligence (AMI), the Paris-based AI startup co-founded by Turing Award winner Yann LeCun, has closed a $1.03 billion seed round at a $3.5 billion valuation. The round, led by Bezos Expeditions, Cathay Innovation, Greycroft, Hiro Capital, and HV Capital, is the largest seed investment in European startup history. AMI is building what it calls "world models" — AI systems designed to understand and interact with three-dimensional physical reality, a departure from the large language models that dominate today's AI landscape. "Generative architectures trained by self-supervised learning mimic intelligence; they don't genuinely understand the world," said CEO Alexandre LeBrun. The company argues that industries like manufacturing, healthcare, and robotics need AI that grasps continuous, noisy, high-dimensional reality beyond tokenized text. AMI has announced its first partnership with Nabla, a healthcare AI startup. The raise follows a similar $1 billion round by Fei-Fei Li's World Labs, signaling a broader investor shift toward physical-world AI applications. The funding underscores Europe's growing ambition to compete in frontier AI research, with France emerging as a hub for next-generation AI companies challenging U.S. dominance.
The U.S. State Department has migrated its internal chatbot from Anthropic's Claude Sonnet 4.5 to OpenAI's GPT-4.1, following a Trump administration directive to cancel federal contracts with Anthropic. The switch, first reported by Nextgov/FCW through internal documents, marks a significant shift in government AI procurement. The move comes amid a broader DOD supply chain risk designation targeting Anthropic, which Microsoft has challenged by filing an amicus brief advocating for a temporary restraining order to block the designation. Microsoft's intervention signals that major tech companies view the government's approach to AI vendor restrictions as potentially harmful to competition and national security readiness. The Anthropic situation reflects the growing entanglement of AI policy with political considerations. Federal agencies that adopted Anthropic's models now face pressure to switch providers regardless of technical merit or integration costs. For the broader AI industry, the episode raises questions about vendor lock-in risks when government contracts can be redirected by executive action rather than competitive evaluation.