Just posted to my Forbes column about how China is rethinking the laws of robotics—and what that says about the future of AI and humanoid robots globally.
The 2004 film I, Robot introduced a mass audience to Isaac Asimov’s three elegant laws of robotics: don’t harm humans, obey humans, and protect yourself, in that order. China, however, is moving in a very different direction. Instead of three high-level principles, it’s proposing a detailed, state-centric framework of 24 rules governing AI systems and robots, from content controls and bans on emotional manipulation to audits, algorithm registration, and mandatory human takeover in crisis scenarios. It’s less a philosophy of robotics and more a full-stack regulatory architecture.
What’s striking is the overlap with emerging U.S. and European thinking on safety, transparency, and human control, and the sharp divergence on ideology and state oversight. Preventing harm, protecting children, disclosing AI identity, and keeping humans in the loop appear to be hardening into global norms. But China goes much further, requiring alignment with state-approved values, government-supervised testing environments, and extensive compliance mechanisms that will add cost and complexity to building intelligent systems.
As humanoid robots inch closer to everyday deployment, these rules hint at a future where “robot laws” are no longer abstract thought experiments but enforceable policy. The big open questions are how other countries respond, how much global alignment is possible, and what happens when robots are built not for homes or hospitals—but for war.