A great initiative from AI pioneer and safety advocate Yoshua Bengio, whom I had the pleasure of meeting at École Polytechnique in January: the launch of LawZero, a non-profit dedicated to developing solutions for “safe-by-design” AI systems.
At a time when many are lured by Big Tech’s El Dorado, it’s heartening to see a thought leader of Bengio’s stature commit to public-interest work. LawZero aims to serve as a kind of AI compliance layer, a tool to help ensure that other AI systems behave lawfully and ethically.
Of course, the root challenge remains human behavior. Since AI learns from us, it inevitably reflects both our strengths and our flaws. As AI becomes more capable and autonomous, its ability to replicate or amplify behaviors such as deception, blackmail, or manipulation presents increasingly severe risks.
Some cynics might point to OpenAI’s own trajectory: it, too, began as a non-profit with lofty ideals, only to pivot toward commercial dominance and cut corners on AI safety.
Bengio, however, brings credibility to the AI safety cause. With global influence, lessons learned from OpenAI, and a focus on safety, LawZero has the potential to become a cornerstone institution.
The name “LawZero” refers to Asimov’s late addition to his three laws of robotics, which states that “a robot cannot cause harm to mankind or, by inaction, allow mankind to come to harm.” Living up to that law requires ensuring ecosystem-level safety, a far messier, multidimensional challenge. A multi-layered, system-of-systems approach will likely be needed: international cooperation through new institutions, leadership from large corporations beyond tech as well as from citizens, and a reckoning with the societal ripple effects of these technologies.
#AI #AISafety #Leadership #Innovation

Hail LawZero!