EA - AI Safety Newsletter #3: AI policy proposals and a new challenger approaches by Oliver Z
The Nonlinear Library: EA Forum - A podcast by The Nonlinear Fund
Welcome to The Nonlinear Library, where we use text-to-speech software to convert the best writing from the rationalist and EA communities into audio. This is: AI Safety Newsletter #3: AI policy proposals and a new challenger approaches, published by Oliver Z on April 25, 2023, on the Effective Altruism Forum.

Welcome to the AI Safety Newsletter by the Center for AI Safety. We discuss developments in AI and AI safety. No technical background required. Subscribe here to receive future versions.

Policy Proposals for AI Safety

Critical industries rely on the government to protect consumer safety. The FAA approves new airplane designs, the FDA tests new drugs, and the SEC and CFPB regulate risky financial instruments. Currently, there is no analogous set of regulations for AI safety. This could soon change: President Biden and members of Congress have recently been vocal about the risks of artificial intelligence and the need for policy solutions.

From guiding principles to enforceable laws. Previous work on AI policy, such as the White House Blueprint for an AI Bill of Rights and the NIST AI Risk Management Framework, has articulated guiding principles like interpretability, robustness, and privacy. But these recommendations are not enforceable – AI developers can simply choose to ignore them.

A solution with more teeth could be on its way. Axios reports that Senator Chuck Schumer has been circulating a draft framework for AI governance among experts over the last several weeks. To help inform policymaking efforts, the Department of Commerce has issued a request for comments on how to effectively regulate AI.

The European Union debates narrow vs. general AI regulation. In Europe, policy conversations are centering on the EU AI Act. The Act focuses on eight "high-risk" applications of AI, including hiring, biometrics, and criminal justice.
But the rise of general-purpose AI systems like ChatGPT calls into question the wisdom of regulating only a handful of specific applications. An open letter signed by over 50 AI experts, including CAIS's director, argues that the Act should also govern general-purpose AI systems, holding AI developers liable for harm caused by their systems. Several members from all political blocs of the EU Parliament have publicly agreed that rules are necessary for "powerful General Purpose AI systems that can be easily adapted to a multitude of purposes."

Specific policy proposals for AI safety. With politicians promising that AI regulation is coming, the key question is which proposals they will choose to carry forward into law. Here is a brief compilation of several recent sets of policy proposals:

Create an AI regulatory body. A national agency focused on AI could set and enforce standards, monitor the development of powerful new models, investigate AI failures, and publish information about how to develop AI safely.

Clarify legal liability for AI harm. When ChatGPT falsely accused a law professor of sexual harassment, legal scholars argued that OpenAI should face legal liability for libel and defamatory statements made by its models. Others propose that AI developers should be strictly liable for harm caused by AI, but questions remain about where to draw the line between an unsafe product and deliberate misuse.

Compute governance. AI regulations could be automatically enforced by software built into the cutting-edge computer chips used to train AI systems.

Nuclear command and control. Despite persistent problems with the security and reliability of AI systems, some military analysts advocate using AI in the process of launching nuclear weapons. A simple proposal: don't give AI influence over nuclear command and control.

Fund safety research. Organizations promoting work on AI safety, such as NIST and NSF, could use more funding from federal sources.

China proposes many AI regulations. Last week, China released its own set of AI regulations that go much further than current Western efforts. Under ...
