AI Risks

Evaluating Extreme Risks in Artificial Intelligence: A Framework for Responsible Development

As AI systems become more powerful, a new framework is needed to identify novel threats from general-purpose models with strong capabilities in manipulation, deception, cyber-offense, or other dangerous domains. Model evaluation is not a panacea, however: it must be combined with other risk assessment tools and a broader commitment to safety across industry, government, and civil society.

Experts warn of industry capture as AI regulation is left to tech giants

At a recent Senate hearing on AI, industry representatives agreed that new AI technologies should be regulated, but some experts fear this could lead to industry capture and harm smaller firms. Critics stress the potential threat to competition, arguing that regulation tends to favour incumbents and can stifle innovation. Some suggest licensing could be effective, while others are wary of it becoming a superficial box-ticking exercise. The focus on hypothetical future harms has become a common rhetorical sleight of hand among AI industry figures, whereas the EU's forthcoming AI Act sets out clear prohibitions on known, currently harmful AI use cases.