EU's Breton to Discuss AI Pact with Altman Amid Threats of Quitting Europe
EU's Breton to discuss AI pact with Altman, aiming to get OpenAI to comply with AI rules ahead of 2026 enforcement; practical aspects will be discussed.
The lyrics of a song criticize Fox News for spreading propaganda, manipulating facts and causing polarization while claiming to be fair and balanced. The song calls for unbiased journalism and accountability.
Geoffrey Hinton, the "godfather of AI," warns of the dangers of artificial intelligence becoming smarter than humans and potentially taking control. He believes politicians and industry leaders need to address this issue.
OpenAI CEO Sam Altman has criticised the European Union's proposed AI Act, calling it "over-regulating". The act would categorise different AI tools based on risk and require their makers to conduct risk assessments.
As AI systems become more powerful, a new framework for evaluating extreme risks from general-purpose models that have strong skills in manipulation, deception, cyber-offense, or other dangerous capabilities is needed to identify novel threats. Model evaluation is not a panacea and must be combined with other risk assessment tools and wider dedication to safety across industry, government, and civil society.
The recent Senate hearing on AI saw industry representatives agreeing to regulate new AI technologies, but some experts fear this could lead to industry capture and harm smaller firms. Critics stress the potential threat to competition, arguing that regulation favours incumbents and can stifle innovation. Some suggest licensing could be effective, while others suspect it would become a superficial checkbox exercise. The focus on speculative future harms has become a common rhetorical sleight of hand among AI industry figures; by contrast, the EU's forthcoming AI Act contains clear prohibitions on known, current harmful AI use cases.
Israel's Defence Ministry aims to become an artificial intelligence superpower, with a dedicated organisation for military robotics and a large budget for research and development. The advancements in AI could lead to autonomous warfare and streamlined military operations.
AI systems may exceed expert skill level in most domains within 10 years, requiring special treatment and coordination to manage risks and ensure safety. Public oversight is crucial for governance of the most powerful systems. OpenAI believes in the potential benefits of AI but also recognizes the need to mitigate risks.
Generative AI systems offer unprecedented opportunities for progress, but also come with risks such as fake content and bias. IBM urges ethics and responsibility to be at the forefront of AI agendas, with smart regulation and transparency around data privacy. A blanket pause on training is not the solution.
G7 leaders call for technical standards to ensure trustworthy AI and align rules with shared democratic values. They recognize varying national approaches, but urge development of risk-based AI rules and the creation of a ministerial forum to discuss generative AI.
OpenAI warned of the dangers of its large-language-model text generator, GPT-2, and initially withheld it from public release due to concerns about malicious use, though it later released the full model.
Apple reportedly restricts the use of ChatGPT and other AI tools by employees due to concerns about confidential data leaks. The company also advises against using Microsoft-owned GitHub's Copilot. OpenAI recently released a ChatGPT app for iOS.
Class of '09, a new FX thriller streaming on Hulu from May 10, follows the careers of two FBI agents across three different timelines. The show explores the ethical implications of using AI to prevent crimes before they are committed.