Global Moves on AI Safety: New Institutes, Investments, and Industry Commitments
The latest AI news, key takeaways, and jobs.
Welcome to Edition 1! If you have a relevant job you’d like to share, let us know.
AI Policy and Governance
Biden-Harris Administration Announces First-Ever Consortium Dedicated to AI Safety
On February 7, 2024, US Secretary of Commerce Gina Raimondo announced key members of the executive leadership team for the U.S. AI Safety Institute (USAISI), which will be established at the National Institute of Standards and Technology (NIST). The Biden-Harris administration also announced the creation of the first-ever consortium dedicated to AI safety. The consortium will include more than 200 members from industry, academia, and government, and will work to develop guidelines for the safe use of AI, including red-teaming guidelines, capability evaluations, and risk management practices. (NIST)
Britain to invest 100 million pounds in AI research and regulation
The British government is investing £90 million in nine new research hubs focused on applying AI in areas such as healthcare, chemistry, and mathematics. A further £10 million will help regulators address the risks and opportunities of AI. Britain is also partnering with the United States on responsible AI. (The Economic Times)
AI developers to begin sharing safety test results with US government
The US government is requiring AI developers to share safety test results, while also investing in AI innovation and working to attract AI experts. The White House calls this the “most significant” action taken on AI by any government. The goal is to ensure AI systems are safe before they are released to the public. (Global Development Forum)
Sam Altman Seeks Trillions of Dollars to Reshape Business of Chips and AI
OpenAI CEO Sam Altman is seeking a staggering $5 trillion to $7 trillion in investment to boost global chip production and accelerate AI development, aiming to overcome limitations faced by OpenAI and potentially reshape the entire semiconductor industry. The ambition faces significant hurdles given the immense sums involved, which exceed the current size of the chip industry itself. (WSJ)
Safer skies with self-flying helicopters
Autonomous helicopters made by Rotor Technologies, a startup led by MIT PhDs, take the human out of risky commercial missions. Traditional helicopter operations can be dangerous, and removing the pilot from the cockpit both eliminates pilot error and keeps people out of harm’s way. Rotor’s self-flying helicopters can carry heavy payloads and travel long distances, and the company hopes to use them for new applications, such as scientific missions. (MIT News)
Meta Will Crack Down on AI-Generated Fakes—but Leave Plenty Undetected
Meta will soon start labelling deepfake or AI-generated images posted on its Facebook, Instagram, and Threads platforms as “Imagined with AI” to distinguish them from human-generated content, the social media conglomerate’s president for global affairs said. The move is likely to put pressure on Meta’s peers in the social media and internet space to develop their own tools to fight deepfakes on their platforms. By labelling content generated with AI tools, especially those offered by Meta, the company hopes to give users more information about the content they are viewing and sharing. (The Economic Times)
AI safeguards can easily be broken, UK Safety Institute finds
The UK’s new artificial intelligence safety body has found that the technology can deceive human users and produce biased outcomes, and that its safeguards against giving out harmful information are inadequate. The AI Safety Institute published initial findings from its research into advanced AI systems known as large language models (LLMs), which underpin tools such as chatbots and image generators, and identified a number of concerns. (The Guardian)
Top AI Companies Join Government Effort to Set Safety Standards
Top AI companies are joining a government effort to create safety standards for AI. The consortium will include industry leaders, civil society groups, and academics, who will work together to establish AI safety standards, including measures to prevent misinformation and privacy violations. (Time)
Job Board
Superalignment Fast Grants, OpenAI, Remote, Global
Research Fellow, Global AI Workforce, Georgetown University, Center for Security and Emerging Technology, USA
Representative for the AI Safety Summit, Future of Life Institute, Paris, France
Research Engineer, Model Evaluations, Anthropic, Multiple Locations
Societal Impacts Strategy and Delivery Adviser, UK Government, Department for Science, Innovation and Technology, London, UK
Operations Leadership, Constellation, San Francisco Bay Area
Research Fellowship, Law & AI (Summer 2024), Legal Priorities Project, Remote, Global
Programme Management Officer, United Nations, New York, NY
Principal Data Scientist, Responsible AI, Microsoft, Barcelona, Spain
Frontend Engineer, Elicit, Remote, Global
Sales and Audit, Compliance, Anthropic, San Francisco Bay Area / New York, NY / Seattle Metro Area / London, UK
ML Research Engineer, Machine Intelligence Research Institute, San Francisco Bay Area
Chief Operations Officer, SaferAI, Remote, Global
AI Safety Research Manager, Existential Risk Alliance, Cambridge, UK
Technical Programs Lead / Technical Programs Director, Constellation, San Francisco Bay Area
Application Security Engineer, AI Security, Amazon Stores, Amazon, Various, USA
Compliance Engineer, OpenAI, San Francisco Bay Area
Advanced Computing Allocations to Advance AI Research and Education, National Artificial Intelligence Research Resource, USA
Technical Standardization Lead, SaferAI, Remote, Global
Subscribe, and please share Techplomacy Magazine with your friends and colleagues in your network.

