New Fellowship Opportunity: Introduction to Cooperative AI

We’re excited to officially announce that we will be running the Introduction to Cooperative AI fellowship in collaboration with the Cooperative AI Foundation. Cooperative AI is an emerging research field focused on improving the cooperative intelligence of advanced AI for the benefit of all. This includes both addressing the risks of advanced AI that arise from multi-agent interactions and realizing the potential of advanced AI to enhance human cooperation. Through this course, you’ll learn the fundamentals of Cooperative AI and its relevance to AI safety. See the post for more details, and apply before the 26th of March, 2025!

Learn More

Why AI Safety is Possibly the Most Important Work of Our Time

In this article, we follow the journey of a student new to AI safety who enrolled in BlueDot’s Intro to Transformative AI course. Initially skeptical yet curious, he soon discovered that AI safety isn’t merely an abstract debate: it’s real, and possibly one of the most important challenges of our time. Over five days of daily discussions led by Leo Hyams and Benjamin Sturgeon (founders of AI Safety Cape Town), he learned that AI is far more than just another tool. It’s a force set to shape humanity’s future, for better or for worse, and if we get it wrong, the consequences could be dire. The course wasn’t merely about learning; it was a call to action. For anyone new to AI safety or still questioning its importance, consider this your invitation to take the first step.

Learn More

The Impact of Decentralising AI on the US-China Arms Race

The post examines the debate over centralizing versus decentralizing AI development and the trade-offs involved. Centralized AI concentrates power but may allow for better accountability, while decentralized, open-source AI fosters broader innovation but risks misuse by bad actors. A recent Reuters report revealed that China has been using open-source AI models like Meta's Llama for military applications, including surveillance and intelligence gathering. This raises concerns about the unintended consequences of open-sourcing powerful AI technologies, as they can strengthen authoritarian regimes and exacerbate geopolitical tensions.

Learn More

November 2024 Newsletter

This edition features a recap of our co-founders' participation in the ARENA program, where they gained insights into AI safety, strengthened their coding fundamentals, and worked on vision interpretability. The experience highlighted the need for greater organizational capacity in the AI safety field. In our reading group, we discussed "GSM-Symbolic," a paper analyzing LLMs' reasoning limitations: the models struggle with irrelevant information, though the results hint at potential improvements with scale. We’re also expanding our team for community events, with upcoming gatherings in November.

Learn More

August 2024 Newsletter

Leo Hyams, co-founder of AI Safety Cape Town (AISCT), introduces the organization, which focuses on ensuring AI systems benefit humanity. AISCT believes South Africa has untapped talent for AI safety and has built a growing community, a research lab, and international partnerships. AISCT has participated in events like Deep Learning IndabaX 2024 and successfully ran a fellowship with AI Safety Sweden. Their current research evaluates large language models for traits that may harm human agency, and they host biweekly AI safety reading groups. Upcoming plans include a fellowship with the European Network for AI Safety and a Cape Town retreat for young professionals exploring AI safety careers. Readers are invited to engage with the organization and stay updated.

Learn More