The Impact of Decentralizing AI on the U.S.-China Arms Race

One of the central debates in AI safety concerns the relative risks of centralizing versus decentralizing AI development. Both approaches carry significant risks, and navigating between them is fundamentally an exercise in trade-offs. Privatized, centralized AI may concentrate power in the hands of a few, which is dangerous in itself, but it also makes accountability mechanisms easier to implement. Conversely, open-source, decentralized AI may broaden innovation and access, but it also increases the likelihood of misuse by bad actors who sit beyond the reach of any governance regime.

This dynamic is both unresolved and intensifying. Reuters recently reviewed a collection of Chinese AI research papers revealing that China has been leveraging open-source large language models (LLMs) developed in the West for military purposes (Reuters, 2024). Specifically, the papers document systematic efforts to study, fine-tune, and deploy Meta’s Llama models for domestic policing, mass surveillance, and military intelligence.

This development is particularly ironic given that a key justification for rapid U.S. investment in AI has been the “arms race” narrative with China. Companies focused on advancing AI capabilities routinely argue that staying ahead of adversaries like China is critical. Yet by open-sourcing tools like Llama, Western developers are handing those same adversaries unrestricted access to powerful models. And while applying these models to military purposes violates Meta’s terms of use, enforcing such restrictions is virtually impossible once the weights are openly available.
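To see why this enforcement gap is structural rather than a matter of willpower, consider a minimal sketch of fine-tuning locally held open weights. Everything below is illustrative and assumed: the local path ./llama-weights is hypothetical, and the example uses the widely available Hugging Face transformers and peft libraries, not any pipeline described in the Reuters report. The point is simply that once the weights are on disk, the entire loading and training path runs offline, with no step at which a license term could be checked.

```python
# Illustrative sketch only. Assumes the Hugging Face `transformers` and `peft`
# libraries and a hypothetical local copy of open weights at ./llama-weights.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Loading happens entirely offline: no license check, no usage declaration.
model = AutoModelForCausalLM.from_pretrained("./llama-weights")
tokenizer = AutoTokenizer.from_pretrained("./llama-weights")

# Attach lightweight LoRA adapters; the training objective is whatever the
# operator chooses, and nothing in the stack can inspect or restrict it.
model = get_peft_model(
    model,
    LoraConfig(task_type="CAUSAL_LM", r=8, lora_alpha=16,
               target_modules=["q_proj", "v_proj"]),
)

# One illustrative gradient step on arbitrary text; a real run would loop
# over a custom dataset of the operator's choosing.
batch = tokenizer("any domain-specific training text", return_tensors="pt")
loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()
```

Any restriction on use therefore has to operate before release, at the point of distribution; nothing downstream can see, let alone police, what the model is trained to do.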

The Reuters report describes how Chinese military scientists have used these models to enhance military intelligence and automate decision-making. The models have also been explored for domestic policing applications, including analyzing vast datasets to increase surveillance efficiency. China is already known for using AI in mass surveillance, particularly to suppress political dissent and silence opposition. Open-sourcing these technologies appears to have inadvertently strengthened that capacity, bolstering authoritarian practices and further eroding free speech within its borders.

This alarming side effect of open-sourcing AI highlights the critical need for a more robust governance framework around AI development and deployment. Releasing such powerful tools with little regard for their potential misuse reflects a lack of foresight on the part of some developers, and the consequences are potentially profound, escalating geopolitical tensions and enabling human rights abuses.