Australia and the United States have been close allies for over a century and have similar views about how to develop, deploy, and govern artificial intelligence.
Researchers in the two countries often collaborate in AI development. Policymakers in the two countries express similar concerns about the risks and rewards of the technology and frequently partner to reduce AI risks. Both the US and Australia view AI as a critical technology, essential to both national security and economic progress.
Finally, officials in both democracies are concerned that China, an authoritarian nation, could obtain a comparative advantage in AI, posing a risk to both national security and global stability.
But the two nations are gradually diverging on the issue of AI sovereignty.
Australia has significant AI expertise in fields such as computer vision, deep learning, field robotics, neural networks and machine learning. The country is home to many AI start-ups and experienced AI firms.
But it has been unable to create a globally competitive AI sector — and has not created AI models that can serve as the foundation for multiple analytical uses. Such foundation models are trained on large amounts of data — generally using self-supervision at scale — and are then adapted (e.g., fine-tuned) to a wide range of downstream tasks.
Because the country has not created such models, some argue that Australia is dependent on the US and other nations for them, and that this dependence poses a national security risk for Australia. This position is understandable, but efforts to address this concern may lead to unanticipated outcomes for both Australia and the world.
Some Australians have concluded that Australia must create its own sovereign version of a foundational AI model. For example, Professor Elanor Huntington, CSIRO’s Digital, National Facilities & Collections executive director, asserts that foreign models present security and reliability risks for Australian users, stating “It may also result in tools that aren’t culturally appropriate in an Australian context or that don’t realise the benefits for our workers that we want to see.”
Fundamentally, she is arguing that if Australia is not in control of these widely used models and AI is not sovereign, Australia will be less able to control the use of AI within its borders.
Many do not want the bulk of AI used by Australian individuals and institutions to be "made in America" or in any other foreign country, a concern echoed by Australian defence officials.
The Australian Government Defence Data Strategy found that AI will be critical in delivering strategic objectives and maintaining a capable, agile, and potent defence force. Hence, Chief Defence Scientist Professor Tanya Monro notes that "a sovereign capability in AI ensures Australia's stake in a key 21st-century industry and underpins the Defence organisation's future operational and training capabilities."
Moreover, last year, a team of AI advisors and computer scientists wrote a report for the government, which noted that "the concentration of generative AI resources within a small number of large multinational and primarily US-based technology companies poses potential risks to Australia."
They also stressed “the US’s CHIPS Act and parallel EU measures aim to ensure ongoing onshore computational capabilities for future AI-driven industries, with a focus on infrastructure and semiconductor design and fabrication. Initiatives such as the proposed US National Artificial Intelligence Research Resource aim to shape markets and direct innovation and competition policies towards a domestic AI innovation system more closely aligned to national interests.”
The report concludes, “for smaller countries and markets like Australia, this competition could present challenges for access and capability, as well as the suitability of models for our context and needs”.
While it is understandable that some in Australia want policymakers to create sovereign generative AI, it will be expensive, difficult, and risky to adopt this strategy. To succeed in AI, a company, government or research institution must have four components:
- Data (often scraped from the web and supplemented with proprietary and public data)
- Large sums of capital
- AI expertise
- Fast computing infrastructure
Australia lacks economies of scale and scope in data. Moreover, taxpayers may be reluctant to provide the extremely large sums of capital and the very fast computing infrastructure required to produce world-class foundational AI.
For example, US government investment in AI R&D grew from US$2.4 billion in 2021 to an estimated US$3.1 billion in 2024. While we know that Chinese government investment has grown dramatically, figures are unreliable. The EU has provided roughly one billion euros in funding each year for AI capacity building since 2018.
In March 2024, the Saudi government announced that it would use some US$40 billion of its US$900 billion sovereign wealth fund, the Public Investment Fund, to invest in AI at home and abroad. The Economist reported that in 2023, Britain, France, Germany, India, Saudi Arabia and the United Arab Emirates (UAE) promised to bankroll AI to the collective tune of around US$40 billion for that year alone.
Competition among nations could benefit Australia, as it could stimulate greater innovation in areas where Australians already excel, such as machine learning.
However, even the wealthiest countries can’t keep up with the biggest spenders — the Saudis, the Chinese and the Americans. As an example, Germany plans to almost double its public funding for AI research to close to a billion euros over the next two years, as it attempts to close a skills gap with sector leaders — China and the US. But the US spent the same amount that Germany now plans to spend across two years on AI research in 2022 alone.
AI is not only extraordinarily expensive; it is highly competitive, as revealed by online leaderboards. Several of the leading AI models are funded by governments including Falcon (UAE) and Bloom (France). Singapore is also developing a foundational model based on Southeast Asian language data.
But these governments not only fund the development of the model but also the infrastructure and compilation of the datasets. Here again, the expenses are huge. Moreover, governments may need to provide long-term support to private firms producing AI, just as they do for private firms producing steel.
Most companies make money using the "freemium" model (a basic version of the product is free to users, with revenue drawn from advertising and paid premium tiers) or the subscription model (licensing to businesses and/or individuals, with the AI model made fit for purpose). It is unclear whether either business model is sustainable. And if it is not, taxpayers may end up directly or indirectly propping up these firms.
France, Singapore, and the UAE have decided that they must have sovereign AI capacity for economic reasons, national defence reasons, or both. Taken collectively, however, these decisions have important implications: they raise the risk of AI overcapacity, and AI could become "commodified." Moreover, as nations seek to sustain domestic AI competitiveness and market share, some might "dump" excess capacity, making it easier for criminal elements or rogue agents to acquire AI.
Generative AI foundation models are seen as the most important form of AI, as noted above. But there is an opportunity cost to investing in these models, which could yet be supplanted by other techniques or technologies.
In summary, while it is understandable to want to create foundational AI models made in Australia for national security reasons, such a decision may have surprising and undesirable long-term implications.
Australia should think long and hard before overinvesting in the current iteration of AI. Instead, Australia could develop a strategy that encourages trustworthy, multi-sectoral data sharing to unlock the potential of data to solve wicked problems. Australia could also consider strengthening the incentives for individuals to work, live, and study in Australia.