Australia must do more to shape its artificial intelligence future. The release of DeepSeek is a stark reminder that if Australia does not invest in its own AI solutions, it will remain reliant on foreign technology—technology that may not align with its values and often carries the imprints of its country of origin.
This reliance means that Australian user data and the economic benefits derived from it will continue to flow offshore, subject to foreign legal jurisdictions and foreign corporate priorities.
When people engage with AI chatbot services such as ChatGPT, Gemini, Copilot or DeepSeek, whether through web interfaces, mobile apps or application programming interfaces (APIs), they are not only receiving AI-generated responses but also sharing their data with those services. The market entry of DeepSeek, which stores its data in China and moderates its responses to align with Chinese Communist Party narratives, raises two critical concerns: the exploitation of data for foreign interests and the ability of AI-generated content to shape public discourse.
AI platforms not based in Australia operate under the legal frameworks of their home countries. In the case of DeepSeek, this means compliance with China's national intelligence laws, which require firms to provide data to the government on request. User inputs (text, audio and uploaded files) and user information (registration details, unique device identifiers, IP addresses and even behavioural data such as keystroke patterns) could all be accessed by Chinese authorities. The flow of Australian data into China's data ecosystem poses a long-term risk that should not be overlooked.
While individual data points may seem insignificant on their own, in aggregate they provide valuable insights that could be leveraged in ways contrary to Australian interests. As a 2024 ASPI report found, the CCP seeks to harvest user data from globally popular Chinese apps, games and online platforms to 'gauge the pulse of public opinion', gain insight into societal trends and preferences, and thereby improve its propaganda.
Chatbots may prove even more powerful in this regard: they can aggregate user data to gauge audience sentiment in particular countries while also serving as tools of influence in those same countries. AI models are shaped by the priorities of their developers, the datasets they are trained on, and the fine-tuning processes that refine their outputs. This means AI does not just provide information; it can be trained to reinforce particular narratives while omitting others.
Many chatbots include a safety layer to filter harmful content such as instructions for making drugs or weapons. In the case of DeepSeek, this moderation extends to political censorship. The model refuses to discuss politically sensitive topics such as the 1989 Tiananmen Square protests and aligns with official CCP positions on issues such as Taiwan and territorial disputes in the South China Sea. AI-generated narratives influence public perception, which can pose risks to the democratic process and social cohesion, especially as these tools become increasingly embedded in search engines, education and customer service.
Australia's response should centre on putting the right safeguards in place to mitigate known risks. It needs to ensure that AI systems used in the country reflect its values, security interests and regulatory standards. That demands Australia play an active role in AI development and implement regulatory frameworks that protect against harms while fostering domestic innovation.
DeepSeek challenges the idea that only tech giants with massive resources can develop competitive AI models. With a team of just 300, DeepSeek reportedly developed its model for less than US$6 million, far less than the US$40 million training cost of OpenAI's GPT-4 or the US$22 million cost of training Mistral AI's Mistral Large. While some experts argue this figure may not reflect the full cost (including potential access to restricted advanced processors before US export controls took effect), the broader lesson is clear: significant AI advances are possible without vast financial backing.
DeepSeek has shown that having talent matters more than having the resources of a tech giant, highlighting an opportunity for Australia to participate meaningfully in AI development.
To harness its potential, Australia must foster an environment that nurtures homegrown talent and innovation. The National Reconstruction Fund's announcement last week of a $32 million investment in Australian AI healthtech firm Harrison.ai is a step in the right direction, but investment in a single company is not enough.
Australia needs to increase investment in education and research, strengthen existing developer communities (particularly open-source initiatives), support commercialisation efforts and promote success stories to build momentum. A well-supported AI sector would allow Australia to harness the benefits of AI without attempting to match the spending power of global tech giants. The focus should be on fostering an environment where AI talent can thrive and ethical AI can flourish, ensuring that Australia reaps both the economic and societal benefits.
Without strategic investment in domestic AI capabilities, Australia risks ceding influence over critical technologies that will shape its economy, security and society in the years ahead. The challenge is not just technological—it is strategic. Without decisive action, Australia will remain a passive consumer of AI technologies shaped by foreign priorities and foreign commercial interests, with long-term consequences for democratic integrity, economic security and public trust in AI-driven systems.
Meeting this challenge requires more than just regulatory safeguards; it demands sustained support for a strong domestic tech ecosystem.