Artificial intelligence systems and data centres have become an integral part of modern-day society. A KPMG survey on AI use found that 66% of respondents use AI for work or personal reasons; of these, 38% use AI daily or weekly and 28% use it semi-regularly. These results suggest that a majority of respondents rely on AI to carry out day-to-day tasks, whether for work, study, or personal reasons. Moreover, reliance on AI now extends to governments, global financial systems, and states, which use AI systems to improve the efficiency and speed of the services they provide. AI has become woven into the fabric of global society.

 

Now imagine that one day all AI systems and programs cease to function. While the chances of such an event are low, it is not impossible, and the consequences of over-reliance on AI systems could be devastating. A global AI shutdown would affect both the global economy and global geopolitics, potentially wiping trillions from stock markets and triggering national security crises across the globe.

Why Can AI Systems Fail?

AI systems are increasingly interconnected globally through cross-border data flows, which governments and businesses need in order to collaborate internationally and "…enhance global connectivity". This interconnection can be beneficial, allowing countries to collaborate across sectors such as healthcare, finance, and sustainability. AI is a technological marvel that can be integrated into every facet of society; it is not, however, infallible.

 

One reason an AI system can fail is improper data handling, which leads to faulty outputs from AI systems and machine-learning models. According to Rami Al Naib (Head of Data and AI at Univio) and Pawel Koziolek (Marketing Partner at Univio), "…85% of AI projects fail…with poor data quality being the main culprit… and using flawed, incomplete, or biased datasets leads to unreliable outputs". Feeding a system faulty or incomplete data can cause it to malfunction, which in turn can trigger wider disruptions.
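The failure modes quoted above (flawed, incomplete, or biased datasets) can be screened for before a dataset ever reaches a model. The sketch below is a minimal illustration of that idea; the field names, thresholds, and warning messages are assumptions for the example, not drawn from the cited source.

```python
def audit_dataset(records, label_key="label", max_missing=0.05):
    """Return a list of human-readable data-quality warnings.

    Checks the three problems named in the article: incomplete rows,
    duplicated rows, and a heavily skewed label distribution.
    Thresholds here are illustrative assumptions.
    """
    total = len(records)
    if total == 0:
        return ["dataset is empty"]
    warnings = []

    # Incomplete data: rows where any field is None.
    missing = sum(1 for r in records if any(v is None for v in r.values()))
    if missing / total > max_missing:
        warnings.append(f"{missing}/{total} rows have missing fields")

    # Duplicates inflate the model's confidence in repeated examples.
    seen, dupes = set(), 0
    for r in records:
        key = tuple(sorted(r.items(), key=lambda kv: kv[0]))
        if key in seen:
            dupes += 1
        seen.add(key)
    if dupes:
        warnings.append(f"{dupes} duplicate rows")

    # Biased data: one label dominating the dataset.
    counts = {}
    for r in records:
        counts[r.get(label_key)] = counts.get(r.get(label_key), 0) + 1
    if max(counts.values()) / total > 0.9:
        warnings.append("label distribution is heavily skewed")

    return warnings
```

A real pipeline would use a dedicated validation library, but even a simple gate like this catches the "flawed, incomplete, or biased" inputs before they produce unreliable outputs downstream.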

 

Moreover, AI systems can fail due to what can be described as AI accidents. These occur when sudden or unfamiliar inputs disrupt the way a system functions. Such accidents can stem from failures of robustness, specification, or assurance, and when AI is integrated into vital systems they compound risk factors such as competitive pressure, system complexity, untrained operators, and failures arising from multiple simultaneous uses.
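One common robustness safeguard against the "unfamiliar input" accidents described above is to refuse to act on inputs that fall far outside the range seen during training. The sketch below illustrates that idea with a simple z-score check; the threshold and single-feature layout are illustrative assumptions, not a description of any deployed system.

```python
import statistics

class RangeGuard:
    """Flag inputs far outside the training distribution.

    A minimal out-of-distribution check: an input is 'familiar'
    only if it lies within z_limit standard deviations of the
    mean of the training values. The z_limit default is an
    illustrative assumption.
    """

    def __init__(self, training_values, z_limit=3.0):
        self.mean = statistics.fmean(training_values)
        self.stdev = statistics.stdev(training_values)
        self.z_limit = z_limit

    def is_familiar(self, value):
        if self.stdev == 0:
            return value == self.mean
        return abs(value - self.mean) / self.stdev <= self.z_limit
```

In practice a system would escalate unfamiliar inputs to a human or a fallback rather than simply rejecting them, but the principle is the same: detect the input the model was never prepared for before it drives a decision.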

What's the Worst that Could Happen?

An AI system failure or shutdown could have devastating consequences for the many institutions that rely on AI to execute essential functions. These institutions use AI to eliminate redundancies and increase efficiency; however, they may not have adequate means to mitigate the impact of a sudden shutdown. A global AI system failure could therefore send shocks through a range of institutions, particularly in the banking and finance sector as well as Big Tech.

 

An AI system shutdown would have implications for the global economy, as numerous economies and industries are moving toward digitization and rely on AI systems to carry out vital functions efficiently. If those systems failed, the resulting disruptions across economic sectors could destabilize modern economies built on AI technologies. For example, banks and financial institutions use AI in a variety of ways that contribute to the efficient provision of their services, including fraud detection, customer interaction, wealth management, and payments. In the event of an AI system failure, banks and their clients could become vulnerable to fraudulent activity, and clients could be left unable to make essential transactions, potentially leading to the collapse of financial sectors.

 

Beyond banking and finance, Big Tech companies such as Amazon, Microsoft, Meta, and Apple use AI programs to automate their services or provide an enhanced user experience. This is primarily done through cloud services, enhanced algorithms that cater to users' needs, content moderation, and (in Amazon's case) automated supply chain management. The integration of AI into such products has led these companies to invest heavily in AI and automation. By August 2025, Big Tech companies had spent more than $155 billion on AI and automation, more than the U.S. government spent on education and social services, and that figure is expected to grow in 2026. Lisa Eadicicco of CNN reports: "Google expects to spend $91 to $93 billion in capital expenditures for 2025, up from the $85 billion… Microsoft sees spending up 74%, to $34.9 billion this year, largely lining up with the more than $30 billion it predicted for the quarter. Meta spent $19.37 billion, up from $9.2 billion a year ago and more than the $18.4 billion analysts expected".

 

This indicates that AI and automation are an integral part of Big Tech's investment portfolios, with no sign of slowing down anytime soon. Disruptions from an AI shutdown could contribute to economic decline: over-investment in AI and automation, combined with an AI malfunction, could trigger a stock market collapse. With the AI market concentrated among a handful of Big Tech firms, AI failing to live up to its hype could produce a bubble burst similar to the dot-com and housing crashes. Already, tech companies, especially AI companies, are losing billions on ventures of uncertain viability: OpenAI has lost $5 billion while Elon Musk's xAI has lost $13 billion, which may signal that the AI bubble could pop. A bursting AI bubble could wipe trillions of dollars from the stock market.

 

Furthermore, geopolitical implications may also arise from an AI system failure. AI is becoming more integrated into geopolitics, specifically in national security and conflict. In Ukraine, for example, AI systems are used to defend against Russian attacks as well as for intelligence gathering and strategic decision-making. While AI can benefit national defence and military strategy, a system failure or shutdown can have devastating consequences. This was evident in the Gaza War, where malfunctioning Israeli AI systems used in operations contributed to increased civilian fatalities. Beyond the possibility of a shutdown affecting a conflict, there is a related concern about AI going rogue during wartime: not a shutdown but goal drift or power-seeking behaviour, which could cause an AI to act independently of its operator, with devastating effects. Overall, setting that alternative aside, an AI malfunction or shutdown can disrupt defence operations and conflict, resulting in civilian casualties.

 

While AI systems could fail or shut down in the future, there are ways society can limit the global consequences. One approach is to develop a system of AI governance. According to George Gor and Niki Iliadis of The Future Society, such a framework could include legal oversight, cross-border cooperation, operational protocols, and the monitoring of developments. With strong governance, humans can "…direct AI research, development and application to help ensure safety, fairness and respect for human rights". The most important factor in keeping AI grounded is guaranteeing that human input remains part of how AI systems function. Only by maintaining human input in the operation of AI systems can society truly guard against a global outage that would have a devastating effect on the way societies function.
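The human-input principle described above is often implemented as a human-in-the-loop gate: the automated system may act on routine decisions, but anything above a risk threshold is held for a human. The sketch below is a minimal illustration of that pattern; the threshold value, the risk score, and the approval callback are all assumptions made for the example.

```python
def execute_with_oversight(action, risk_score, human_approve, threshold=0.5):
    """Run low-risk actions automatically; escalate the rest to a human.

    action        -- zero-argument callable performing the task
    risk_score    -- 0.0 to 1.0, produced upstream (assumed here)
    human_approve -- callback asking an operator to sign off
    threshold     -- illustrative cut-off for escalation
    """
    if risk_score < threshold:
        return action()            # routine: proceed automatically
    if human_approve(risk_score):
        return action()            # operator signed off
    return None                    # blocked by the operator
```

The design choice is that the system fails closed: when the human says no, or no human is available, the high-risk action simply does not run, which is exactly the property a governance framework wants when the automated component misbehaves.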

References

Al Naib, Rami, and Pawel Koziolek. “AI Failures: Learning from Common Mistakes and Ethical Risks.” Univio.com, 2024. https://www.univio.com/blog/the-complex-world-of-ai-failures-when-artificial-intelligence-goes-terribly-wrong/

 

Arnold, Zachary, and Helen Toner. “AI Accidents: An Emerging Threat — What Could Happen and What to Do. CSET Policy Brief.” Center for Security and Emerging Technology, July 2021. https://cset.georgetown.edu/wp-content/uploads/CSET-AI-Accidents-An-Emerging-Threat.pdf

 

Bratton, Laura. “Big Tech Has to ‘Walk the Line’ with AI Spending This Earnings Season.” Yahoo Finance, October 29, 2025. https://finance.yahoo.com/news/big-tech-has-to-walk-the-line-with-ai-spending-this-earnings-season-151904142.html

 

Chlouverakis, Kostis. “How Artificial Intelligence Is Reshaping the Financial Services Industry.” EY, April 26, 2024. https://www.ey.com/en_gr/insights/financial-services/how-artificial-intelligence-is-reshaping-the-financial-services-industry

 

Cohan, Peter. “AI Bubble May Burst — Wiping out $40 Trillion from Nasdaq. Here’s What to Do.” Forbes, October 15, 2025. https://www.forbes.com/sites/petercohan/2025/10/15/ai-bubble-may-pop—wiping-out-40-trillion-learn-what-could-happen-and-what-to-do/

 

Eadicicco, Lisa. “Big Tech Keeps Splurging on AI. The Pressure Is Ramping up to Show Why.” CNN, October 31, 2025. https://edition.cnn.com/2025/10/31/tech/microsoft-amazon-meta-google-earnings-ai

 

Finio, Matthew, and Amanda Downie. “AI in Banking.” Ibm.com, May 2024. https://www.ibm.com/think/topics/ai-in-banking

 

Future for Advanced Research and Studies. “The Role of Artificial Intelligence in Modern Warfare.” Futureuae. Future for Advanced Research and Studies, May 22, 2024. https://futureuae.com/en-US/Mainpage/Item/9290/militarizing-ai-the-role-of-artificial-intelligence-in-modern-warfare

 

Gillespie, Nicole, Steve Lockey, Alexandria Macdade, Tabi Ward, and Gerard Hassed. “Trust, Attitudes and Use of Artificial Intelligence a Global Study 2025,” 2025. https://doi.org/10.26188/28822919

 

Gor, George, and Niki Iliadis. “What Is an Artificial Intelligence Crisis and What Does It Mean to Prepare for One?” The Future Society, May 26, 2025. https://thefuturesociety.org/aicrisisexplainer/

 

Hendrycks, Dan, Thomas Woodside, and Mantas Mazeika. “An Overview of Catastrophic AI Risks,” October 9, 2023. https://arxiv.org/pdf/2306.12001

 

iFOREX. “How Big Tech Companies Combined AI into Their Products in 2024.” Iforex.in, 2024. https://www.iforex.in/ai-and-trading/how-big-tech-companies-combined-ai-into-their-products

 

Jeroudi, Leith. “The Backlash against Military AI: Public Sentiment, Ethical Tensions, and the Future of Autonomous Warfare.” Trendsresearch.org, 2025. https://trendsresearch.org/insight/the-backlash-against-military-ai-public-sentiment-ethical-tensions-and-the-future-of-autonomous-warfare/?srsltid=AfmBOoqpugtrghKP9X5VBPwJK9mSFP5aY9r8nv6y5kYFIboA1BXFpjKn

 

Maguire, Adam. “Bubble Trouble: Is Too Much Being Spent on AI?” RTE.ie, November 8, 2025. urn:epic:1542794.

 

Marr, Bernard. “The Geopolitics of AI.” Forbes, September 19, 2024. https://www.forbes.com/sites/bernardmarr/2024/09/18/the-geopolitics-of-ai/

 

Marwala, Tshilidzi, Eleonore Fournier-Tombs, and Serge Stinckwich. “United Nations University Regulating Cross-Border Data Flows: Harnessing Safe Data Sharing for Global and Inclusive Artificial Intelligence,” October 2023. https://collections.unu.edu/eserv/UNU:9291/UNU-TB_3-2023_Regulating-Cross-Border-Data-Flows.pdf

 

Montgomery, Blake. “Big Tech Has Spent $155bn on AI This Year. It’s about to Spend Hundreds of Billions More.” The Guardian, August 2, 2025. https://www.theguardian.com/technology/2025/aug/02/big-tech-ai-spending

 

Mucci, Tim, and Cole Stryker. “AI Governance.” Ibm.com, October 10, 2024. https://www.ibm.com/think/topics/ai-governance

 

Sonnenfeld, Jeffrey A., and Stephen Henriques. “This Is How the AI Bubble Bursts.” Yale Insights, October 8, 2025. https://insights.som.yale.edu/insights/this-is-how-the-ai-bubble-bursts
