Throughout the years, artificial intelligence (AI) initiatives have gone through several active and inactive cycles. A cycle in which interest and funding for AI decline and research and development go quiet is characterized as an AI Winter. This can happen for various reasons, such as the failure of high-profile AI projects, the inability of AI technologies to live up to their hype, or the emergence of new technologies that divert attention away from AI. An AI Winter can significantly impact the AI research community, as funding and resources become scarce and researchers may struggle to continue their work. After the quiet period passes, a cycle of growth and renewed interest is expected to follow. That cycle is called an AI Summer. During AI Summers, expectations are set about the future of AI, promises are made, and private and public investments flow.
Some say that the cycles described above strongly correlate with Gartner’s hype cycle, which describes how new and emerging technologies mature. Each cycle begins with optimistic claims, after which funding pours in and progress seems to happen; years later, progress stops or slows, and budgets shrink.
It is also said that we are currently in one of the most extended periods of sustained interest in AI. However, there are many open questions about how the technology will develop, how far society is willing to go with it, and whether research can keep up with AI progress. At the same time, there is inequality among regions, countries, and organizations in their ability to deploy or scale AI, and in their access to the data that AI and machine learning systems need. These organizational AI capabilities have been linked to a spin-off term known as the Enterprise AI Winter.
This article will give an overview of AI Winters through the years, together with some of the reasons behind them, and present some parallels today that could potentially trigger an Enterprise AI Winter.
Difference between AI Winter and Enterprise AI Winter
As mentioned earlier, AI Winter is usually a term used to describe a period of reduced funding and interest in artificial intelligence research and development.
Enterprise AI Winter, on the other hand, refers to a period of reduced funding and investment in the development and deployment of artificial intelligence technologies within businesses and organizations.
Overview of AI Winters Through the Years
The hype around AI innovation started in the 1950s with machine translation. Throughout the 1960s, there was quick progress, and government funding flowed, so the period is called the Golden Years of AI. By the mid-1970s, progress stagnated because many innovations proved too narrow in their applicability. The first AI Winter came. The cycle repeated itself in the 1980s when there was a rise in interest in AI systems and what we now call neural networks. Once again, there was optimism and significant increases in funding, including private financing, as more companies began to rely on computers for their business operations. But, since the big promises were never realized, funding dried up again. The second AI Winter came.
More specifically, the timeline of the AI Winters would look like this:
1954*: The first experiments in machine translation were executed, and there was optimism regarding AI research, gaining funding from the US Defense Advanced Research Projects Agency (DARPA). The field of machine translation was significant during the Cold War, as there was an interest in automatic translation from Russian to English. But, in 1964, the Automatic Language Processing Advisory Committee concluded that there was no predictable prospect of useful machine translation, which led to a cut in funding for all academic translation projects.
1956: The Dartmouth Summer Project coined the term AI. Researchers from many fields put forward papers and concepts with a straightforward goal: to capture the processes of human reasoning through the manipulation of symbolic systems (i.e., computer programs).
1957: Rosenblatt invented the perceptron, a type of neural network inspired by neuroscience work from the 1940s, which led him to create a simple replication of the neurons in the brain.
1969: Marvin Minsky and Seymour Papert published the book Perceptrons. It was a harsh critique of Rosenblatt’s perceptrons and pointed out the flaws and limitations of neural networks. This publication influenced DARPA to withdraw its previous funding for AI projects.
1973: The “Lighthill Report”, written for the British Science Research Council, was published. It was an evaluation of academic research in the field of AI, and after its publication the UK government withdrew funding for AI research at all but two universities.
1974 – 1980: the first AI Winter happened. The early AI systems couldn’t do more than simple tasks like recognizing objects or understanding simple commands. Many people lost their enthusiasm and started to think that AI would never be able to do anything more than these simple tasks. Interest in AI wouldn’t be revived until years later.
1987 – 1993: The business community’s optimism about AI rose, even though it was quickly confronted by the reality of the limitations of AI systems. This would eventually end with another AI Winter from the late 1980s to the mid-90s. Factors that contributed to the decrease in interest and funding included the oil crisis, the growing realization that the promises of AI had not been met, and the fact that the technology was not as developed as hoped.
What is an Enterprise AI Winter and Can One Be in Sight?
As mentioned, Enterprise AI Winter refers to a period of reduced funding and investment in developing and deploying artificial intelligence technologies within businesses and organizations. This can occur for various reasons, internal (which we will look into in detail shortly) and external (economic downturns or shifts in government funding priorities).
Several internal challenges of implementing AI in the enterprise can serve as indicators of an Enterprise AI Winter and point to a possible slowdown:
- Lack of Data: AI systems and models often need large amounts of data to be trained and to function effectively. Data is a critical component of AI and machine learning, and organizations that do not have access to high-quality, relevant data or cannot manage and process it effectively are likely to face challenges in realizing the full potential of AI. According to some estimates, poor data quality and lack of data governance can result in as much as 80% of an AI project’s budget being wasted. Additionally, a survey conducted by Silo.ai listed the lack of data and shared data practices as leading challenges of scaling AI.
- Lack of specialized and skilled talent: To achieve success with AI, organizations need to ensure that they have access to people with the necessary skills and expertise. This includes having data scientists and data engineers who can build and deploy machine learning models, as well as domain experts who have a deep understanding of the business problem being addressed by the AI application. Additionally, organizations need the necessary infrastructure and tools in place to support the development and deployment of AI models, as well as the expertise to use these tools effectively. With these foundations in place, it becomes easier for organizations to fully leverage the potential of AI to drive business value and generate a return on investment (ROI). According to PwC’s 2022 Analytics and AI Survey, 79% of companies globally are already slowing down some AI initiatives because of the limited availability of AI talent, or are planning to do so. The Silo.ai report indicates that the talent shortage is the second biggest challenge for enterprises adopting AI.
- Increased regulations and guidelines: The use of AI raises ethical concerns, such as potential bias, discrimination, privacy, security, transparency, and accountability of AI systems. To address these and other ethical concerns, many countries and organizations have developed guidelines and frameworks for the responsible use of AI. For example, the EU has developed the Ethics Guidelines for Trustworthy AI and is working on the upcoming AI Act, which provides a set of principles and guidelines for the development and use of AI in a way that is ethical, transparent, and accountable. Similarly, the US recently released the AI Bill of Rights, which establishes a framework for AI’s ethical development and use. Although regulations like the European AI Act are proposed for the right reasons, once in place they will impact the pace of Enterprise AI proliferation. According to some, adapting to such regulation might take as much time as was spent establishing the company’s AI capabilities.
- Challenges of integrating AI with existing systems: AI systems often need to be integrated with existing enterprise systems, which can be challenging and require significant resources. Such challenges can result from the AI system not being compatible with the current IT infrastructure and data integration complexity because AI requires access to data from multiple sources within the organization, which can be complex and time-consuming. To mitigate these challenges, companies must reassess their current AI infrastructure, determine data needs, and develop modern data management plans and platforms for the AI systems to function and be used effectively across the organization. All contribute to increased IT and employee training costs, infrastructure complexity, and internal organizational friction.
- Cost vs. ROI: Implementing AI can be an expensive investment, and if no value can be attached to that investment, it might be challenging for an organization to justify further investments. The cost depends on a variety of factors, including the scope and complexity of the project, the size of the organization, and the resources and expertise required. The key factors that can impact the cost are hardware and infrastructure, data acquisition and preparation, software and licenses, talent and expertise, and finally training and support. Organizations must carefully evaluate the cost and benefit of AI projects to ensure they are worth pursuing. According to estimates published online, costs can vary from $6,000 to $300,000 and more for custom AI solutions, and from $0 to $40,000 per year for third-party AI software. Model training on the cloud can cost $2 per hour and up. On top of this come all the other costs associated with people, the cost of developing an MVP, the cost of implementing the complete AI solution in production, and finally the cost of maintenance and updates. As an example of the extreme high end, Netflix in 2019 spent $1.5 billion** on technology, a good portion of which went to AI. This cost can, of course, be justified when it delivers increased value in terms of increased efficiency, enhanced decision-making, improved customer experience, increased competitiveness, and cost savings. However, measuring the value of AI is still a young discipline and is challenging from many perspectives.
According to Somil Gupta, Founder and CEO of Algorithmic Scale, one of the reasons lies in understanding how value is created when adopting and scaling AI. In his view, enterprises that can transform into more distributed, analytics-led, service-oriented business and operating models will realise the benefits of AI and eventually survive the winter. The rest will struggle to stay competitive and relevant and to grow profitably, and will slowly fade away into oblivion.
“To understand the value realisation, let’s split it into value creation and value capture. When we observe the capability and relative maturity of AI use cases, there is potential value created despite several challenges with data. But this value can only be captured if we are able to translate these potential benefits into effective decisions, actions, and offerings. And this is where the challenge is – to integrate AI with the existing Operating Model, systems, processes, and ways of working. This integration is currently happening in the pockets of small teams who are trying to break away from the established norms and ways of working.”, explains Somil Gupta and adds that over the last two decades, organizations have successfully integrated and optimised their Enterprise IT systems into a tightly coupled, standardised and efficient machinery.
“So on one end, we have highly agile and experimental Data Science teams and on the other end, we have highly deterministic, rigid, and standardised business teams. And that is the conflict the organizations currently face in adopting AI. They cannot translate AI potential benefits into tangible, measurable outcomes. At the same time, the macroeconomic factors are creating new sources of value (new customer segments, user behaviors, and needs, markets and ecosystems, etc) and risks (wars, disruptions, regulations, etc). The good news is that we are already seeing many such ‘breakaway’ teams who are adopting Data and AI for innovative and distributed business and operating models to leverage the new growth opportunities and mitigate the new risks. They are defying the central mandate and as they become more successful, they are also inspiring others.”, says Somil Gupta.
The inability to measure results, combined with the increased costs related to AI systems, can raise doubts about the potential such technology can bring to the company in the long run.
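To make the cost-versus-value arithmetic concrete, here is a minimal break-even sketch. All figures are hypothetical, loosely inspired by the cost ranges mentioned above; real projects would need a far richer model of costs and benefits.

```python
def breakeven_months(upfront_cost, monthly_running_cost, monthly_value):
    """Months until cumulative value from an AI project covers its costs.

    Returns None if the project never breaks even, i.e. the measurable
    monthly value does not exceed the monthly running cost.
    """
    net_monthly = monthly_value - monthly_running_cost
    if net_monthly <= 0:
        return None  # the upfront investment is never paid back
    # Ceiling division: a partial month still has to pass before break-even.
    return -(-upfront_cost // net_monthly)

# Hypothetical mid-range project: $150,000 to build, $10,000/month to run,
# $25,000/month of measurable value (efficiency gains, cost savings, etc.).
months = breakeven_months(150_000, 10_000, 25_000)  # → 10 months
```

The hard part in practice is not this arithmetic but the `monthly_value` input: as noted above, attaching a defensible number to AI-driven value is still a young discipline.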
- Lack of understanding and buy-in: Many approach Enterprise AI as a technology issue when in the end, it is a change management topic. Change management plays a critical role in successfully adopting AI within an organization. It helps the organization to communicate the benefits and purpose of the AI project to employees, stakeholders, and other relevant parties; identify and address potential barriers to adoption, such as resistance to change or concerns about job security; train and support employees to ensure they can effectively use the AI system and understand how it fits into their existing workflows; monitor and assess the impact of the AI system on the organization and make any necessary adjustments to ensure its successful adoption.
- Explainability and interpretability: Many AI models are considered “black boxes” because it is difficult to understand how they make decisions. This lack of explainability and interpretability can make it difficult to gain trust in AI systems’ results, and it can become a high risk for organizations once strict regulations like the EU AI Act become a reality.
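One common mitigation for the black-box problem is to probe the model locally and fit interpretable weights to its behaviour, the core idea behind surrogate-explanation methods. The toy sketch below illustrates this; the model and its weights are invented purely for demonstration:

```python
def black_box_model(price, age):
    # Stand-in for an opaque model: in reality these weights are hidden
    # inside a neural network and cannot be read off directly.
    return 0.8 * price - 0.3 * age + 0.05 * price * age

def local_feature_weights(model, point, eps=1e-4):
    """Estimate each feature's local influence via central finite differences.

    The returned weights define a linear approximation of the model around
    `point`, which is far easier to inspect than the model itself.
    """
    x1, x2 = point
    w1 = (model(x1 + eps, x2) - model(x1 - eps, x2)) / (2 * eps)
    w2 = (model(x1, x2 + eps) - model(x1, x2 - eps)) / (2 * eps)
    return w1, w2

# Around the point (price=1.0, age=2.0), the local weights reveal that
# price pushes the prediction up and age pushes it down.
w_price, w_age = local_feature_weights(black_box_model, (1.0, 2.0))
```

This mirrors, in miniature, what surrogate methods do: they never open the black box, but approximate it locally with a model that is interpretable by construction, which is one way to address the trust and regulatory concerns raised above.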
Looking at the indicators above, one can see similarities between the previous AI Winter cycles and a potential one coming, especially in Europe with its current approach to AI from a regulatory standpoint. However, contrary to the previous AI Winters:
- The technology is much more mature and is developing relatively fast,
- AI is already rooted in most enterprises, with at least one model for one use case,
- The number of use cases is already extensive, with proven ROI per use case,
- Organizations have already gone through their exploration phase with clear learnings,
- AI is already part of any significant digital transformation in both public and private sectors,
- There is a sufficient amount of data to support AI systems in production,
- The benefits of using AI are becoming much clearer for any senior executive,
- AI is already a critical competitive differentiator between functions, companies, industries, countries, and regions.
With that said, AI will continue to evolve; therefore, an Enterprise AI Winter might not be happening soon. However, it is essential to note that the future of Enterprise AI will also be shaped by factors such as the development of new technologies, regulatory changes, and the adoption and acceptance of AI by organizations and society as a whole. The challenges listed above are important ones to overcome and can potentially cause a slowdown in investment while organizations recalibrate their approach to maximising AI’s full potential.
Lastly, when we talk about Enterprise AI Winters, many agree that, besides the indicators addressed above, it is hard to imagine an AI Winter like those before. Instead, we will see inequality: some enterprises, industries, countries, and regions will continue to lead the way, and others will follow. This AI divide can potentially impact geopolitical relations between regions in various ways, including:
- Economic impact: The development and deployment of AI can have significant economic implications, as it can drive economic growth and create new industries and job opportunities. However, it can also lead to job displacement and cause economic disruption. This can create tension between regions that are leaders in AI and those lagging.
- Military applications: AI can be used in military applications, such as developing autonomous weapons systems. This can create tension between nations with advanced AI capabilities and those without, as well as raise concerns about the potential for AI to be used in conflict.
- Privacy and security: AI systems can generate and process large amounts of data, raising concerns about privacy and security. This can create tensions between regions with solid privacy and security laws and those without.
- International trade: The deployment of AI can have implications for international trade, as it can change the balance of economic power between regions. This can lead to tensions between regions that are leaders in AI and those that are not.
The impact of AI on business and society will depend on how it is used and implemented. It is crucial to approach the development and use of AI responsibly and ethically, considering the potential consequences and impacts on both business and society.
Featured image credits: Yang Shuo on Unsplash