Hyperight

There’s a greater cost to deploying AI and ML models in production – the AI carbon footprint


At the end of last year, we witnessed a story that rattled the tech scene and roused heated discussion around AI and AI ethics. Namely, Timnit Gebru, former Co-lead of Google’s ethical AI team, co-authored a paper that led to her leaving Google. One of the core risks outlined in the paper was the AI carbon footprint of the development and deployment of large language models.

The paper, titled “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”, brings to the fore some inconvenient truths about a core line of Google’s research and raised questions about AI that may have caused Google concern, writes MIT Technology Review. The paper lays out the risks of ever-larger language models – models such as BERT and its variants, GPT-2/3 and, most recently, Switch-C, all trained on staggering amounts of text data – as well as China’s ‘Wu Dao’ model, which is ten times bigger than GPT-3.

One of the four main risks of large language models the paper outlines is a critical cost of deploying AI models into production that, it seems, is rarely given a second thought in ambitious AI initiatives: the environmental and financial cost.


The carbon footprint of training AI and ML models

Gebru’s paper highlighted a bitter truth about the industry: today everyone is talking about deploying AI models into production, but too few are talking about the cost of the compute power needed to train and maintain those models. Training AI models consumes a great deal of processing power, and hence a great deal of electricity. And as the amount of data grows exponentially, so does the AI carbon footprint, placing a heavy burden on the environment and contributing to climate change.

No wonder the AI industry is often compared to the oil and gas industry: the two are strikingly similar in both their lucrativeness and their environmental impact.

In the paper, Gebru and her colleagues refer to another paper, by Emma Strubell, on the carbon emissions and financial costs of large language models, which revealed that the energy consumption and carbon footprint of these models have been exploding since 2017 as models have been fed more and more data. Another study, cited in Anthropocene Magazine, found that the computing power used in deep learning grew 300,000-fold between 2012 and 2018. If this pace of growth continues, AI could have a severe climate impact.

After performing a life cycle assessment for training several common large AI models, Strubell and her colleagues from the University of Massachusetts found that the process can emit more than 626,000 pounds of carbon dioxide equivalent – nearly five times the lifetime emissions of the average American car (including the manufacture of the car itself).

More specifically, Strubell’s paper examined the training process for natural-language processing (NLP) models – the Transformer, ELMo, BERT, and GPT-2 – which have reached many groundbreaking performance milestones in recent years. First, the researchers trained each model on a single GPU for up to a day to measure its power draw. Then they used the number of training hours listed in each model’s original paper to calculate the total energy consumed over the complete training process.

It’s worth noting that the research trained each model only once, while in practice models are usually trained many times during research and development. Strubell and her colleagues found that the computational and environmental cost of training grew proportionally to model size. The most costly model measured was BERT, with a carbon footprint of roughly 1,400 pounds of carbon dioxide equivalent – close to that of a round-trip trans-America flight for one person. GPT-3, the latest generation of deep learning models designed to produce human-like language, requires an amount of energy equivalent to the yearly consumption of 126 Danish homes for a single training session, and creates a carbon footprint equivalent to travelling 700,000 kilometres by car, describes Anthropocene Magazine.

Although experts had some assumptions about AI’s environmental impact, few imagined that the actual problem was of such staggering magnitude.


AI’s dual role in climate change

On the other hand, AI can also be deployed to mitigate the effects of climate change – for example, in smart grid design, low-emission infrastructure development, and climate change modelling.

AI is doubtless destined to play a dual role in climate change, but that doesn’t mean actions can’t be taken to mitigate or reduce its carbon footprint. As the issue of carbon emissions and electricity costs from deploying AI models gains traction, it attracts the attention of important stakeholders. For example, thousands took to the streets in September 2019, joining the global climate strike to shed light on big tech’s collaboration with fossil fuel companies. In another instance, the Tech Workers Coalition united employees of Amazon, Google, Microsoft, Facebook and Twitter in a march demanding that their employers commit to reducing emissions to zero by 2030, cease contracts with fossil fuel companies, stop funding climate change deniers, and stop the exploitation of climate refugees and frontline communities, reports Nature.


Quantifying carbon emissions

The challenge in taking proactive action to reduce AI’s carbon footprint and climate impact is how to quantify energy consumption and carbon emissions, and how to make that information transparent. Part of the problem is the lack of a standard of measurement, and how vague the cost of building complex AI and ML systems remains in the public perception, Nature emphasises.

To narrow the quantification challenge, researchers have worked on a study quantifying the carbon emissions of machine learning. They found that the emissions produced while training a neural network depend on the location of the training server and the energy grid it uses, the length of the training procedure, and the hardware on which training takes place. To help other researchers and practitioners, they created an emissions calculator that estimates the energy use and associated environmental impact of training ML models.
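The arithmetic behind such a calculator can be sketched in a few lines: energy is hardware power multiplied by training time, and emissions are energy multiplied by the grid’s carbon intensity. This is a minimal illustration, not the researchers’ actual tool, and the grid-intensity figures below are illustrative assumptions rather than official grid data.

```python
# Sketch of the core arithmetic behind an ML emissions calculator.
# Grid carbon intensities (kg CO2e per kWh) are illustrative assumptions.
GRID_INTENSITY_KG_PER_KWH = {
    "mostly_hydro": 0.002,   # assumed, largely hydroelectric grid
    "average_mix": 0.43,     # assumed mixed grid
    "coal_heavy": 0.82,      # assumed, largely coal-fired grid
}

def training_emissions_kg(gpu_power_watts: float,
                          hours: float,
                          grid: str) -> float:
    """Estimate kg CO2e for one training run:
    energy (kWh) = power (kW) x time (h); emissions = energy x grid intensity."""
    energy_kwh = gpu_power_watts / 1000 * hours
    return energy_kwh * GRID_INTENSITY_KG_PER_KWH[grid]
```

Under these assumed figures, a 300 W GPU training for 100 hours on the mixed grid accounts for roughly 13 kg of CO2e, while the same run on the hydro-heavy grid would emit a few hundred times less – which is exactly the server-location effect the study identified.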

Apart from the calculator, one of the researchers, Alexandra Luccioni, suggested several actions to reduce the AI carbon footprint. “Using renewable energy grids for training neural networks is the single biggest change that can be made. It can make emissions vary by a factor of 40, between a fully renewable grid and a full coal grid,” states Luccioni.

However, making AI less polluting needs to become more of a mainstream conversation – with researchers being transparent about how much carbon dioxide their research produced, reusing models instead of training them from scratch, and using more efficient GPUs.


Another team of researchers, at Stanford University, also tried to track and accurately measure AI carbon emissions. First, they measured the power consumption of a particular AI model, untangling each training session and accounting for the power drawn by shared overhead functions such as data storage and cooling. Then they translated power consumption into carbon emissions, which depend on the mix of renewable and fossil fuels that produced the electricity.

The Stanford study demonstrated that the location of an AI training session can make a big difference in its carbon emissions. For example, the researchers estimated that running a session in Estonia, which largely relies on shale oil, would produce 30 times the volume of carbon that the same session would in Quebec, which relies primarily on hydroelectricity, affirms Stanford University.
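The two-step accounting described above can be sketched as follows: measured IT energy is scaled by a data-centre overhead factor (covering shared functions like cooling and storage), then converted to carbon via the local grid’s intensity. This is an illustrative sketch of the approach, not the Stanford team’s code, and all numeric values are assumptions chosen only to reproduce the reported 30x Estonia/Quebec gap.

```python
# Sketch of the two-step accounting: scale IT energy by a shared-overhead
# factor, then convert to CO2 via the local grid's carbon intensity.
# All numbers are illustrative assumptions.
def session_emissions_kg(it_energy_kwh: float,
                         overhead_factor: float,
                         grid_kg_per_kwh: float) -> float:
    """Total emissions = IT energy x overhead factor x grid carbon intensity."""
    return it_energy_kwh * overhead_factor * grid_kg_per_kwh

# The same training session in two locations (assumed intensities):
estonia = session_emissions_kg(1000, overhead_factor=1.5, grid_kg_per_kwh=0.60)
quebec = session_emissions_kg(1000, overhead_factor=1.5, grid_kg_per_kwh=0.02)
print(round(estonia / quebec))  # prints 30: location alone changes emissions 30-fold
```

Note that the overhead factor cancels in the ratio – the 30x gap comes entirely from the grid mix, which is why server location dominates the result.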

However, scrutinising the compute cost of training large models is not enough to fully grasp AI’s carbon impact. A major hurdle for quantifying carbon emissions is tech companies’ resistance to sharing data about their energy mix, coupled with their dominance in building data centres across the world.

A possible solution would be to offer tax incentives to cloud providers to open data centres in places with hydro or solar energy rather than in areas where the grid is mainly coal-fuelled – a change that could make a huge impact on reducing the carbon footprint, states Alexandra Luccioni.


Turning Red AI into Green AI

The growing demand for AI innovation and performance in recent years has come with even greater demands on compute power. The trend of using massive compute power to achieve better results has been named “red AI” – a term introduced in 2019 by Roy Schwartz in a paper he co-authored.

Red AI describes the practice whereby an exponentially larger model is required for a linear gain in performance, inflating the amount of training data or the number of experiments, and thus escalating computational costs and therefore carbon emissions. Worse still, Schwartz concluded that the vast majority of papers he analysed prioritised accuracy over efficiency, falling into the red AI category.

Schwartz’s study found that three factors categorise AI research as red:

  • the cost of executing the model on a single example; 
  • the size of the training dataset, which controls the number of times the model is executed; and 
  • the number of hyperparameter experiments, which controls how many times the model is trained.
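Because the three factors above multiply, the total compute cost of producing an AI result is proportional to their product – cost per example, dataset size, and number of experiments. A minimal sketch (the function name and numbers are illustrative, not from Schwartz’s paper):

```python
# The three red-AI factors multiply: total compute cost is proportional to
# (cost per example) x (dataset size) x (number of hyperparameter experiments).
def compute_cost(cost_per_example: float,
                 dataset_size: int,
                 num_experiments: int) -> float:
    """Relative compute cost of producing an AI result."""
    return cost_per_example * dataset_size * num_experiments

# Doubling the dataset AND doubling the hyperparameter search
# quadruples total compute, and hence emissions:
base = compute_cost(1.0, 1_000_000, 10)
bigger = compute_cost(1.0, 2_000_000, 20)
print(bigger / base)  # prints 4.0
```

The multiplicative structure is what makes red AI so costly: modest growth in each factor compounds into a large increase in overall compute and emissions.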

In opposition to red AI, Schwartz and colleagues advocate for ‘green AI’, which they define as “AI research that yields novel results without increasing computational cost, and ideally reducing it”, Nature affirms.

As for concrete actions researchers can take towards more efficient and sustainable AI, the Copenhagen Centre on Energy Efficiency states that environmental sustainability should be considered one of the principles of responsible development and application of AI.

Gabriela Prata Dias, Head of the Centre, and Xiao Wang, programme officer, suggest several steps towards cleaner and greener AI practices.

  1. The definition of green AI needs to be actionable for all the relevant stakeholders in the industry, rather than be an abstract concept to most technical experts.
  2. Environmental standards should be developed to ensure the mitigation of environmental impacts, and green AI certifications could be introduced to facilitate the industry process for promoting green AI development.
  3. A practical industry framework and guidelines should be created to support green procurement of AI technologies, helping organisations and companies that use and deploy AI look for environmentally friendly practices.
  4. Governments should consider the long-term impacts in setting up regulatory frameworks and legislation that would legally address transparency and sustainability in AI development.

Building data centres where waste heat is redeployed back into the grid is also a great way to minimise the AI carbon footprint. But the players dominating the market should also seriously consider the energy mix of the locations they select for their data centres. In addition, proper checks and balances, including regulation, are needed to retain some form of local agency over these infrastructures, reflects Roel Dobbe for Nature.

Looking at a more micro level, Deepika Sandeep, the AI scientist heading the AI and ML programme at Bharat Light & Power (BLP), a clean energy generation company, advises making judicious use of deep learning. Since training is the phase that consumes the most computational power, and hence produces the largest carbon footprint, Sandeep says they have minimised their training cycles: once a model is in production, they re-train it only every three to six months. Finally, she points out that not all problems require deep neural networks and deep learning architectures; choosing less compute-intensive AI can go a long way for the environment.
