Climate Impact Of AI

===Deep Learning===
====Models and training====
Many recent projects in deep learning have used increasingly large computational resources to train models. The compute used for the largest training runs was shown to have doubled every 3.4 months from 2012 to 2018<ref>{{Cite web|url=https://openai.com/blog/ai-and-compute/|title=AI and Compute|date=2018-05-16|website=OpenAI|language=en|access-date=2021-03-27}}</ref>.
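
As an illustration of what a 3.4-month doubling time implies, here is a minimal arithmetic sketch; only the doubling time comes from the cited analysis, and the six-year window is an illustrative assumption:

<syntaxhighlight lang="python">
# Minimal sketch: compound growth implied by a 3.4-month doubling time.
# Only the doubling time is taken from the cited OpenAI analysis; the
# six-year window is an illustrative assumption.

DOUBLING_TIME_MONTHS = 3.4
WINDOW_MONTHS = 6 * 12  # roughly the 2012-2018 period

doublings = WINDOW_MONTHS / DOUBLING_TIME_MONTHS
growth_factor = 2 ** doublings

print(f"{doublings:.1f} doublings -> about {growth_factor:,.0f}x more compute")
# -> roughly 2.4 million times more compute over the window
</syntaxhighlight>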

====Inference====
Some estimates suggest that the vast majority of the energy (80–90%) used by deep learning models is spent on inference rather than training<ref>{{Cite web|url=https://www.forbes.com/sites/moorinsights/2019/05/09/google-cloud-doubles-down-on-nvidia-gpus-for-inference/|title=Google Cloud Doubles Down On NVIDIA GPUs For Inference|last=Strategy|first=Moor Insights and|website=Forbes|language=en|access-date=2021-03-27}}</ref><ref>{{Cite web|url=https://youtu.be/ZOIkOnW640A?t=5327|title=AWS re:Invent 2018 - Keynote with Andy Jassy|last=Jassy|first=Andy|website=YouTube|access-date=2021-03-27}}</ref>.
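
To see how inference can dominate lifetime energy use even though a single prediction is cheap, consider a back-of-the-envelope sketch; all numbers below are hypothetical assumptions, not figures from the cited sources:

<syntaxhighlight lang="python">
# Back-of-the-envelope sketch with HYPOTHETICAL numbers: training is a
# one-off cost, while inference energy accumulates with serving volume.

train_energy_kwh = 2_000.0       # assumed one-off training cost
energy_per_query_kwh = 0.0001    # assumed energy per prediction
queries_per_day = 500_000        # assumed serving volume
lifetime_days = 365              # assumed deployment lifetime

inference_kwh = energy_per_query_kwh * queries_per_day * lifetime_days
total_kwh = train_energy_kwh + inference_kwh

print(f"training:  {train_energy_kwh:,.0f} kWh")
print(f"inference: {inference_kwh:,.0f} kWh ({inference_kwh / total_kwh:.0%} of total)")
# inference: 18,250 kWh (90% of total)
</syntaxhighlight>

With these made-up numbers the inference share lands at the top of the 80–90% range quoted above; the point is qualitative: per-query costs are tiny, but serving volume multiplies them.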

==== Efficiency gains ====
Several practices can reduce the energy spent training deep learning models (a code sketch follows this list):
* Avoiding redundant training
* Experiment tracking<ref>{{Cite web|url=https://towardsdatascience.com/deep-learning-and-carbon-emissions-79723d5bc86e|title=Deep Learning and Carbon Emissions|last=Biewald|first=Lukas|date=2019-06-24|website=Medium|language=en|access-date=2021-03-27}}</ref>
* Smart hyperparameter selection
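
A minimal sketch (hypothetical code, not drawn from the sources) of how two of these practices interact: a results cache keyed by hyperparameter configuration avoids redundant training runs, while random search trains only a sample of configurations instead of the full grid:

<syntaxhighlight lang="python">
import itertools
import random

# Hypothetical search space; the names and values are illustrative.
SEARCH_SPACE = {
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "batch_size": [32, 64, 128],
    "dropout": [0.1, 0.3, 0.5],
}

_results = {}  # config -> score; crude experiment tracker / dedup store

def train_and_evaluate(config):
    """Stand-in for a real (energy-hungry) training run."""
    key = tuple(sorted(config.items()))
    if key in _results:              # skip redundant training
        return _results[key]
    score = random.random()          # placeholder for validation accuracy
    _results[key] = score
    return score

grid = [dict(zip(SEARCH_SPACE, values))
        for values in itertools.product(*SEARCH_SPACE.values())]

# Random search: train a sample of configurations rather than all 27.
sampled = random.sample(grid, k=5)
best = max(sampled, key=train_and_evaluate)
print(f"trained {len(_results)} of {len(grid)} candidate configs; best: {best}")
</syntaxhighlight>

In this sketch only 5 of the 27 candidate configurations are ever trained, and the cache guarantees no configuration is trained twice.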


=== Data Centres ===