Question marks from the article above were still swirling round my head like stars round a concussed cartoon character when this article appeared in my feed. Published by the MIT Technology Review, it offers, I think, a nuanced view of the environmental footprint of the big players.
Unlike the Andy Masley pieces, the report factors in things like grid CO2e intensity. Energy generation in the US states where big data centres are housed is around 48% more carbon intensive than the national average (and this might not be a coincidence).
It makes clear that we don’t know the accurate energy cost of a single prompt, because the main players (OpenAI, Google, Anthropic) won’t engage with researchers or reveal the relevant data.
At the end of the article, Eliza Martin, a legal fellow at Harvard, makes the point explicitly: “It’s not clear to us that the benefits of these data centers outweigh these costs”.
The most surprising assertion in the piece is that image generation appears to use less energy than text generation. Essentially, large text models have many more parameters than diffusion models, and more generation “steps” are required for text output than for images. I’m sure I’m not alone in having assumed it would be the other way around.
What is very clear is that, until we get more transparency from the industry, we can expect many more articles like the two above.