Issue #65a


 

Curiously Green

 
 
 
 
Welcome to issue 65a of Curiously Green.
 
For the next couple of months we are going to be testing some new formats for the newsletter. From feedback and engagement analysis it’s become apparent that a tweak to the way we share our Curious goodness is required.

Practically, this means two things.

Firstly, we are splitting the newsletter in half and sending two emails a month. The first will have fewer links and articles and will concentrate on a couple of pieces of content or an over-arching topic.

The second will provide more of a round-up of the news, views and resources we’ve curated between issues.

Once you’ve seen both emails in the new format there will be an option to opt out of receiving one or other of them if desired. It wouldn’t be very digitally sustainable to send out emails that you didn’t want to read, now would it?

As ever, feedback, comments, discussion and requests are always gratefully received, and I hope you enjoy this experimental issue.

Andy Davies

Curiously Green Manager

 
 
 

Some contrasting views on the environmental impact of generative AI

 

The following articles left me questioning a lot about what I do and don’t know about generative AI (Gen AI). Anyone who’s been reading my posts or newsletters recently will know I’m a Gen AI skeptic. That said, I make a conscious effort to read the alternative view. These two articles offer a pleasing balance between AI boosterism and climate realism. They drill down into what we do know and what we can only speculate on.

I’d be interested to hear what side of the argument people currently find themselves on.

 
1 - Apparently using ChatGPT isn't bad for the environment...
 


 

For a couple of weeks in May I couldn’t move for social media posts explaining why using ChatGPT isn’t bad for the environment. A post on Andy Masley’s Substack “The Weird Turn Pro” went “small v” viral. The post provided a cheat sheet based on an earlier, longer article titled “Using ChatGPT is not bad for the environment”.

The cheat sheet offers a comprehensive set of counterarguments to many of the environmental criticisms of Gen AI platforms like ChatGPT.

When I read it, it pulled me up short. The arguments are cohesive, well written and supported by citations from third-party sources. It really made me question some of my biases and what my negative preconceptions were based on.

My ultimate takeaway (which is influenced by the second article below) is that we don’t know enough about the energy and environmental costs of Gen AI to draw concrete conclusions. There just isn’t enough transparency in the industry. I am very suspicious as to why this is.

It’s also an interesting example of why the provenance of an artifact is important (my GCSE history teacher would be very proud). The author of the piece is the director of an Effective Altruism (EA) think tank in Washington DC. My feeling is that anyone who subscribes to an EA philosophy tends towards a pro (and sometimes problematic) AI stance.

Regardless, the post is well worth a read, especially as it counterbalances the next article so nicely.

 
2 - MIT Technology Review thinks it might actually be bad for the environment...
 


 

Question marks from the article above were still swirling round my head like stars round a concussed cartoon character when this article appeared in my feed. Written by MIT Technology Review, it offers, I think, a nuanced view of the environmental footprint of the big players.

Unlike the Andy Masley pieces, the report factors in things like grid CO2e intensity. Energy generation in the US states where big data centres are housed is around 48% more carbon intensive than the US average (and this might not be a coincidence).
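To see why grid intensity matters so much, here is a minimal sketch of the arithmetic. The 48% figure comes from the article; every other number (the baseline intensity, the workload size) is a hypothetical placeholder chosen purely for illustration:

```python
# Illustrative only: the same workload emits more CO2e on a dirtier grid.
# The 48% uplift is from the MIT Technology Review piece; all other
# numbers below are hypothetical placeholders, not figures from the article.

US_AVG_INTENSITY_KG_PER_KWH = 0.37          # hypothetical US-average grid intensity
DC_REGION_INTENSITY_KG_PER_KWH = US_AVG_INTENSITY_KG_PER_KWH * 1.48  # ~48% dirtier


def emissions_kg(energy_kwh: float, intensity_kg_per_kwh: float) -> float:
    """CO2e emitted (kg) for a given energy draw on a given grid."""
    return energy_kwh * intensity_kg_per_kwh


workload_kwh = 1000.0  # identical data-centre workload in both regions
baseline = emissions_kg(workload_kwh, US_AVG_INTENSITY_KG_PER_KWH)
dc_region = emissions_kg(workload_kwh, DC_REGION_INTENSITY_KG_PER_KWH)

print(f"US-average grid: {baseline:.1f} kg CO2e")
print(f"Data-centre region: {dc_region:.1f} kg CO2e")
```

The point being that energy use alone understates the footprint: where the electricity is generated changes the emissions for the exact same number of prompts.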

It makes clear that we don’t know the true energy cost of a single prompt because the main players (OpenAI, Google, Anthropic) won’t engage with researchers or reveal relevant data.

At the end of the article Eliza Martin, a legal fellow at Harvard, makes the point explicitly: “It’s not clear to us that the benefits of these data centers outweigh these costs”.

The most surprising assertion in the piece is that image generation appears to use less energy than text generation. Essentially, large text models have many more parameters than diffusion models, and a higher number of “steps” is required for text output than for images. I’m sure I’m not alone in having assumed it would be the other way around.

What is very clear is that until we get more transparency in the industry, expect many more articles like the two above.

 
 
 
How you can get involved this month
 
 
 
 
Dystopian image of the month
 

Image of a billboard from an AI company encouraging us to “Stop Hiring Humans”

Is AI a job enabler or a job displacer?

Adverts like this appeared across London this month to coincide with London Tech Week.

Is it the future?

Is it satire?

Are we all in an episode of Black Mirror?

 
 
 
This issue of Curiously Green is curated and written by Andy Davies