This month saw a number of long and weighty articles about the politics of technology and big tech firms. They brought to mind this piece from last year by web designer and artist Miriam Eric Suzanne.
It’s a forthright and angry essay about the harms of GenAI and the politics of big tech CEOs. It was written in early 2025 at a time when there was a rush to work out how to use AI ethically and responsibly while the platforms involved aligned themselves with the Trump regime.
I still see these “ethical AI” offerings and webinars promoted today but perhaps less often than this time last year. In my circles I see a little more nuance in the discussions. The rush to adoption has perhaps become more cautious on both sides of the political divide.
This hesitancy was the subject of a long and semi-viral post from technology writer Dan Kagan-Kans last month, titled “The left is missing out on AI”.
“As a movement, it (the left) has largely refused to engage seriously with AI, ceding debate about a threat and opportunity to the right.”
The general thrust of the piece is that “the left” engages with the technology only on a surface level. Lefty critics have bought the argument that LLMs are just sophisticated autocomplete machines, not to be taken seriously, let alone adopted into businesses. Essentially, “the left” (yes, all of them/us, delete according to political persuasion) thinks the technology doesn’t live up to the hype, and is missing out and ceding ground to the right. Moreover, this lack of engagement with the technology is actively doing harm.
When I first started learning about LLMs I’ll admit that I stood in the “this is just Clippy 2.0” camp. I saw hallucinations and errors when I tried out ChatGPT and other platforms. I read about the lack of impact that early adopters experienced. I raised my eyebrows when I read about firms firing and then attempting to rehire staff after experiments with AI agents fell flat. But now, the technology is improving and the pull to use it feels greater. Even so, like many others I can’t ignore the huge risks and harms that GenAI represents, be they social, political, economic or environmental. I am a lefty after all.
I think Brian Merchant wrote the most compelling rebuttal of Kagan-Kans’ piece, “Actually the Left is winning the AI debate”. He points to the fact that the general public remains more concerned than enthusiastic about AI. More significantly, he points out that much of the policy work putting AI guardrails and protections in place is coming from left-leaning politicians. Basically, the left is engaging with AI, and with more than just the technology: it is engaging with the socio-economic implications of AI as well.
What it boils down to is that the problem isn’t the technology, which may or may not turn out to be societally transformative. The problem is the people (and their politics) in charge of the technology, and the companies they run.
Anthropic might be the current darlings of the AI scene, benefitting from the QuitGPT campaign mentioned in the last issue. But even as they trade on being less problematic than ChatGPT, they are rolling back safety pledges in favour of trying to win the AI race. Away from the large US-based models, EU-based platforms like Mistral have been touted as an alternative. But this report by Clément Pouré and Soizic Pénicaud (free English-language version on signup) points out that while “sovereign” EU models might not share the same politics as their US counterparts, they still use the same problematic data-scraping methods as much of the rest of the industry.
With digital technology playing such a huge role in our lives at the moment, across business, conflict and the economy, there is no separating politics and technology, whichever way you lean politically.