Following the recent hearings with Big Tech CEOs, and with technological advancement moving forward at hyper speed, how can we tell whether each innovation will serve humanity well? This was the question the Center for Humane Technology (CHT) shared their thinking on in an enlightening webinar. If you’ve got the time, I encourage you to watch the recording on the CHT YouTube channel. For anyone short on time, I’ve written up a summary of the webinar’s key points, as I thought they were too good to be left in the digital wasteland!
A time of progress and peril
As Randima Fernando, the co-founder of the Center for Humane Technology, states right at the beginning of the webinar, we’re entering a time of progress and peril. On the one hand, we’re seeing a global decrease in poverty; on the other, we’re seeing increasing inequality everywhere in the world. This inequality is often created through “runaway” technology that produces a “wisdom gap”: the widening divide between the complexity of global issues – the climate crisis, the rise of misinformation, global financial risks, nuclear escalation, pandemics and much more – and our ability to make sense of these incredibly complex issues.
Thriving as a design choice
One of the key messages from the webinar was that if we truly want to design a web that serves humanity without harming the planet, we must incorporate ‘thriving’ as a central consideration in our design choices. CHT define thriving as ‘our deepest inner compass’. Thriving requires us to reflect on, set, and then act on our intentions. Since technology profoundly shapes our future, it’s crucial to envision what thriving means for our users when designing digital products and services.
In order to build a sustainable and equitable world, we need to design technology that helps users be in touch with their deepest and wisest intentions, instead of using persuasive techniques to implant unwise intentions in their minds. Any technology aimed at truly serving humanity must be rooted in a profound understanding of thriving. This idea is so well aligned with the work that we’ve been doing around the humane web, and also with the theme of the upcoming edition of Branch magazine (more on that here).
Dosage matters
At the end of last year, I shared an article about the importance of pacing as one of the key principles of a more humane web. So when Randima shared his thoughts on the importance of dosage, it truly resonated with me.
His point is that technology doesn’t really know how to understand and serve our intentions. Instead, it draws on historical data about what we’ve watched and clicked on, but those choices were manipulated in the first place through the addictive design of the technology we use every day. Therefore, we can’t say that technology knows us better than we know ourselves. It pretends to know what we want, instead of helping us serve our true intentions.
As a result, many of us end up in a digital zombieland where we mindlessly scroll through our feeds and watch hours and hours of videos about all sorts of things that may not bring much value to our lives. In fact, algorithms serve our intentions so poorly that they hijack our time and attention just to get us to click on an ad. Here’s the thing, though. The likelihood of a user clicking an ad on Facebook, for example, and then actually buying something from the advertiser is as little as 0.12% (!!!) – roughly one user in 800. And in return, we open our minds to manipulation and all sorts of harm to our mental health and democracy.
Technology is NOT neutral
Some people love to say that technology is neutral and its impact depends on how we use it. But nothing in the world is neutral (except maybe for Switzerland 🙃). The way technology is engineered is rooted in how humans interact with each other. There are three key considerations that form a loop:
- Society shapes humans through shared values, power structures and norms
- Humans shape technology through incentives, personal values and cognitive biases
- Technology changes society through economic incentives, behavioural conditioning, the prioritisation of extreme and emotionally-triggering content, and by deciding who gets heard
So if you think about it, technology couldn’t be further from being neutral. It has all our biases and historical (dis)agreements built into its very structure and shape. Over time, the web has been engineered not to serve humanity, but to exploit our vulnerabilities for the benefit of tech companies, be it financial gain or user data accumulation.
Don’t let metrics replace values
Probably the most thought-provoking idea from the webinar was that metrics have been sneakily replacing our values. Think about it for a second. Likes on social media became a signifier of our self-worth. GDP became the measure of a nation’s worth across the globe. What we measure becomes what we value. But should it be this way?
“Quantitative objectivity is often confused with ethical neutrality.” – Center for Humane Technology
When we form our plans and strategies, we focus on metrics. We turn to analytics tools to understand what is most important to, and most popular among, our users. What did they click on, how long did they keep scrolling, where did they go next, did they come back? Did they buy something from us?? But maybe these metrics are not telling the full story. Maybe we need a second metric to help us see the full picture. Enter: anti-KPIs.
Humane Tech Tip: Use Anti-KPIs
Product teams often lean on Key Performance Indicators (KPIs) as relatively simple measures of success. However, since these metrics are likely influenced by the manipulative and addictive design of the technology we use, maybe we need an opposing metric. CHT recommend matching each KPI with an anti-KPI, or a ‘measure of failure’. According to them, this helps ensure that the original KPI is not improving by causing harm elsewhere.
An example could be engagement, a very popular KPI in many companies: how many people engaged with us, with our content, with our services? An anti-KPI for engagement could be, for example, the amount of misinformation being spread, so that we are not breaking down reality in the name of growth.
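To make the pairing concrete, here’s a minimal sketch in Python of how a team might report a KPI and its anti-KPI side by side, so the success metric can never be celebrated without checking its measure of failure. Everything here is a hypothetical illustration – the names, the numbers and the 2% ceiling are mine, not CHT’s.

```python
from dataclasses import dataclass


@dataclass
class MetricPair:
    """A KPI paired with its anti-KPI ('measure of failure')."""
    name: str
    kpi_value: float         # the success metric, e.g. engagement rate
    anti_kpi_value: float    # the harm metric, e.g. misinformation report rate
    anti_kpi_ceiling: float  # harm level above which KPI gains don't count

    def is_healthy_growth(self) -> bool:
        """Growth only counts if the paired harm metric stays under its ceiling."""
        return self.anti_kpi_value <= self.anti_kpi_ceiling


# Hypothetical example: engagement paired with misinformation reports.
engagement = MetricPair(
    name="engagement",
    kpi_value=0.34,          # 34% of users engaged this week
    anti_kpi_value=0.05,     # 5% of viewed posts were flagged as misinformation
    anti_kpi_ceiling=0.02,   # the team's agreed limit: 2%
)

if not engagement.is_healthy_growth():
    print(f"'{engagement.name}' is growing at the cost of its anti-KPI; "
          "investigate before celebrating.")
```

The point of keeping both numbers in one structure is that a dashboard or report built on it physically cannot show the KPI without its counterweight.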
The bottom line is that attention-seeking technology easily hijacks our cognitive biases.
So, how can we tell whether technology will serve humanity well?
Whether we like it or not, we all design the digital world that is a part of our lives, and we ought to take responsibility for its impact. When we go through the design process, we should ask ourselves some key questions, such as:
- The Big Picture
  - How does the product contribute to societal wellbeing beyond immediate user benefits? Does it help humanity thrive?
  - Are there any long-term consequences of the product’s adoption on society as a whole?
- Thriving
  - How does the product measure and track the wellbeing of its users?
  - Are there specific indicators or milestones that demonstrate users’ thriving as a result of using the product?
- Values
  - What values does the product claim to centre?
  - Are the stated values of the product aligned with its actual impact and actions?
  - How do the values embedded in the product influence user behaviour and decision-making?
- Externalities and incentives
  - What are some potential unintended consequences of using the product, and how can they be mitigated?
  - How do financial incentives or market dynamics shape the product’s development and distribution?
- Respect human nature
  - Does the product respect or harmfully exploit human nature?
  - In what ways does the product accommodate users’ natural behaviours and preferences?
  - Are there any features or design elements that exploit psychological biases or vulnerabilities?
- Build shared understanding
  - How does the product facilitate communication and collaboration among users?
  - Are there mechanisms in place to ensure that information is effectively shared and understood among different stakeholders?
- Fairness & justice
  - How does the product address disparities in access and usage among different demographic groups? Is anyone being left behind?
  - What steps can be taken to ensure that the product serves all users equitably?
- What can we improve to help people (and planet) thrive?
  - How can the product be redesigned or optimised to better align with sustainable and ethical principles?
  - Are there any specific areas where the product’s positive impact on people and the planet can be enhanced, or its negative impact minimised?
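If you want these questions to be more than a checklist on a slide, one option is to encode them as a lightweight sign-off gate in your design-review process. The sketch below is purely illustrative – the structure and the review function are my own assumptions, only the questions come from the list above, and it’s trimmed to two categories for brevity.

```python
# Hypothetical sketch: the webinar's questions as a design-review checklist.
# Only two categories shown; the others from the list above follow the same shape.
CHECKLIST = {
    "The Big Picture": [
        "How does the product contribute to societal wellbeing "
        "beyond immediate user benefits?",
        "Are there any long-term consequences of the product's adoption "
        "on society as a whole?",
    ],
    "Respect human nature": [
        "Does the product respect or harmfully exploit human nature?",
        "Are there any features or design elements that exploit "
        "psychological biases or vulnerabilities?",
    ],
}


def review(answers: dict[str, dict[str, str]]) -> list[str]:
    """Return every checklist question that has no written answer yet."""
    missing = []
    for category, questions in CHECKLIST.items():
        for question in questions:
            if not answers.get(category, {}).get(question, "").strip():
                missing.append(f"{category}: {question}")
    return missing


# A review only passes sign-off once every question has a written answer.
unanswered = review({"The Big Picture": {}})
print(f"{len(unanswered)} questions still need an answer before sign-off.")
```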
Perhaps the most important question, then, is not whether technology will serve humanity well, but whether we, as humans, will take responsibility for ensuring that we use technology in the most ethical and humane way.
Technology is an imperfect tool, and how well it serves us depends largely on those who wield it. We are imperfect too, but if we act ethically and with sustainability in mind, we can ensure that technology serves people and planet well.