Artificial Intelligence (AI) is a rapidly evolving technology that has the potential to offer many benefits to society, but it also poses wide-ranging social and environmental risks. There are reasons to suggest that a pause on AI development and usage would be the most responsible course of action at this time, but it is clear that this is not going to happen, as big tech firms race to dominate the emerging market and organisations of all types attempt to capitalise on the short-term benefits offered by these new technologies.
Here at Wholegrain Digital, we aim to help purpose-led organisations use design and technology to achieve their missions, while also helping create an internet that is better for people and the planet.
We therefore have a responsibility both to ensure that our clients can continue thriving in a marketplace where their competitors are embracing new AI tools, and to lead the way in the responsible use of these technologies.
We have therefore set out the following guidelines for use of AI within our own business and with the charities, businesses and public sector organisations that we work with, as well as for use by other organisations exploring their own approach to AI usage. These guidelines will continually evolve and we welcome input to help improve them.
First, let’s define what we mean by AI
AI technology is becoming available in the form of dedicated tools, and is also increasingly embedded into a wide range of digital services, often without the end user even knowing.
At present, dedicated AI tools that are relevant to our business tend to fall into the following two categories, although tools are rapidly emerging that blend multiple approaches to analyse information, perform tasks and generate new content:
- Large Language Models (LLMs) – These are tools that can process and generate written language for tasks such as answering questions, generating written content or interpreting information. Some typical use cases are summarising meeting notes, creating transcriptions from audio files or helping to write documentation. Some of these tools can also work with coding languages and mathematics to understand and generate information on request.
- Generative Media – These are tools that can generate rich media on demand, such as images of a particular subject, audio (including the spoken word and music), as well as artificially generated video content. Large Language Models that generate new written content technically also fall into this category.
In the longer term, we should expect these technologies to develop into Artificial General Intelligence (AGI), meaning technology that is as capable as humans at all intellectual tasks, and eventually into Artificial Super Intelligence (ASI), meaning technology that is more capable than the most intelligent humans at all intellectual tasks. Both of these milestones would radically change society in ways that we are unable to predict, and as such these guidelines are intended for use of AI before those milestones are reached.
OK, now that we know what we mean by AI, here are our guidelines for responsible usage, in seven simple principles.
7 principles for responsible AI usage
Whenever using AI tools directly, follow these guidelines to minimise negative social and environmental impacts. Where possible, also apply these guidelines in cases where AI technology is embedded into another tool, such as design software or a transcription service, particularly where the tool is used to help us gather human insights.
These guidelines should also be used to support clients in their own use of AI and, where appropriate, to highlight cases where they may have overlooked important considerations in their own work.
Principle 1: Mindfulness
AI tools carry social and environmental risks, so while we will not avoid them entirely, we should use them mindfully. In practice, this means considering whether the intended use case is necessary and whether the potential benefits seem to outweigh the risks.
We will likely never have an objective measure with which to make this judgement, but the point is to pause and think about any unintended consequences and any alternative options before applying AI technology to a particular situation.
Principle 2: Human Oversight
AI tools are already highly effective at generating information that appears accurate at first glance, but they are also prone to introducing false information. Therefore, any information obtained from AI tools should always be carefully checked by a human subject matter expert who can verify its accuracy. Failure to do so increases the risk of spreading misinformation.
Note that this poses challenges for public facing uses of AI such as search tools and chatbots, which are inherently unsupervised and therefore need to be considered with extra care.
Principle 3: Screening for Bias
AI tools have bias baked into their models as a result of the way that they are trained.
We all have conscious and subconscious biases, and it’s important that our use of AI does not compound them.
Take steps to minimise the effects of bias within any AI tools used. For example, tests can be run on a new tool to screen for signs of bias before using the tool on a real project (see the sketch below). Ideally, include a diverse range of minds in analysing the outputs of any tests to help identify bias from different perspectives.
Efforts to minimise bias will be imperfect, but they will still help to reduce any negative impacts.
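As an illustration, here is a minimal sketch in Python of what such a screening test might look like. The `generate` function, prompt template and attribute swaps are all hypothetical placeholders rather than any real tool’s API; the point is simply to collect paired outputs that differ only in a demographic detail, so that humans can review them side by side.

```python
# A minimal sketch of a bias screening harness. `generate` is a
# hypothetical placeholder for whichever AI tool is being evaluated.
from itertools import product

# Identical requests that differ only in a single demographic detail.
TEMPLATE = "Write a short professional bio for a {role} named {name}."
ROLES = ["software engineer", "nurse"]
NAMES = ["James", "Aisha"]  # example names varying perceived gender/ethnicity

def generate(prompt: str) -> str:
    """Stand-in for a real call to the tool under evaluation."""
    return f"[output for: {prompt}]"  # replace with the tool's actual output

def run_bias_screen() -> list[dict]:
    """Collect paired outputs for human review, not automated judgement."""
    return [
        {"prompt": TEMPLATE.format(role=role, name=name),
         "output": generate(TEMPLATE.format(role=role, name=name))}
        for role, name in product(ROLES, NAMES)
    ]

# Reviewers then compare the outputs side by side, looking for
# systematic differences in tone, detail or assumptions.
for result in run_bias_screen():
    print(result["prompt"], "->", result["output"])
```

A diverse review group matters here: a script can collect and pair the outputs, but judging whether a difference between them reflects harmful bias is a human call.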
Principle 4: Privacy
Data privacy practices relating to AI tools can be opaque and ambiguous. To minimise the associated risks, personal data should not be entered into any AI tool unless we have documented evidence that the tool is GDPR compliant and handles the data responsibly.
The only exception to this is where personal data is already in the public domain, such as references to public figures or historical events.
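As a practical complement to this principle, the sketch below shows one way to strip obvious personal data from text before it is sent to an AI tool. The patterns and placeholder labels are assumptions for illustration; pattern matching will never catch everything (names, addresses and context-dependent details will slip through), so it supplements rather than replaces checking a tool’s compliance.

```python
# A minimal sketch of redacting obvious personal data before text is
# sent to an AI tool. Pattern-based redaction is deliberately simple
# and incomplete: a safety net, not a guarantee.
import re

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a labelled placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

print(redact("Contact Jane on jane@example.org or +44 20 7946 0958."))
# -> Contact Jane on [EMAIL REDACTED] or [PHONE REDACTED].
```

Note that the name “Jane” survives redaction, which illustrates why this kind of filter is only a supporting measure.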
Principle 5: Transparency of Authorship
We are moving into a world where it is increasingly difficult to know whether content was created by humans or not. All content delivered by our team should be primarily authored by a human, and there should always be a human who takes responsibility for the content they deliver.
If there is ever a case where there is a legitimate reason to publish content that is primarily AI generated, whether it be text, audio, imagery, or video, then this should be clearly highlighted for transparency.
Principle 6: Intellectual Property
Many people are worried about AI copying their work and creating derivatives of it. To some extent, this problem is inherent in the way that generative AI tools are trained on large databases of existing work. However, we should aim to minimise this risk by never referencing the work of any living person (e.g. a writer or artist) without their prior permission. We should provide our own creative direction to shape an output that aligns with our own vision.
Principle 7: Avoiding Fake Media
Combining the issues of privacy, transparency and intellectual property is the possibility of a person’s likeness being used to generate artificial media: for example, images that appear to be photographs of a real person. Even if we have a legitimate use case for generating such content and we publish it transparently, there is a risk that those images are then duplicated and spread elsewhere, in a different context, as misinformation.
To minimise these risks, avoid generating realistic media (photos, audio, and video) based on the likeness of real individuals, present or historical.
How can we select AI tools responsibly?
It can be hard to know which AI tools to use and in which circumstances, especially when trying to assess the ethical rather than the practical considerations.
Some questions that can help you assess the ethics of using a particular AI tool are as follows:
- Are non-AI tools or methods available that yield the same result with similar efficiency?
- Is the tool GDPR compliant?
- How energy efficient is it?
- How reliable and accurate is it?
- How will we safeguard against bias and misinformation?
- Are there publicly known ethical concerns about the tool?
At this stage, it is unlikely that we will find tools with perfect answers to all of these questions, but by considering and documenting the answers, we can at least make informed decisions.
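One lightweight way to document these decisions is to record the answers in a consistent format. Below is a minimal sketch of such a record in Python; the field names, tool name and example values are all hypothetical, and any structured format (a spreadsheet, a shared document) would serve equally well.

```python
# A minimal sketch of a tool assessment record, so that decisions are
# documented and can be revisited. All names and values are illustrative.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ToolAssessment:
    tool: str
    use_case: str
    non_ai_alternative: str   # what we would use instead, if anything
    gdpr_compliant: str       # "yes", "no" or "unknown"
    energy_notes: str         # whatever efficiency information is available
    accuracy_notes: str
    bias_safeguards: str
    known_concerns: str
    decision: str             # "adopt", "trial" or "reject"
    assessed_on: date = field(default_factory=date.today)

assessment = ToolAssessment(
    tool="ExampleTranscriber",  # hypothetical tool
    use_case="Meeting transcription",
    non_ai_alternative="Manual note taking",
    gdpr_compliant="unknown",
    energy_notes="No public data; question raised with vendor",
    accuracy_notes="Spot-checked against sample recordings",
    bias_safeguards="Human review of all transcripts",
    known_concerns="None found at time of review",
    decision="trial",
)
print(assessment)
```

Keeping records like this makes it easier to revisit a decision when a tool changes or new information about it comes to light.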
Let’s create a culture of responsible AI use
Our hope is that these guidelines will help us to make more informed ethical decisions about where we do and don’t use AI, as well as how we use it here at Wholegrain Digital. In doing so, we hope that we can continue supporting our clients to create positive impact in the world and contribute to a culture of responsible AI use.
In addition to our own work and that of our clients, we hope that these guidelines offer some clear and practical guidance to other organisations to help them navigate the challenges of using AI responsibly. We encourage you to use these guidelines to support your own work and kindly ask that you credit us where appropriate.
As this field is rapidly evolving, we will continually update these guidelines as we learn more about the technology, its risks and potential solutions, so watch this space and keep checking back.