Artificial intelligence is coming for philanthropy

By Rhodri Davies, Founder of Why Philanthropy Matters
04 July 2023

You have probably heard a lot recently about artificial intelligence, thanks to tools like ChatGPT – the powerful chatbot from OpenAI that has gained 100 million users since its launch in November 2022, and which has reportedly become the fastest-growing consumer internet app ever. Recent weeks have also seen the release of GPT-4, a new version of the underlying Large Language Model (LLM) behind the chatbot that is set to improve it still further. And it’s not just ChatGPT: Microsoft has been putting significant resources into a partnership with OpenAI that will allow it to embed ChatGPT into its other software, such as Bing and Teams. Google, meanwhile, has launched its own ChatGPT rival, Bard, which will be integrated into the company’s search services.

These chatbots are worth paying attention to in their own right, but even more importantly they signal the likely coming of a wider wave of ‘generative AI’ tools that will be able to create text, images, music, and video from simple prompts in everyday language. Some see this as a cause for excitement, others as a cause for concern, but most agree that this technology is going to have a transformational impact across education, creative industries and many other areas of our society and economy. Philanthropy will undoubtedly be affected too, but what form will this take?

To find an answer, maybe we should start – as has become de rigueur in any article about this topic – by asking ChatGPT itself:

‘It is difficult to predict exactly how the development of conversational AI will affect philanthropy in the future. However, it is possible that the use of AI in philanthropy could lead to increased efficiency and effectiveness in areas such as fundraising, grant-making, and program evaluation. For example, AI chatbots could be used to interact with donors and guide them through the donation process, or to analyse large amounts of data to identify patterns and trends that could inform philanthropic strategy. AI could also potentially be used to automate certain tasks or processes, freeing up staff to focus on more high-level, strategic work.’

Like many of the responses from the current version of ChatGPT this seems perfectly credible, yet firmly at the level of a solid-yet-unimaginative B-minus essay. So we will quickly need to look beyond this starting point if we are to get a deeper sense of the potential impact of AI on philanthropy.
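(The experiment is easy to reproduce, incidentally. The sketch below shows one way to pose the same question programmatically through OpenAI’s API, using the 2023-era Python client; the model name, prompt wording and settings are assumptions made purely for illustration.)

```python
# A minimal sketch of asking ChatGPT the same question via OpenAI's API.
# Assumes the pre-1.0 `openai` Python package and an API key in the
# OPENAI_API_KEY environment variable; the model choice is illustrative.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[{
        "role": "user",
        "content": "How will the development of conversational AI "
                   "affect philanthropy in the future?",
    }],
    temperature=0.7,  # allow some variety, much like the chat interface
)

print(response["choices"][0]["message"]["content"])
```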

The potential of ‘AI for good’

One point worth picking up in what ChatGPT says is the mention of ‘analysing large amounts of data to identify patterns and trends’. This is certainly where most of the prominent examples of AI being used in the non-profit world have focussed to date, though rather than using machine learning to inform general philanthropic strategy (as ChatGPT suggests), the applications have so far come in the form of predictive tools within specific cause areas. Such tools require vast amounts of data in order to ‘train’ their algorithms, so there has been an inevitable skew towards the relatively small number of areas of non-profit work in which the availability of data is not a limiting factor. Examples include conservation, where there are large existing datasets of photographic and video content to draw on (e.g. WildTrack’s project using AI to identify cryptic animal prints), and medical research, where medical imaging data is used to train algorithms that can predict the risk of long-term illnesses and degenerative conditions with a high degree of accuracy (e.g. Moorfields Eye Charity’s work developing AI systems that can spot early signs of glaucoma and macular degeneration).
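To give a concrete (and deliberately simplified) sense of what these projects involve technically, the sketch below fine-tunes a pre-trained image classifier on a folder of labelled images. It assumes PyTorch and torchvision are available; the dataset layout, class labels and training details are illustrative and not drawn from any of the projects mentioned above.

```python
# Illustrative transfer-learning sketch: adapt a pre-trained image model to a
# new classification task (e.g. species in camera-trap photos). All paths and
# settings are hypothetical; real projects involve far more data and care.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_data = datasets.ImageFolder("data/train", transform=transform)  # one sub-folder per class
loader = torch.utils.data.DataLoader(train_data, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_data.classes))  # new classification head

optimiser = torch.optim.Adam(model.fc.parameters(), lr=1e-3)  # train only the new head
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # a single pass over the data, for brevity
    optimiser.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimiser.step()
```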

The challenge when it comes to using similar approaches to inform more general philanthropic strategy is that – for now at least – we don’t have the same richness of data when it comes to philanthropy itself. For it to become a realistic possibility to train specialist algorithms that can guide philanthropic decision-making (which I have previously suggested we might term ‘philgorithms’), we will need a far greater quantity and quality of data. That includes data on where funding currently comes from, where it is distributed, which organisations are working in given fields, and how effective they are. (And presumably some agreed-upon measures of ‘effectiveness’ as well, which is a hugely contentious point of debate in its own right.)
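To make that data requirement a little more tangible, the snippet below sketches the kind of consistently structured record a hypothetical ‘philgorithm’ would need in very large quantities. The field names are illustrative assumptions, and the empty effectiveness field stands in for the agreed measure that does not yet exist.

```python
# A sketch of the sort of structured grant record a hypothetical 'philgorithm'
# would need at scale. All field names and values are invented for illustration.
grants = [
    {
        "funder": "Example Foundation",
        "recipient": "Example Community Trust",
        "cause_area": "homelessness",
        "amount_gbp": 50_000,
        "year": 2022,
        "effectiveness_score": None,  # the contentious, still-missing ingredient
    },
    # ...thousands more records, coded consistently across many funders,
    # before any meaningful pattern-finding or recommendation is possible.
]
```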


The examples that exist so far of ‘AI for good’ definitely offer an intriguing glimpse of how the technology might be put to work solving social and environmental problems. However, many civil society organisations (CSOs) and funders may struggle to see the relevance of this to their work because it seems so far beyond their current level of engagement with technology. For these organisations, the immediate impact of AI is much more likely to be felt in terms of how it affects their internal processes and the operating environment in which they work, rather than through them developing and deploying AI tools of their own. Not a great deal of thought has been given so far to this aspect of AI’s impact on the philanthropy sector (with a few notable exceptions such as Beth Kanter and Allison Fine’s book The Smart Nonprofit), but the arrival of ChatGPT should be the prompt (if you will excuse the pun) for us all to give it far more attention.

Could a robot write my grant application?

The most obvious place that generative AI tools like ChatGPT are likely to have an impact within CSOs is in creative activities, since the most straightforward use cases revolve around generating content. This will almost certainly include producing text and images for websites, social media and marketing materials; but it could also extend to things like writing entire grant applications or fundraising bids. Will this then allow us to ‘automate processes, freeing up staff to focus on more high-level, strategic work’, as ChatGPT suggests? This would certainly chime with Kanter and Fine, who argue that one of the major impacts of AI for CSOs will be the ‘dividend of time’ it brings to human staff. And in the long term this seems like a realistic assumption: just as the introduction of supercomputers into chess in the 1990s resulted in a form of human/machine hybrid ‘centaur chess’, it is probable that the future will bring some version of ‘centaur philanthropy’, in which humans and AI systems work in tandem.

For now, however, the potential utility of generative AI tools is still limited by the fact that they can be error-prone. This is annoying when it comes to creative tasks (although occasionally also quite funny), but it presents a far more serious problem when it comes to information-based tasks, where accuracy may be essential, and the risks associated with failure can be far higher. (Google recently found this out to its cost, after an inaccurate answer given by its new Bard AI chatbot in a promotional video resulted in more than $120 billion being wiped off the stock value of parent company Alphabet). It is not even just honest mistakes that are the problem, either: ChatGPT has shown a worrying tendency to paper over any cracks in its ability to answer queries by simply making things up – cheerily offering up credible-sounding references to sources that don’t exist or stating facts that turn out to be complete fabrications. For this reason alone, it is unlikely that humans will be cut out of the loop just yet, as most organisations will surely think it prudent to have human oversight of any AI outputs for a while to come.
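What might that human oversight look like in practice? One modest, partial measure is to automatically flag any web sources cited in AI-generated text that do not resolve, so that a human reviewer knows where to focus their checks. The helper below is a hypothetical sketch of that idea; a URL that exists can still fail to support the claim attached to it, so this supplements rather than replaces human review.

```python
# Hypothetical helper: flag URLs cited in AI-generated text that do not resolve,
# so a human reviewer knows which claims to scrutinise first. Resolving is
# necessary but nowhere near sufficient evidence that a citation is genuine.
import re
import requests

URL_PATTERN = re.compile(r"https?://[^\s)\]>\"']+")

def flag_unresolvable_citations(generated_text: str) -> list[str]:
    suspect = []
    for url in URL_PATTERN.findall(generated_text):
        try:
            response = requests.head(url, allow_redirects=True, timeout=10)
            if response.status_code >= 400:
                suspect.append(url)
        except requests.RequestException:
            suspect.append(url)
    return suspect

draft = "...chatbot-generated text citing https://example.org/annual-report-2022 ..."
for url in flag_unresolvable_citations(draft):
    print(f"Check manually before publishing: {url}")
```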

Using AI to learn about the organisations philanthropy works with 

Of course, even if CSOs themselves remain cautious about relying on information from ChatGPT and its rivals, that doesn’t mean that everyone else won’t use them to find information about CSOs. A lot of the discussion about the impact of ChatGPT has centred on its potential to transform how we search for things on the internet, by shifting from a list-based approach (where we get a ranked series of results to our queries) to an answer-based one (where we are able to focus in on a single result, or handful of results, through an interactive process of question and response that allows us to refine what we are looking for). This could radically alter how we make choices about where and how to give: both at the individual level and at the institutional level (if grantmaking organisations start using the tools in this way too). The most fundamental difference is that even when a list is ranked, you always have the option of digging deeper by looking further down the list; whereas when you have tailored recommendations, the temptation will be to accept them at face value (since getting past the initial answers is likely to require a lot of extra work in crafting additional prompts). It therefore becomes even more important to understand the nature of the algorithms underpinning the conversational interfaces, and how they arrive at their answers.


It is now widely appreciated that algorithmic systems tend to reflect historical statistical biases in their training data, unless we actively guard against this. Thus, when it comes to relying on tailored algorithmic recommendations to shape our philanthropic choices, we should be concerned about what sorts of biases they may exhibit. It is easy to imagine, for instance, how an algorithm that relied on information about what people have done in the past to guide recommendations about where they should give in the future would end up favouring mainstream causes and well-known non-profit brands, leading to less-popular causes and lower-profile organisations losing out even more than they do already. And even if the algorithms are not necessarily biased in these ways, there will still be the challenge of how to optimise for them effectively. Most CSOs have only just got their heads round traditional Search Engine Optimisation (SEO), so if a whole new field of Conversational Interface Optimisation is about to open up, this will place even more strain on skills and resources in the sector.
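Returning to the bias point for a moment: to see how an algorithm trained on past giving can end up favouring the already popular without anyone intending it, consider the deliberately naive recommender sketched below. All figures are invented for illustration.

```python
# Toy illustration of popularity bias: ranking causes purely by historical
# donation counts simply amplifies whatever was already popular.
from collections import Counter

past_donations = (
    ["disaster relief"] * 900
    + ["medical research"] * 700
    + ["animal welfare"] * 350
    + ["prisoner rehabilitation"] * 12
    + ["rare disease advocacy"] * 5
)

def recommend(history: list[str], k: int = 3) -> list[str]:
    """'Learn' from the past by ranking causes on raw frequency."""
    return [cause for cause, _ in Counter(history).most_common(k)]

print(recommend(past_donations))
# ['disaster relief', 'medical research', 'animal welfare'] – the low-profile
# causes never surface, and each round of giving steered by such a tool
# widens the gap further.
```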

The risks for civil society posed by algorithmic bias are not limited to how it will affect the giving choices of donors and grantmakers, either. Algorithmic processes are also increasingly being used by financial institutions and regulators for tasks such as risk rating and credit scoring. Historically there have been many issues with civil society organisations being perceived as particularly high risk, and therefore finding themselves falling foul of regulators or being excluded from mainstream banking services. If these mistaken perceptions about CSOs get hard-coded into algorithms that become the de facto methods of making decisions, this will present even greater challenges in the future. At least with human-made decisions there is some possibility of appeal, and some level of accountability (albeit not always very much). When it comes to automated decisions that are made by opaque ‘black box’ algorithms, however, it may be entirely unclear to anyone exactly why the decision was taken, and where accountability lies if the decision is found to be incorrect. Any civil society organisation that ends up on the wrong side of such a decision may find itself in a Kafkaesque nightmare where it is almost impossible to get answers, let alone redress.

Bias in AI systems will also have much wider repercussions across society, so it is vitally important that philanthropic funders and CSOs understand how it is likely to affect the people and communities that they work with and adapt their funding and activities accordingly. But as well as dealing with the symptoms of algorithmic bias, we must also be willing to challenge its underlying causes. Like all other aspects of technology, the development of bias in algorithmic systems is not simply an inevitability that we must passively accept. It reflects active choices on the part of governments, academics and the private sector to develop technology in particular ways. And these are choices that may need to be challenged if we want to change the trajectory of development and ensure that technologies such as AI end up benefitting society rather than harming it.

Risks of AI beyond bias

The risks associated with algorithmic bias are important to understand, but they are not the only reason for CSOs and funders to be wary about jumping in headlong when it comes to AI. On a pragmatic level, there is also the risk of developing dependency: if you become over-reliant on a particular tool, it can make you less resilient to future events – as many organisations that have based all of their fundraising or communications on a small handful of social media channels have come to realise. If those platforms subsequently change their terms of use, start charging for services that you have come to rely on being free, or start favouring certain types of content in a way that disadvantages you, what do you do then? Similar concerns apply to AI; because if civil society organisations become dependent on tools like ChatGPT, they will cede a large amount of control to technology companies that are driven by commercial motivations, which more than likely will not align with the motivations and goals of civil society.

There is already some evidence of where this kind of misalignment of values between technology companies and civil society might occur. Critics claim, for instance, that generative AI tools are often built on the unacknowledged work of human creators, and in some cases this has strayed into outright copyright infringement. The photo and art library Getty Images has already launched a lawsuit against Stability AI, the developer of the AI art generator Stable Diffusion, for using its images without a licence. Meanwhile, investigative reporting from Time magazine back in January uncovered the fact that workers employed in Kenya by the tech company Sama were paid less than $2 per hour to undertake often traumatic content moderation work to address issues with ChatGPT’s precursor, GPT-3 – work without which ChatGPT itself would not have been possible. If more cases like this come to light, it will raise challenging ethical questions for civil society organisations about whether to use these tools at all.

There are clear reasons for caution about AI, and civil society would do well to avoid getting caught up in the current mad scramble surrounding ChatGPT and other generative AI tools. However, simply refusing to engage is not the solution either. As AI systems increasingly come to be used in many other areas of our lives, it is an unavoidable reality that they will have an impact on CSOs. The danger is that if CSOs approach this entirely passively, AI then becomes something that happens to them, rather than something they can harness effectively or shape. Instead, we must aim for funders and CSOs to play a more active role: by demonstrating how AI can be used in ways that benefit society; by highlighting the unintended consequences of implementing it carelessly; and by ensuring that they have a voice in debates about how to shape the technology so that it brings benefits, rather than harms, to our society.

Rhodri Davies is the founder of Why Philanthropy Matters.

The original version of this article was published on Alliance Magazine’s website in March 2023.



This article is part of the July 2023 special edition: Philanthropy and AI.
