AI Will Change the Work of Civil Society Organizations
Civil society organizations and citizens alike must pay close attention to the ongoing AI revolution and be aware of the impact it will have on their work and, more broadly, on their lives. Here at The Good Lobby, we have made it our mission to raise awareness about the risks and opportunities associated with AI, so as to equip both organizations and citizens with the knowledge and tools to capitalize on the opportunities while minimizing the risks.
To this end, we put together a list of 10 specific ways in which AI will impact civil society organizations, citizen movements and citizen-lobbyists.
In search of Good AI
From the EU to China and from Microsoft to Google, the whole world is talking about “good AI”, that is to say AI which is used for the benefit of humanity as a whole. Nevertheless, there seems to be little consensus over what exactly that means.
NGOs and citizens must, and hopefully will, play a key role in determining what “good AI” looks like. This role should not be limited to shaping the policy around the technology; it should also extend to ensuring that the principle is upheld in the long run, with every iteration of the technology serving the best interests of citizens.
The importance of working with start-ups and tech companies
Start-ups are at the forefront of AI development. While states are paying close attention to AI, and some have been investing heavily in its development, dialogue with public institutions alone will not be sufficient to ensure that AI works for humanity. To have the greatest impact, NGOs and citizen-lobbyists will have to work with AI developers, actively engage with start-ups, and focus part of their advocacy work on this field as well.
Threats to democracy
Democracy and the rule of law are also subject to change in the age of artificial intelligence. The spread of fake news – targeted or not – on different social media platforms is one example. Using similar means to influence the voting intentions of undecided citizens – as in the Cambridge Analytica case – is another.
What this now infamous example showed is how democracy can be put under threat by private companies that know how to exploit data and AI. With consequential elections taking place virtually every year, such threats to democracy will not be a limited exception but rather a common occurrence.
New forms of discrimination
Discrimination is a wide-ranging issue which manifests in different ways and is tackled by numerous organizations. The age of AI adds a new dimension to it: so-called algorithmic discrimination. Although there is a tendency to believe that decisions reached by algorithms are free of flaws and therefore superior, in reality their quality depends on the engineers who created them and on the data they use. As a consequence, algorithms can – and do – inherit the biases of their creators and those present in their data. As algorithmic decision-making becomes more widespread, citizens should be aware of the risks, and organizations should mobilise to support those discriminated against in this way.
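How an algorithm inherits bias from its training data can be shown with a toy sketch. Everything below – the applicant data, the groups, the "learned" threshold rule – is invented purely for illustration; real systems are far more complex, but the mechanism is the same: if the historical record approved one group less generously, a model fitted to that record will do so too.

```python
# Toy illustration: a "model" that learns approval thresholds from
# biased historical records and then reproduces that bias.
# All data and groups here are hypothetical.
historical = [
    # (group, score, approved) -- group "B" was historically held to a higher bar
    ("A", 60, True), ("A", 55, True), ("A", 50, True),
    ("B", 60, False), ("B", 70, True), ("B", 75, True),
]

def learn_threshold(records, group):
    """'Train' by taking the lowest score ever approved for a group."""
    approved_scores = [s for g, s, ok in records if g == group and ok]
    return min(approved_scores)

thresholds = {g: learn_threshold(historical, g) for g in ("A", "B")}
print(thresholds)  # {'A': 50, 'B': 70} -- B now needs a higher score

def decide(group, score):
    """Apply the learned, group-dependent threshold."""
    return score >= thresholds[group]

# Two applicants with identical scores receive different outcomes,
# purely because the training data encoded past discrimination:
print(decide("A", 60), decide("B", 60))  # True False
```

Note that no one wrote "treat group B worse" anywhere in the code: the unfair rule emerges entirely from the data, which is exactly why such bias is easy to overlook.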
Transparency and accountability
Algorithmic decision-making is problematic not only because of bias but also because of its opacity. It is not uncommon for the very people who created an algorithm to be unaware, due to its complexity, of how it reached a particular decision. This is often described as “black box” AI: we know the input and the output, but not the journey that led from the former to the latter. There are already calls for AI to become more transparent and easier to explain, and this is one of the main principles on which the EU has focused its approach. Given how important transparency is for accountability, this is likely to be a key issue in the future. Further, there is also a need to demystify AI and make its implications easier to understand for all citizens, so that they know what to expect and can have a say in it.
Data protection will be ever more important
Data protection has been getting a lot of attention in the past few years, especially since the adoption of the GDPR, which has radically changed the way the general public approaches the issue. AI adds a new dimension to this discussion. As AI systems require data in large amounts, data will become a currency on which private companies and states alike place considerable value. Citizens need to be aware of this, and of the ways in which their data can be used, sometimes to their detriment. One concrete example is algorithmic price discrimination, where, depending on the data collected, certain sites can show different prices to different customers.
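A minimal sketch can make algorithmic price discrimination concrete. The visitor signals and multipliers below are entirely invented for illustration – no real retailer's logic is being described – but they show the core idea: the price quoted is a function of what the site has inferred about you from your data.

```python
# Hypothetical illustration of algorithmic price discrimination:
# the quoted price depends on data collected about the visitor.
# All signals and multipliers are invented for this sketch.
BASE_PRICE = 100.0

def quote_price(profile):
    """Adjust the base price using inferred willingness to pay."""
    price = BASE_PRICE
    if profile.get("device") == "high-end":
        price *= 1.15   # premium device -> assumed higher budget
    if profile.get("repeat_visits", 0) > 3:
        price *= 1.10   # repeated checks -> urgency inferred
    if profile.get("competitor_tab_open"):
        price *= 0.90   # discount to undercut a rival being compared
    return round(price, 2)

# Two customers see different prices for the same product:
print(quote_price({"device": "high-end", "repeat_visits": 5}))  # ~126.5
print(quote_price({"device": "budget"}))                        # 100.0
```

The customer never sees this function, of course; from the outside there is only a price, which is precisely why this practice is so hard to detect without access to the data and the logic behind it.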
The law will change
All the matters discussed so far will certainly crystallise into legislative change. Existing legislation will be updated to reflect the changes brought by AI, while new legislation will be passed to fill in the gaps. In the EU, the White Paper on AI released in February 2020 is the first step towards a legislative proposal that will, to some extent, regulate AI. At the moment, the shape of any future legislation is heavily influenced by private companies, which are already very vocal about what they would like to see and why. Civil society organizations and citizens must use their voices and play a role in how AI is regulated, at both the European and national levels.
Education, Jobs and Automation
Although the numbers are still up for debate, it is clear that AI will create millions of jobs while simultaneously leading to the loss of many others. The jobs created will require vastly different skills from the ones lost. As a result, the job market will change drastically, and the education system must follow suit to enable future generations to thrive in the age of AI. Further, education and support systems must also be put in place to help those who lose their jobs to automation retrain.
Risks and opportunities
For each organization and each citizen, AI will bring a number of risks – some of which have been discussed above – but also numerous opportunities. It is important to be aware of both. Without this awareness, organizations run the significant risk of seeing their hard work rendered less effective because it misses the AI dimension. In terms of opportunities, AI will bring new strands of work, new sources of funding, and numerous new ways to do better work more quickly.
Engagement and Fundraising
AI is a tool that can certainly be used for good, and here at The Good Lobby this is what we will be advocating for. Charities and NGOs can use AI to better engage with their supporters and boost fundraising. An excellent example is Yeshi, the chatbot created by Charity: Water to interact on Facebook Messenger with potential supporters, telling the story of a young girl living in Ethiopia and the difficulties she encountered in getting clean water for herself and her family.
More recently, a report funded by the Bill & Melinda Gates Foundation – AI4Giving – has explored the ways in which AI could impact the non-profit sector, from enhancing fundraising to better understanding what motivates donors to support an organization.