Can AI bring out the best of us?
by Pete Phillips, Premier's Head of Digital Theology
(This article was originally published on Premier Digital)
Both of the words in the phrase “Artificial Intelligence” are misleading, at least at the moment. “Artificial” sounds as though it doesn’t exist, and it does. And “intelligence” suggests something organised and sentient, which it isn’t.
AI makes us think of films like “Terminator”, “I, Robot” or “The Matrix”, where machines take over the world and seek to destroy humanity (and pictures like the one above are not very helpful!). But that kind of AI still seems to be a long way down the road. Some people would refer to it as Artificial General Intelligence. Ray Kurzweil has written about the significance of the moment when all knowledge is available online and computers can network together to achieve sentience or independent thinking – the moment of singularity.
Indeed, the award-winning Beth Singler, a theologian at Homerton College, Cambridge, has explored how unhelpful these images are. She talks instead about how AI seeps into the contemporary mindset.
For now, we might want to use a less sci-fi based understanding of AI as Machine Learning or Algorithmic Problem Solving. An algorithm is a set of rules which tell a computer to analyse a set of data in order to solve a specific type of problem. The computer uses the data it has been fed to learn how to do the task it has been set.
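To make that idea a little more concrete, here is a minimal toy sketch in Python – with entirely made-up messages and labels, not any real product's code – showing the basic idea of a program deriving its own rule from labelled examples rather than being told the rule directly:

```python
# A toy illustration of "learning from data": the program is given labelled
# examples and works out a rule from them, instead of being handed the rule.

from collections import Counter

# Hypothetical training data: short messages labelled "spam" or "ok".
training = [
    ("win a free prize now", "spam"),
    ("claim your free reward", "spam"),
    ("meeting moved to tuesday", "ok"),
    ("see you at church on sunday", "ok"),
]

# "Training" here simply means counting how often each word appears per label.
word_counts = {"spam": Counter(), "ok": Counter()}
for text, label in training:
    word_counts[label].update(text.split())

def classify(text):
    """Pick the label whose training words overlap most with the message."""
    scores = {
        label: sum(counts[word] for word in text.split())
        for label, counts in word_counts.items()
    }
    return max(scores, key=scores.get)

print(classify("free prize inside"))      # likely "spam"
print(classify("sunday meeting agenda"))  # likely "ok"
```

Real machine learning systems are vastly more sophisticated than this, but the principle is the same: the quality of what the computer "learns" depends entirely on the data it is fed.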
This form of AI has led to some great advances for computing in general. Many of our climate change predictions are based on machine learning algorithms; drug companies are using algorithms to find new antibiotics and other drugs; and Google Maps is a form of machine learning AI which takes the data from everyone running the app as a routefinder, uses it to predict where the next traffic jam is going to be, and then re-routes everyone down an alternative route – sometimes into an impassable ford!
But at the same time, machine algorithms can still get things terribly wrong.
Algorithms are used by social media companies to send you adverts – adverts on Facebook, Google, Amazon. Such ads take your data, your input into your smartphone or computer, and feed you back ads for the things they think you want. Indeed, Facebook has recently said its aim is to send you the ad before you even type in a request for information about the product. In other words, algorithms are about creating an identity: who you are, what you like, what your friends like, what brings you pleasure. Facebook's AI creates a “persona” – a representation which can be used to focus ads in your direction. They all do it – see the documentary The Social Dilemma for a critique of this process.
But at the same time, data sets which train algorithms can be plagued with human bias.
If we use all the police records for a certain city to predict where crime will happen, then the predictions will be based on historical records which don’t capture the social conditions that gave rise to those data points. So, areas of high crime might reflect the outcomes of deprivation, unemployment, segregation and/or racism. It doesn’t mean that such areas would remain crime hotspots if society dealt with those issues. But because the algorithm finds a string of offences in one area, it may well send police to that area – and police will find crimes there, reinforcing the pattern.
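A small simulation sketch in Python – with invented numbers, purely for illustration – shows how that feedback loop can arise even when two areas have exactly the same underlying crime rate, simply because the historical records start out skewed:

```python
import random

random.seed(1)

# Hypothetical assumption: both areas have the SAME underlying crime rate.
true_rate = {"Area A": 0.3, "Area B": 0.3}

# But the historical record starts out skewed towards Area A.
recorded = {"Area A": 50, "Area B": 10}

for week in range(20):
    # The "algorithm": allocate patrols in proportion to recorded crime.
    total = sum(recorded.values())
    patrols = {area: round(10 * count / total) for area, count in recorded.items()}

    # Police can only record crimes in the places they actually patrol.
    for area, n_patrols in patrols.items():
        for _ in range(n_patrols):
            if random.random() < true_rate[area]:
                recorded[area] += 1

print(recorded)  # Area A's recorded count keeps pulling further ahead of Area B's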
It is true that algorithms can be fed more complex data sets, but the internet is already full of facial recognition systems that perform poorly on black faces, and of Google algorithms which send back results for “cute babies” featuring almost no babies of anything other than Caucasian origin. I just checked this on Google for myself (I don't allow cookies and I was in incognito mode on my browser): of the first 50 images Google sent in response to my request for “cute baby images”, 45 were white babies and 5 were from a non-Caucasian background. 90% of the world’s cute babies are white? Well, I know all babies are cute, but you get my point.
In her book Atlas of AI, Kate Crawford argues that Artificial Intelligence is neither artificial nor intelligent. It is embodied in expensive computer systems, in concrete buildings and server farms, in expensive technologies which consume vast quantities of extremely rare minerals, often mined in deplorable conditions. AI is presented as a replication of human intelligence. But no one knows how the brain works, how we gain self-identity or how human intelligence works.
Even if we could create a neural network which replicated the brain, would we ever create an intelligent being, when the brain only works because of body memory, muscle memory, and the automated systems running throughout our organs? The AI industry wants to focus on our brains, but Christian and Jewish faith knows the importance of every part of the body, not least the heart.
Crawford talks of AI as a registry of current power – as the latest form of capitalism, which seeks to extract wealth from the earth and turn it into money for the rich at the expense of everyone else. In his talk 'You are the Product', Aral Balkan talks of humans being farmed for their data – if something is free, we are the product. In his book Attack Surface, Cory Doctorow reminds us that governments want our data so they can sell it to AI developers – to make money, but also to make power.
If it’s really that bad should Christians be involved in AI at all?
Steven Croft, the Bishop of Oxford and a fantastic blogger and commentator on the church and technology, was an important member of the House of Lords' working party on AI which produced the report AI in the UK: Ready, Willing and Able?. My own comments here follow the same kind of arguments given in that much larger and more detailed official report about how we define AI, what its benefits are and what its problems may be.
As part of the process, Bishop Croft developed and disseminated what became known as The 10 Commandments of AI. The key element of these commandments is that they are all pro-human, all focussed on the benefit of humanity rather than on the AI industry or technology firms:
- AI should be designed for all, and benefit humanity.
- AI should operate on principles of transparency and fairness, and be well signposted.
- AI should not be used to transgress the data rights and privacy of individuals, families, or communities.
- The application of AI should be to reduce inequality of wealth, health, and opportunity.
- AI should not be used for criminal intent, nor to subvert the values of our democracy, nor truth, nor courtesy in public discourse.
- The primary purpose of AI should be to enhance and augment, rather than replace, human labour and creativity.
- All citizens have the right to be adequately educated to flourish mentally, emotionally, and economically in a digital and artificially intelligent world.
- AI should never be developed or deployed separately from consideration of the ethical consequences of its applications.
- The autonomous power to hurt or destroy should never be vested in AI.
- Governments should ensure that the best research and application of AI is directed toward the most urgent problems facing humanity.
Jason Thacker, amongst many other theologians in the States, is wary of AI. He serves as Associate Research Fellow and Creative Director at The Ethics and Religious Liberty Commission of the Southern Baptist Convention, and is the author of The Age of AI: Artificial Intelligence and the Future of Humanity (Zondervan).
In his important and immensely readable book, and in regular posts in the Christian media, Jason outlines the trends around AI and the application of theological/scriptural principles to AI. I am really looking forward to his forthcoming book: Following Jesus in the Digital Age.
In a recent Baptist Press article Jason argued that the following four trends would dominate 2022:
- Content moderation – Jason thinks that this will become an increasingly contentious issue in 2022 as politicians, particularly within the democratic Global North, seek to protect free speech while at the same time calling on social media companies to censor hate speech, racism and bullying. The question is whether you can do both at the same time in increasingly divided societies. How will religious freedom fare in such an environment?
- Misinformation/fake news – exploring the problem through information overload, Jason argues that we often cannot determine whether news is fake because of the sheer mass of alternatives on offer – what is truth in 2022? Jason argues that one of the most countercultural things we can say is “I don’t know”.
- Pervasive Surveillance – how much will AI be used to extend surveillance of the public space? Already we hear of new cameras on UK roads able to identify 15,000 drivers using their mobile phones while driving; already we know of Chinese surveillance of the public which scores people according to their compliance with government directives. Will governments in the US and EU seek to legislate in this area – a digital bill of rights, perhaps similar to Steven Croft's Ten Commandments of AI?
- Digital Authoritarianism – using the example of the Chinese Communist Party, which uses digital tech to reward compliant citizens and punish unruly ones, Jason reminds us of the Party's genocide of the Uyghur Muslims. But China is not the only example, and states are not the only users of such methods. As digital is used to suppress, perhaps we also need to think of ways in which digital might be used to set people free from such oppression?
Premier Digital and AI
Over the next month, Premier Digital are going to focus much of our output on exploring AI and its potential impact on the Church and on our world.
We’re going to look at AI: Faith, Ethics and the Future.
We’ve commissioned videos and blogposts from around the world to explore what AI means for different people in different contexts. And at the end of the month we’re going to have a webinar bringing some of the key people together to discuss AI and the Church.
We hope that you’ll enjoy the series.
Perhaps tell your church pastor that she should be reading this month's blogposts and watching the videos. Send them to your bishop! Send in your recommendations of books, reading material or YouTube videos that we could share in the blogposts.
Let’s do this together…on digital…to make us better human beings.
Pete