Definition
In brief, Artificial Intelligence (AI) is software that attempts to simulate human thinking. Rather than executing a fixed list of instructions, the software works towards a purpose or goal. Typically, large amounts of labelled training data are ingested and analysed for correlations and patterns, which the software then uses to make predictions or decisions. AI often requires specialised hardware, and is commonly built with programming languages such as Python, R, Java, Julia and C++.
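To make the definition above concrete, here is a minimal sketch (in Python, one of the languages mentioned) of the core idea: software that finds patterns in labelled examples and uses them to make a prediction about new data. The data, labels and scenario are invented purely for illustration; this toy nearest-neighbour classifier stands in for far more sophisticated real systems.

```python
# Toy illustration: "learning" from labelled examples.
# Labelled training data: (hours of study, hours of sleep) -> outcome.
# All values are invented for demonstration.
training_data = [
    ((1.0, 4.0), "fail"),
    ((2.0, 5.0), "fail"),
    ((6.0, 7.0), "pass"),
    ((8.0, 8.0), "pass"),
]

def predict(features):
    """Predict by copying the label of the closest training example."""
    def distance(a, b):
        # Straight-line (Euclidean) distance between two feature pairs.
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    closest = min(training_data, key=lambda item: distance(item[0], features))
    return closest[1]

print(predict((7.0, 7.5)))  # nearest labelled examples are "pass"
```

The program was never told a rule; it simply measures which labelled example a new case most resembles, which is the pattern-matching heart of the definition above.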
Differentiation from humans
It is essential to define human intelligence before considering how it differs from AI. While AI can become quite clever in certain areas, it can never replace God-created human intelligence, not least because we do not yet fully understand human intelligence ourselves.
God created each of us with a predominance in one or two of these eight kinds of human intelligence, or learning preferences (see Howard Gardner’s Theory of Multiple Intelligences for more info):
In the early 1950s, Alan Turing, a young British polymath, suggested that humans use available information and reason to solve problems and make decisions, and asked whether machines could do the same. His 1950 paper, Computing Machinery and Intelligence (https://redirect.cs.umbc.edu/courses/471/papers/turing.pdf), discusses how to build theoretically intelligent machines and how to test their intelligence.
Five years later, a proof-of-concept workshop at Dartmouth College (New England, USA) brought together top researchers from various fields to discuss AI. Expectations from 1957 were high, with predictions that by the mid-1970s there would be computers with the general intelligence of an average human. Today, the human brain and its intelligence are still a mystery, for we are fearfully and wonderfully made by our Creator God (Psalm 139:14).
Computer software can simulate God-given learning styles and reasoning processes, and AI software tries to emulate cognitive skills such as reasoning, learning and self-correction, with varying degrees of success. It is better at some learning styles (such as logical/analytical), moderate at others (visual/spatial), and poor at others (such as solitary or social).
While God did infuse in humanity the desire to create and invent, it seems (so far) that replicating the thinking process of the human brain is unattainable. I can only stand in awe of God’s power, knowledge and creativity as we begin to figure out how our brains work and try to emulate them in software.
AI for Church-Use
I asked two of the most popular generative AI chatbots, Google Bard (based on LaMDA, the Language Model for Dialogue Applications) and ChatGPT (based on GPT, the Generative Pre-trained Transformer), how they see AI helping churches in the present and near future. These are some of their suggestions (my comments in italics):
AI Tomorrow
Let us remember that artificial intelligence algorithms are still just algorithms. Modern AI advances such as neural networks draw inspiration from the architecture of the human brain, but they are incapable of thinking like humans. They are simply a complex set of commands for a computer to follow and do not work like the human brain.
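The point above can be seen directly in code. A single artificial "neuron", the building block of a neural network, is nothing more than arithmetic: multiply, add, and squash the result into a range. The weights and inputs below are invented for illustration; a real network simply repeats this step millions of times.

```python
# A single artificial "neuron": just arithmetic commands, not thought.
# Input values and weights here are arbitrary, chosen for illustration.
import math

def neuron(inputs, weights, bias):
    """Weighted sum of the inputs, squashed by a sigmoid 'activation'."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 / (1 + math.exp(-total))  # squash to a value between 0 and 1

output = neuron([0.5, 0.2], [0.8, -0.4], bias=0.1)
print(output)
```

However brain-inspired the arrangement, each step is an ordinary instruction a computer follows, which is exactly why the author can say these systems "do not work like the human brain".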
Adam Zewe of MIT highlighted in a May 2023 article (https://neurosciencenews.com/ai-judge-rules-23238/) the shortfall of today’s AI in replicating human decisions, attributed mainly to the data the models are trained on (in the context of detecting “rule violations”). The researchers suggest improving dataset transparency and matching the training context to the deployment context, much as the calibration certificate of a speed camera can be requested by the accused in the case of a speeding ticket. Implementing such a framework of regulations and laws would go a long way towards defusing much of the perplexity and unease the general public may feel about the spread of AI in our current and future lives.
While some predict that Artificial General Intelligence (AGI) will emerge around the 2040s, Artificial Super Intelligence (ASI) is far from achievable with today’s technology. Should ASI ever be achieved (perhaps as early as the 2050s), the best thing we can do is prepare for it through regulation and laws. The behaviour of current AI, or of any future ASI, would be dictated by its goals and objectives. A crisis could occur if ASI’s goals diverged from humanity’s, leading ASI to break through human barriers and “take over the world”.
However, my eschatological reading of the Bible suggests a different “end” of the world, where “every eye shall see Him coming on the clouds”. I look forward to that glorious ending and a new beginning, don’t you?
Dr Daryl Gungadoo (Lecturer at Newbold College, Berkshire, UK), in “conversation” with OpenAI’s ChatGPT and Google’s Bard