This piece was written by Ben Daniel-Thorpe for the “The Future, and You” destination in October. As a data scientist with a PhD in Mathematics, Ben is an expert on AI, and I’m extremely grateful to have had his “first-hand” perspective on the topic. This short read should help ground your awareness of the practical realities, risks and benefits of AI (in juxtaposition to my fanciful waffling!).
(~3 Minute Read)
When you think of artificial intelligence (AI) you probably think of Arnie in Terminator, in which he plays an AI with near human-level intelligence (known as artificial general intelligence). But today’s AI is much narrower than this. For example, there exists AI that can classify a tumour as cancerous or not. There is no clever physical robot here, just a computer program that takes an image as input and performs the narrow task of classifying a tumour. You cannot suddenly start using this program to diagnose other diseases – it only does one thing.
As a practitioner in the field, I think that statistical learning is a better name than AI for the technology we have at the moment. What statistical learning does is take data (lots of it) and figure out the statistical patterns that exist in that data. For example, malignant tumours may tend to be less round than benign ones* and our computer program can discern this by looking at lots of examples. Alternatively, the dataset might be a country’s Facebook likes, and the computer program can statistically learn whether a given person is likely to vote or not in an upcoming election (see Cambridge Analytica). These two examples show both the power and the dangers of AI.
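To make the idea concrete, here is a minimal sketch of statistical learning in the tumour example. The program is never told the rule; it infers a decision threshold from labelled examples. The numbers and the single “roundness” feature are invented for illustration – real systems learn from thousands of images and far richer features:

```python
# Toy "statistical learning": infer a roundness cutoff from labelled
# examples, rather than being programmed with the rule directly.
# All data below is hypothetical, purely for illustration.

def learn_threshold(examples):
    """Find the roundness cutoff that best separates the training data.

    examples: list of (roundness, label) pairs, where label is
    "benign" or "malignant". Returns the candidate threshold with
    the fewest errors on the training examples.
    """
    candidates = sorted(r for r, _ in examples)
    best_threshold, best_errors = None, len(examples) + 1
    for t in candidates:
        # Predict "benign" when roundness >= t, else "malignant",
        # and count how many training labels that gets wrong.
        errors = sum(
            1 for r, label in examples
            if (label == "benign") != (r >= t)
        )
        if errors < best_errors:
            best_threshold, best_errors = t, errors
    return best_threshold

def classify(roundness, threshold):
    """Apply the learned rule to a new, unseen tumour."""
    return "benign" if roundness >= threshold else "malignant"

# Hypothetical training set: (roundness score 0-1, pathologist's label).
training = [
    (0.92, "benign"), (0.88, "benign"), (0.81, "benign"),
    (0.45, "malignant"), (0.52, "malignant"), (0.60, "malignant"),
]

threshold = learn_threshold(training)
print(classify(0.90, threshold))  # a very round tumour -> benign
print(classify(0.40, threshold))  # a very irregular one -> malignant
```

The key point is that the “knowledge” – the threshold – comes out of the data, not out of the programmer’s head, which is exactly why such a program cannot be repurposed for a different disease without new data.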
We often find that AI both creates and solves problems. For instance, many people are concerned about the role AI plays in surveillance technology, and it is in fact statistical learning that allows China to carry out routine surveillance of its population through CCTV. But the US and UK did not need AI to carry out mass surveillance on their populations (see the Snowden revelations). Indeed, AI could form the basis of a more targeted system that protects privacy whilst ensuring the security services get the information they need (see ThinThread). Further, social networks such as Facebook are beginning to employ statistical learning to identify bad actors, such as peddlers of fake news and election tamperers.
Two other AI technologies coming very soon are driverless cars and warehouse robots. Both are examples of statistical learning applied to physical tasks that have traditionally been carried out by humans. A lot of people will lose their jobs because of this technology. Is this immoral? The fact is, these technologies genuinely create more wealth – we can have food delivery and transport systems that are near autonomous, which has clear benefits, especially in a pandemic-struck world. The real question is: how do we redistribute this additional wealth? Do we allow it to go to the already-rich shareholders of the companies that develop this AI, or do we accept as a society that we simply require less human labour than we once did and support people to work less through progressive taxation and ideas such as universal basic income?
People often ask me if I think AI is dangerous, and I normally say that yes, it is dangerous, but so is electricity and I wouldn’t want to live without that. Like electricity, statistical learning is a civilisation-altering tool that can be used for good or evil. It is vital, therefore, that we legislate to force AI to be used responsibly, because it is here to stay, whether you like it or not.
Are you excited or afraid by the prospect of an AI-driven world? Do you think a robot will steal your job? How far will AI go, and what comes ‘next’? Share your thoughts below!