1 October 2018

Is artificial intelligence a force for good?

By John Ashmore

At a Tory conference set to be dominated by Brexit, it is refreshing to take in a discussion about the bigger picture – where human society is going in the coming decades.

Yesterday’s Conservative Home fringe event on artificial intelligence, titled “AI – a force for good?”, was a reminder that beyond questions about tariffs, trade deals, regulatory equivalence and customs checks, there are much deeper issues about the direction we are heading.

The debate on artificial intelligence is perhaps one of the most misleading and skewed parts of the public policy conversation. As Anthony Walker, chief executive of industry group Tech UK, pointed out: “The most common image used for an AI story is the Terminator”.

James Cameron’s films – now nearly three decades old – have done more than any other cultural product to colour the public perception of the perils of smart machines.

On top of that there’s a danger that AI gets thrown into a blancmange of trolls, hacking, fraud and misinformation that combine to make tech seem more menacing than liberating.

Part of the problem here is that AI is already so common, so integral to modern life, that we either don’t realise its benefits or simply take them for granted. From the health service to banking, online commerce and social media, machines are taking over tasks previously performed by humans and making decisions independently in all manner of areas.

As the chair of the Culture Select Committee, Damian Collins, told yesterday’s audience: “Even if we’re not familiar with how AI works, we already live in a world based on tech that is based on machines learning.”

The Downing St adviser turned think-tanker Will Tanner was also keen to stress that beyond the alarmism, AI already plays a crucial role in keeping us safe – the UK Border Force, for instance, uses machine learning to screen both goods and potentially dangerous people arriving in the UK.

None of this is to be blasé about the problems of handing over responsibility to computers, but it is to say that it is still up to us humans to shape the way the technology is used. We need only to look at China, with its nefarious-sounding social credit system, to see how in the wrong hands, tech can aid and abet repression.

The risk, as Tanner underlined, is that without sensible systems in place to monitor AI, it will “lose the legitimacy” it needs among the public.

The UK also needs to up its game – Singapore currently spends three times as much as we do on AI, while China spends 25 times more. Given how pervasive this kind of tech will be in the years to come, it seems a no-brainer for a post-industrial “knowledge economy” such as our own to be piling into this brave new world with gusto.

Provided, of course, that the political class – including those assembled at Conservative Party Conference in Birmingham this week – can one day give some thought to something other than Brexit.

John Ashmore is Deputy Editor of CapX.