
‘Control artificial intelligence before it controls us’

There are growing concerns that technologies are developing outside of any system of internationally agreed regulation or ethics

The all-consuming Brexit argument currently overwhelming political discourse is stifling crucial discussion of artificial intelligence technologies that could transform our lives for the better or even threaten our very existence. Policymakers have completely failed to engage with the debate over how to develop AI technologies in a controlled way that will deliver benefits for humanity rather than jeopardising its safety.

There are growing concerns that technologies are developing outside of any system of internationally agreed regulation or ethics to ensure that AI applications do good rather than cause harm. Influential scientists, such as Professor Stephen Hawking, have warned that the future for AI is unpredictable, with the potential for the technology to have a transformational impact, for good or for bad.

AI technologies are developing very rapidly and the potential is hugely exciting, but we must manage this science with great care. It is no exaggeration to say that uncontrolled AI could in future pose an existential threat to humanity, yet you would never guess that from the indifference to this debate shown by policymakers with little interest in anything other than the minutiae of the Brexit negotiations.

Here are five examples of questions that policymakers should now be engaging with:

How do we prepare our young people for work in the AI era?
AI technologies will transform the labour market, taking over any role involving repetitive or predictable tasks, but we continue to educate our young people as if the world of work is not about to change fundamentally. The skills young people will need to succeed in the future will be very different – creativity, in particular, will be a valuable commodity.

How do we manage the data on which AI will depend?
AI technologies depend on a constant stream of data from which they learn in order to perfect their decision making, but data is an increasingly regulated area. The challenge will be to balance data protection and security concerns against the need for a free flow of data to underpin AI development. The UK may even have an opportunity to secure competitive advantage here following Brexit, given the EU’s exacting data regulation.

Should we allow AI to kill?
Governments and military strategists around the world are already making substantial investments in AI-powered weapons systems that will reduce the need for human intervention in selecting targets and then hitting them. In effect, decisions over who to kill, how and when, will be delegated to a computer program.

How will we ensure AI is unable to exert its superiority over mankind?
Current AI technologies are task-specific – highly skilled at carrying out one role, but unable to learn to do other work. That will change over time, however, to the extent that AI machines capable of amassing vast amounts of generalised intelligence could eventually outpace mankind and assert their superiority through hostile acts against humanity.

How do we build international consensus on the regulation and ethics underpinning AI technologies?
While UK policymakers must engage with the AI debate, this will be a global phenomenon requiring collaboration and co-operation between governments. Many countries have reached similar conclusions about how to regulate developments in areas such as embryology; an equivalent discussion about AI is now crucial.

‘Not science fiction’
You might think that sophisticated AI with the potential to overwhelm humanity is the stuff of science fiction, but these technologies are developing at a very rapid pace and it’s crucial that we begin to discuss the implications right now. There are many positive aspects of AI, and we’re already seeing businesses harnessing these technologies to work more effectively, but there are dangers too.

Unfortunately, the UK’s political climate currently allows little room for discussion of anything other than Brexit; this is preventing our country from playing its part in the debate about how to develop robust governance structures around advances in AI technology.

That puts the UK in an unacceptable position: without engagement right now, we run the risk of missing out on many of the benefits that AI can deliver and of failing to counter the threats potentially posed by these technologies.

Haakon Overli is the co-founder of London-based Dawn Capital
