AI in the UK: ready, willing and able?
Chapter 1: Introducing artificial intelligence
“We propose that a two-month, 10-man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer”.17
1. So went the proposal for the first academic workshop on the newly minted field of ‘artificial intelligence’ in September 1955. The challenge of replicating human intelligence in a machine may not have been solved over the course of one New Hampshire summer, but 63 years later, aspects of that dream are beginning to fulfil their promise.
2. Over the years, this quest has produced its fair share of successes and failures, but the past decade has seen another wave of excitement, centred mostly on the use of artificial neural networks and deep learning algorithms. These techniques have allowed machines to consistently replicate human abilities, such as the visual recognition of objects and faces, which had hitherto proved resistant to conventional computing. The prospect of a wide variety of human abilities soon being replicable by machines has in turn generated excitement and anxiety in equal measure, as predictions about the impact of artificial intelligence (AI) have rapidly multiplied.
3. Every sentence in the example at the start of this report described a world driven and mediated by AI, featuring applications which are in use today or will be available imminently. AI has moved out of the realms of science fiction and into our everyday lives, working away unnoticed behind the scenes. Yet just as electricity, or steam power before it, started out with particular, often somewhat niche, uses before gradually becoming fundamental to almost all aspects of economic and social activity, AI may well grow to become a pervasive technology which underpins our daily existence. Electrification had many consequences: unprecedented opportunities for economic development, new risks of injury and death by electrocution, and debates over models of control, ownership and access, to name just a few. We can expect a similar process as AI technology continues to spread through our societies. This report will examine some of the implications, both good and bad, of what could prove to be a defining technological shift of our age.
Our inquiry
4. The House of Lords appointed this Committee “to consider the economic, ethical and social implications of advances in artificial intelligence” on 29 June 2017.18 From the outset of this inquiry, we have asked ourselves, and our witnesses, five key questions:
How does AI affect people in their everyday lives, and how is this likely to change?
What are the potential opportunities presented by artificial intelligence for the United Kingdom? How can these be realised?
What are the possible risks and implications of artificial intelligence? How can these be avoided?
How should the public be engaged with, in a responsible manner, about AI?
What are the ethical issues presented by the development and use of artificial intelligence?
It is the answers to these questions, and others, on which we have based this report, our conclusions and our recommendations.
5. We issued our call for evidence on 19 July 2017, and received 223 pieces of written evidence in response. We took oral evidence from 57 witnesses during 22 sessions held between October and December 2017. We are grateful to all who contributed their time and expertise. The witnesses are listed in Appendix 2, and the call for evidence is reproduced in Appendix 3. The evidence received is published online.
6. Alongside our oral evidence programme, we conducted a number of visits. On 13 September, we visited DeepMind, an artificial intelligence company based in King’s Cross, London. On 16 November, we visited Cambridge and met with the staff of Microsoft Research Lab Cambridge, and with two start-ups working with AI, Prowler.io and Healx. We also met academics at the Leverhulme Centre for the Future of Intelligence, an interdisciplinary community of researchers studying the opportunities and risks of AI over coming decades. On 20 November, we visited the BBC to meet staff of the Blue Room, a media technology demonstration team exploring ways audiences find, consume, create and interact with content. Finally, on 7 December, the Committee visited techUK to participate in a roundtable discussion with companies working with artificial intelligence in the United Kingdom. We are grateful to all concerned. Notes of all of these visits are contained in the appendices of this report.
7. In the course of our inquiry, we were trained by ASI Data Science on how to program a neural network, and we received a private briefing from the National Cyber Security Centre. Our Chairman met, informally, with Carolyn Nguyen, Director of Technology Policy for Microsoft.
8. The members of the Committee who carried out this inquiry are listed in Appendix 1, which shows our declared interests. Throughout the course of our inquiry we have been fortunate to have had the assistance of Dr Mateja Jamnik as our specialist adviser. We also engaged Angélica Agredo Montealegre, a PhD student at King’s College London, as another specialist adviser for part of the inquiry. Angélica was commissioned to research historic Government policy on artificial intelligence in the UK. This work has informed our recommendations, and we have published it as an appendix to this report. We are most grateful to them for their contribution to our work.
Defining artificial intelligence
9. There is no widely accepted definition of artificial intelligence.19 Respondents and witnesses provided dozens of different definitions. The word cloud (Figure 1) illustrates the diversity of the definitions we received, and shows the prominence of a few key words:
Figure 1: Definitions of artificial intelligence
10. The debate around exactly what is, and is not, artificial intelligence would merit a study of its own. For practical purposes we have adopted the definition used by the Government in its Industrial Strategy White Paper, which defined AI as:
“Technologies with the ability to perform tasks that would otherwise require human intelligence, such as visual perception, speech recognition, and language translation”.20
11. Our one addition to this definition is that AI systems today usually have the capacity to learn or adapt to new experiences or stimuli. With this caveat, it is this definition we have in mind when we discuss AI in the remainder of this report.
12. Many technical terms are used in this field, the most common of which are summarised in Box 1.
Box 1: Common terms used in artificial intelligence
Algorithm
A series of instructions for performing a calculation or solving a problem, especially with a computer. Algorithms form the basis for everything a computer can do, and are therefore a fundamental aspect of all AI systems.
Expert system
A computer system that mimics the decision-making ability of a human expert by following pre-programmed rules, such as ‘if this occurs, then do that’. These systems fuelled much of the earlier excitement surrounding AI in the 1980s, but have since become less fashionable, particularly with the rise of neural networks. (A short illustrative sketch of such a system follows Figure 2 below.)
Machine learning
One particular form of AI, which gives computers the ability to learn from and improve with experience, without being explicitly programmed. When provided with sufficient data, a machine learning algorithm can learn to make predictions or solve problems, such as identifying objects in pictures or winning at particular games.
Neural network
Also known as an artificial neural network, this is a type of machine learning loosely inspired by the structure of the human brain. A neural network is composed of simple processing nodes, or ‘artificial neurons’, which are connected to one another in layers. Each node receives data from several nodes ‘above’ it, and passes data to several nodes ‘below’ it. A node attaches a ‘weight’ to each input it receives and combines the weighted inputs into a single value; if that value does not pass a certain threshold, no signal is passed on to the next layer of nodes. The weights and thresholds of the nodes are adjusted as the algorithm is trained, until similar inputs reliably produce similar outputs. (A short illustrative sketch of such a network also follows Figure 2 below.)
Deep learning
A more recent variation of neural networks, which uses many layers of artificial neurons to solve more difficult problems. Its popularity as a technique increased significantly from the mid-2000s onwards, and it lies behind much of the wider interest in AI today. It is often used to classify information from images, text or sound (see Figure 2).
Figure 2: Deep neural networks
When data is fed into a deep neural network, each artificial neuron (labelled as “1” or “0” below) transmits a signal to linked neurons in the next level, which in turn are likely to fire if multiple signals are received. In the case of image recognition, each layer usually learns to focus on a particular aspect of the picture, and builds up understanding level by level.
Source: ‘New Theory cracks open the black box of deep neural networks’, Wired (10 August 2017): https://www.wired.com/story/new-theory-deep-learning/ [accessed 8 March 2018]
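By way of illustration, the two short Python sketches below make the distinction in Box 1 between rule-based and learning-based approaches more concrete. Both are minimal sketches of our own devising rather than excerpts from any deployed system: the rules in the first, and the weights and thresholds in the second, are invented purely for clarity.

```python
# A toy 'expert system' of the kind described in Box 1: the decision-making
# of a (hypothetical) technician is captured as hand-written if-then rules,
# applied in order. The rules are invented for illustration.
def diagnose(symptoms):
    if "no_power" in symptoms:
        return "Check the mains lead and fuse."
    if "overheating" in symptoms and "fan_silent" in symptoms:
        return "The cooling fan has probably failed; replace it."
    if "overheating" in symptoms:
        return "Clear the air vents of dust."
    return "No rule matches; refer to a human expert."

print(diagnose({"overheating", "fan_silent"}))
```

A neural network, by contrast, is not given its rules explicitly. The sketch below wires up, by hand, a tiny two-layer network of the kind described in Box 1; in a real system the weights and thresholds would be adjusted automatically during training rather than chosen in advance, and deep learning stacks many more such layers.

```python
# A minimal feedforward neural network, mirroring Box 1: each node applies
# weights to its inputs and passes a signal on only if a threshold is met.
# The weights and thresholds below are hand-picked to compute XOR, a simple
# function a single neuron cannot represent; training would normally set them.
def neuron(inputs, weights, threshold):
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def layer(inputs, weight_rows, thresholds):
    return [neuron(inputs, w, t) for w, t in zip(weight_rows, thresholds)]

hidden_weights = [[1, 1], [-1, -1]]   # two hidden neurons: OR-like and NAND-like
hidden_thresholds = [0.5, -1.5]
output_weights = [[1, 1]]             # one output neuron: AND of the hidden pair
output_thresholds = [1.5]

for x in ([0, 0], [0, 1], [1, 0], [1, 1]):
    hidden = layer(x, hidden_weights, hidden_thresholds)
    print(x, "->", layer(hidden, output_weights, output_thresholds)[0])
```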
13. We have also chosen to refer to the AI development sector, rather than an AI sector, on the grounds that the design, development and marketing of AI systems is currently mostly the work of a particular sub-group within the technology sector, while multiple sectors of the economy are already deploying AI technology, and many more are likely to join them in the near future.
Categories of artificial intelligence
14. Artificial intelligence can be viewed as ‘general’ or ‘narrow’ in scope. Artificial general intelligence refers to a machine with broad cognitive abilities, able to perform, or at least convincingly simulate, all of the intellectual tasks of which a human being is capable, and potentially to surpass them—it would be essentially indistinguishable from a human being in intellectual terms.
15. Narrow AI systems perform specific tasks which would require intelligence in a human being, and may even surpass human abilities in these areas. However, such systems are limited in the range of tasks they can perform.
16. In this report, when we refer to artificial intelligence we are referring to narrow AI systems unless explicitly stated otherwise. It is these systems which have seen so much progress in recent years, and which are likely to have the greatest impact on our lives. By contrast, there has been little to no progress in the development of artificial general intelligence.21
17. The terms ‘machine learning’ and ‘artificial intelligence’ are also sometimes conflated or confused, but machine learning is in fact a particular type of artificial intelligence which is especially dominant within the field today. We are aware that many computer scientists today prefer to use ‘machine learning’ given its greater precision and lesser tendency to evoke misleading public perceptions.
18. We have intentionally chosen to refer for the most part to artificial intelligence as a whole, rather than machine learning. This is partly because AI, for all its difficult baggage, is a far more familiar term to the public, but mostly because AI is a broad field. While machine learning is currently its best-represented and most successful branch, as the following historical overview illustrates, this has not always been the case, and may well not be the case in the future.
19. Many of the issues we will deal with are associated with the implications of machines which can simulate aspects of human intelligence. While in some cases the exact mechanisms by which they do this are relevant, in many cases they are not, and we believe it important to retain a broad outlook on the societal impact of AI as a whole.
History
20. The field of artificial intelligence has been inextricably linked to the rise of digital computing, and many pioneers of the latter, such as Alan Turing and John McCarthy, were also closely involved with conceptualising and shaping the former. 1950 saw the publication of Turing’s seminal paper, Computing Machinery and Intelligence, which helped to formalise the concept of intelligent machines and embed them within the rapidly growing field of digital computing.22
21. Indeed, for better or worse, many of the concepts and terminology we still use to describe the field were bequeathed to us in this period. This included the concept of a ‘Turing Test’ to determine whether a machine has achieved ‘true’ artificial intelligence, and even the term ‘artificial intelligence’ itself. It was commonly thought at this time that the most promising way to achieve these ends was to mimic nature, which led to the first experiments with artificial ‘neural networks’ designed to very crudely approximate the networks of neurons in the human brain.
22. In the 1960s the field moved beyond the relatively small number of pioneers, mostly based in the United States and Britain, and the first major academic centres for AI research were established, at the Massachusetts Institute of Technology (MIT), Carnegie Mellon University, Stanford, and Edinburgh University. The period saw considerable enthusiasm for AI and its potential applications, with claims by some AI experts that the challenge of machine intelligence would be solved “within a generation”.23
23. By the 1970s these exuberant claims began to meet growing scepticism on both sides of the Atlantic. In the UK, discord within the AI research communities at Edinburgh and Sussex Universities prompted the Science Research Council to launch an inquiry, headed by Professor Sir James Lighthill, into the state of the field. The Lighthill Report of 1973, while supportive of AI research related to automation and computer simulations of psychological and neurological processes, was deeply critical of much basic research into AI, and doubtful that general-purpose AI would be achievable within the twentieth century, if at all.24 It is not in fact clear whether, as some have claimed, this led to a scaling back of research funding for AI, but scepticism towards and within the field certainly grew during this period, which many technologists now refer to as the first ‘AI winter’.25 Some AI researchers criticised the manner in which the inquiry was conducted, and especially the fact that Lighthill was not himself a specialist in AI, but the United States followed a similar trajectory, independently of Lighthill’s scepticism.
24. Despite these setbacks, research into AI continued, and by the 1980s some of it was starting to produce commercially viable applications. Among the first of these were ‘expert systems’, which sought to record and program into machines the rules and processes used by specialists in particular fields, and so produce software which could automate some forms of expert decision making, such as determining the correct dosages when prescribing antibiotics.26 By one contemporary estimate, at the end of the decade over half of Fortune 500 companies were involved in either developing or maintaining expert systems.27 As primarily rule-based systems, these were mostly quite different from the machine learning systems of today, with little to no capacity to ‘learn’ new functionality.
25. The 1980s also saw the UK Government renew its troubled relationship with AI, with the ambitious Alvey Programme launched in 1983. Envisaged as a response to major state-sponsored computing research and development (R&D) projects elsewhere in the world, in particular Japan’s Fifth Generation project, it sought to bring together researchers from the Government, universities and industry to investigate a range of issues, including AI. Despite overall funding of £350 million (equivalent to over £1 billion today) over four years, with £200 million coming directly from the Government, it failed in its central objective of improving the competitiveness of UK information technology (IT) businesses. The director of the programme, Brian Oakley, later claimed that too much emphasis was placed on pure R&D, at the expense of the development work required to build viable products.28
26. Disillusionment with the Alvey Programme coincided with a second global ‘AI winter’ at the end of the 1980s. Enthusiasm for expert systems waned as their limitations—high costs, requirements for frequent and time-consuming updates, and a tendency to become less useful and accurate as more rules were added—became apparent.29 The Defense Advanced Research Projects Agency (DARPA), one of the main sources of government funding in the USA, decreased AI funding by a third between 1987 and 1989, while investment from the private sector also decreased.30 The development of the internet and the World Wide Web also began diverting attention from AI R&D, as they offered alternative models for organising, processing and disseminating information to those of AI systems.31
27. Even as the excitement about, and investment in, expert systems and AI R&D more generally diminished, the late 1980s and 1990s saw AI applied to increasingly diverse functions, including predicting changes in the stock markets, mining large corporate databases and developing visual processing systems such as automatic number plate recognition cameras.32 Many of these new applications made less use of the rules- and logic-based ‘symbolic AI’ approaches of previous decades. Instead, they deployed alternative machine learning approaches, which looked for statistical patterns and correlations in increasingly large datasets, paving the way for more recent developments in narrow AI systems.
28. Government support in the UK also continued, albeit in a drastically downscaled format, with the Department of Trade and Industry’s Neural Computing Technology Transfer Programme, which began in 1993 with a budget of £5.75 million, spread over six years. Intended to raise awareness of neural networks (by then rebranded as ‘neural computing’) in business, the project encompassed an awareness campaign and a demonstrator programme, which established seven clubs, managed and delivered by contracted consortia.33 The subsequent evaluation claimed that “after 18 months 3,500 companies who had participated in awareness events could name an application area within their company where neural networks could be applied and 1,000 had taken some action to introduce applications”.34 However, while the programme was thought to have provided “an important benefit in allowing companies to test neural networks in a low cost manner”, it failed to produce a legacy of subsidy-free investment in the technology.
29. The current wave of interest in AI was largely driven by developments in neural networks in the mid-2000s, when a team led by Geoffrey Hinton, a British researcher based at the University of Toronto, began to demonstrate the power of ‘deep learning’ neural networks. The team showed that these networks, which could automatically process unlabelled data, could be more effective at a wide range of tasks, such as image and speech recognition, than the more conventional algorithms then in use.
30. In the years since, developments in AI in general, and deep learning in particular, have progressed rapidly. This is largely due to three factors: the growing availability of data with which to train AI systems, the continued growth in available computer processing power, and the development of more sophisticated algorithms and techniques. The widespread availability of cloud computing platforms, from Alibaba, Amazon and Microsoft in particular, has also helped by allowing clients to tap remotely into huge stores of computing power with relative ease, and without the need to maintain their own hardware. Finally, the growth of open source development platforms for AI—in particular Google’s TensorFlow, a library of components for machine learning—has reduced barriers to entry for researchers and commercial entities alike.
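As an indication of how far such platforms have lowered barriers to entry, the short sketch below shows how a small deep neural network can be declared in a few lines using TensorFlow’s high-level Keras interface. It is a generic, illustrative example: the layer sizes, the 28 by 28 pixel input (a common benchmark image format) and the ten output classes are assumptions of our own, not drawn from our evidence.

```python
# An illustrative sketch of defining a small deep neural network with
# TensorFlow's Keras API. The architecture is an arbitrary assumption,
# chosen to resemble a typical image-classification tutorial.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),    # unroll each image into a vector
    tf.keras.layers.Dense(128, activation="relu"),    # first hidden layer
    tf.keras.layers.Dense(64, activation="relu"),     # second hidden layer
    tf.keras.layers.Dense(10, activation="softmax"),  # one output per class
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Given suitable images and labels, training is then a single call:
# model.fit(train_images, train_labels, epochs=5)
```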
31. With this historical perspective in mind, there are three themes which we have considered throughout this report. Firstly, developments in the field of AI have been strongly characterised by boom and bust cycles, in which excitement and progress have been followed by disappointment and disillusionment as researchers proved unable to deliver on the full scale of their promises quickly enough: the ‘AI winters’ described above. While we believe, given the extent to which AI is now being used in actual commercial products and services, that interest and investment are likely to persist this time round, we are also aware that the present excitement around AI is unlikely to be sustained indefinitely, and a note of caution would therefore be wise. Secondly, while AI research was dominated by national governments and universities from the 1950s, more recently this mantle has passed to the private sector. While this has seen investment rise to unprecedented levels, it also raises questions about the power and influence of the large technology companies—increasingly referred to as ‘Big Tech’—which are less fettered by the requirements for democratic accountability. We will return to this subject later in the report.35 Thirdly, although the United States and the UK were early pioneers in AI research, China has invested heavily in the field, and aims to eclipse the efforts of other nations in the coming decades.
Recent reports
32. Ours is not the only recent report focusing on the impact of artificial intelligence. The following reports are those we have sought to build upon, and have borne in mind throughout the course of our inquiry.
Robotics and artificial intelligence
33. The House of Commons Science and Technology Committee published this report on 12 October 2016. The report recommended a greater emphasis on developing digital skills for the workforce of the future, and concluded that although it was too soon to set down specific regulations for the use of artificial intelligence, a standing commission on artificial intelligence should be created to examine its implications and establish principles to govern its development and applications.36
Machine learning: the power and promise of computers that learn by example
34. This report, published on 25 April 2017 by the Royal Society, focused on machine learning and was overseen by a working group drawn from academia and industry. The report outlined the major opportunities and challenges associated with current trends in machine learning.37 It made a number of specific recommendations, suggesting that the Government should do more to promote open data standards, improve education and training in machine learning methods at all levels of education, ensure that immigration and industrial strategy policies align with the needs of the UK AI development sector, and facilitate public dialogues on the opportunities and challenges of machine learning. Overall, the report argued in favour of sector-specific approaches to regulating AI, rather than a more overarching, cross-sector approach.
Data Management and Use: Governance in the 21st century
35. The British Academy and the Royal Society published this joint report on 28 June 2017. The report focused on the management of data, and concluded that, while existing frameworks provide much of what is sufficient for today, a new framework will need to be developed to cope with future challenges. The report covered all aspects of data management and use, and has been relevant to our work because of the importance of the management and use of the data which feeds, and is generated by, artificial intelligence. It recommended the establishment of a new body to steward the data governance landscape as a whole.
Growing the artificial intelligence industry in the UK
36. Professor Dame Wendy Hall, Regius Professor of Computer Science at the University of Southampton, and Dr Jérôme Pesenti, then CEO of BenevolentAI, chaired this review, staffed by civil servants, as part of the Government’s Digital Strategy. The review, announced in March 2017, published its report on 15 October 2017. The Hall-Pesenti Review made 18 recommendations on how to make the UK the best place in the world for businesses developing AI. These recommendations focused on skills, increasing adoption of AI, ensuring data is used properly and securely, and building the UK’s AI research capacity.
Industrial Strategy: Building a Britain fit for the future
37. This White Paper, published on 27 November 2017, set out the Government’s long-term plan to boost productivity in the UK and strengthen the economy. The Strategy outlined four “Grand Challenges” for the UK, one of which was to put the country at the forefront of the artificial intelligence and data revolution. The Strategy also served as a response to the Hall-Pesenti Review, and as such it detailed a number of policies related to AI and announced the establishment of a range of new institutions, which we discuss later in this report.
Impact on politics
38. Artificial intelligence will change the way we all relate to the world around us. The questions AI raises cut across the ideological dividing lines which have defined politics in the UK. The economy and society are changing, and all parties must stand ready to embrace and direct that change. As a cross-party committee, we recognise that if the UK is to meet its potential and lead the way in shaping the future for society, AI policy must be committed to for the long term, agreed by consensus and informed by views from all sides. It is in this spirit that we have made our report.
17 John McCarthy, Marvin Minsky, Nathaniel Rochester and Claude Shannon, A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence (31 August 1955), p 1: http://raysolomonoff.com/dartmouth/boxa/dart564props.pdf [accessed 5 February 2018]
19 This is perhaps unsurprising given the absence of any widely accepted definition of organic intelligence, against which AI is normally compared.
20 Department for Business, Energy and Industrial Strategy, Industrial Strategy: Building a Britain fit for the future (November 2017), p 37: https://www.gov.uk/government/uploads/system/uploads/attachment_data/file/664563/industrial-strategy-white-paper-web-ready-version.pdf [accessed 20 March 2018]
22 A.M. Turing, ‘Computing Machinery and Intelligence’ in Mind, vol. 59 (1 October 1950), pp 433–460: https://www.cs.ox.ac.uk/activities/ieg/e-library/sources/t_article.pdf [accessed 5 February 2018]
23 This particular claim came from Marvin Minsky, an early pioneer of AI and a noted sceptic of neural networks. However, Luke Muehlhauser of the Open Philanthropy Project has argued convincingly that some historical accounts have exaggerated or misinterpreted the hyperbole of this period, and that many computer scientists made far more moderate predictions. Luke Muehlhauser, ‘What should we learn from past AI forecasts?’, Open Philanthropy Project (September 2016): https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/what-should-we-learn-past-ai-forecasts [accessed 5 February 2018]
24 Science Research Council, Artificial Intelligence: A paper symposium (1973): http://www.chilton-computing.org.uk/inf/literature/reports/lighthill_report/contents.htm [accessed 5 February 2018]
25 As Muehlhauser has argued, a few AI researchers even remembered this period as a “boomtime” for AI. Luke Muehlhauser, ‘What should we learn from past AI forecasts?’, Open Philanthropy Project (September 2016): https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/what-should-we-learn-past-ai-forecasts [accessed 5 February 2018]
26 Luke Dormehl, Thinking Machines: The Quest for Artificial Intelligence—and where it’s taking us next, 1st edition (New York: TarcherPerigee, an imprint of Penguin Random House LLC, 2017), p 30
27 Beth Enslow, ‘The Payoff from Expert Systems’, Across the Board (January/February 1989), p 56: https://stacks.stanford.edu/file/druid:sb599zp1950/sb599zp1950.pdf [accessed 1 March 2018]
28 Angeli Mehta, ‘Ailing after Alvey: The Alvey programme was Britain’s big chance to compete in information technology - Brian Oakley, a former director of the Alvey, reflects on what went wrong’, New Scientist (7 July 1990): https://www.newscientist.com/article/mg12717242-300-ailing-after-alvey-the-alvey-programme-was-britains-big-chance-to-compete-in-information-technology-brian-oakley-a-former-director-of-alvey-reflects-on-what-went-wrong/ [accessed 31 January 2018]
29 In short, the large quantities of tacit knowledge which many professions and experts relied on to do their jobs could overwhelm these rule-based systems. Thinking Machines: The Quest for Artificial Intelligence—and where it’s taking us next, p 32
30 Thinking Machines: The Quest for Artificial Intelligence—and where it’s taking us next, p 33. This waning enthusiasm was not universal, though: the European ESPRIT project and Japan’s Fifth Generation project continued well into the 1990s.
31 Richard Susskind and Daniel Susskind, The Future of the Professions: How technology will transform the work of human experts (Oxford: Oxford University Press, 2015)
32 Pedro Domingos, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World (New York: Basic Books, 2015), p 21
33 The National Archives, ‘Neural Computing Programme’: http://webarchive.nationalarchives.gov.uk/20040117043522/http://www2.dti.gov.uk/iese/aurep38b.html [accessed 31 January 2018]
34 Ibid.
35 See Chapter 3.
36 Science and Technology Committee, Robotics and artificial intelligence (Fifth Report, Session 2016–17, HC 145)
37 Royal Society, Machine learning: the power and promise of computers that learn by example (April 2017): https://royalsociety.org/~/media/policy/projects/machine-learning/publications/machine-learning-report.pdf [accessed 5 February 2018]