Investment in the UK’s tech sector is booming despite the cloud of Brexit uncertainty, with Britain leading the way among European countries. UK tech firms attracted £3 billion in 2018, more than double that invested in 2017. Boosting the trend further, the government has announced a series of funding strategies for Artificial Intelligence (AI) and future tech. The government certainly feels that it has the policy ideas to help boost the sector further, but will these policies help or hinder private equity investment in tech?
£3 billion investment in 2018
2018 has been a big year for government policy on AI in particular, with a new Office for AI set up, a new “Sector Deal” agreed between government and AI stakeholders, and the launch of the Centre for Data Ethics and Innovation. These measures have committed the government to working with the tech and business sectors to help AI develop, meaning portfolio assets will have the opportunity to influence the direction of AI policy. Greg Clark MP, Secretary of State for Business, Energy and Industrial Strategy, has previously said that the government wants to “help our world-leading businesses exploit the potential of AI, encourage companies to engage and grasp the opportunities ahead.” All of this shows a willingness from the government to put the UK at the forefront of AI development and to invest in its growth. The issue with this policy is not one of government enthusiasm but of striking the tricky balance between the commercial opportunities for business – which the government wants to help deliver – and the ethical questions raised by the use of AI.
Regulation that enables innovation
Currently, the Office for AI is run jointly by the Department for Digital, Culture, Media and Sport (DCMS) and the Department for Business, Energy and Industrial Strategy (BEIS). Tellingly, when the House of Lords Artificial Intelligence Select Committee published a report on AI in April 2018, the government response came only from BEIS. On this basis, BEIS is likely to be the department in the driving seat on AI policymaking, meaning funding and policy priorities could be influenced by business and aligned with its commercial interests.
The recent cabinet reshuffles are also likely to have an effect. While all the government’s new policies on AI were being established, Matt Hancock MP, well known for his enthusiasm for tech and innovation, was Secretary of State for Digital, Culture, Media and Sport. In July he was replaced by Jeremy Wright MP, who has shown decidedly less interest in the subject. Across his parliamentary career, Wright has largely steered clear of all things technology-related, intervening only as Attorney General to remind social media companies that they were not above the law and to warn that international law must keep up with the rapid rate of technological development or risk cyberspace becoming “lawless.” While this is unlikely to alter the direction of government policy, it may temper ambitions within DCMS, leaving BEIS with the bulk of the de facto responsibility for AI. This gives rise to the potential for the government’s AI policy to become focused on business potential, rather than technical innovation, alienating developers. The respective remits of DCMS and BEIS theoretically mean that DCMS will focus on supporting innovation during the current development stage of AI, while BEIS focuses on future business applications. In the absence of attention from DCMS, the government risks becoming too focused on the future without doing enough to help the industry grow in the present. Measures like the Sector Deal will help businesses maintain lines of communication that may alleviate this issue, but leadership within DCMS is unlikely to be as enthusiastic as it once was.
“Sensationalist” or “clear and present danger”
Underneath all the investment announcements and sector deals, there is a concern that there will be a backlash against AI from some quarters. Greg Clark MP has spoken before about the “sensationalist” way AI is portrayed in the media and has suggested that the government needs to take the lead in marketing AI to the public. It is likely to have a difficult job on its hands: a 2017 report from PwC estimated that up to 30 per cent of the UK’s jobs could be under threat from AI, a figure that won’t go down well with workers in the diverse array of sectors likely to be affected. The government must perform a balancing act – supporting the growth of AI without adding to the perception that workers will be left without jobs as a result.
There have been accusations from Labour and the academic community that the government has failed to tackle the ethical consequences of AI. Shadow Culture Secretary Tom Watson has argued that the government must do more to protect those in jobs that could be replaced by AI, while a group of 26 academic and research institutions described unregulated AI as posing a “clear and present danger” to society. It is highly unlikely that the government will introduce heavy-handed regulation while AI is still in the development phase, and high funding levels will likely continue in the short term. However, investors should be conscious that regulation of the sector is all but inevitable once AI reaches the consumer market, whether in self-driving vehicles or new advertising algorithms. In an era of cyber-attacks, fake news and Big Data, the government will have to be prepared to mitigate the risks if it wants to reap the benefits of AI.
Investors will have to be conscious of the mood in government going forward, as this will remain an evolving and politically sensitive issue. The government has indicated that it will seek to regulate AI on a sector-by-sector basis, while regulators have been encouraged to adopt an approach that both protects the public and “enables innovation.” How regulators strike this balance, and whether it can be struck at all, will be crucial for the development of AI and investment in the sector.