Lord Reid calls for a Trustable Process for AI Software

On 19 November 2018, the Rt Hon Lord Reid of Cardowan, founder of ISRS, addressed the House of Lords debate on the Select Committee on Artificial Intelligence’s report ‘AI in the UK: ready, willing and able?’

LONDON — 19 November 2018 — ISRS founder Lord Reid of Cardowan spoke in the House of Lords debate on the Select Committee on Artificial Intelligence’s report ‘AI in the UK: ready, willing and able?’. In his speech, Lord Reid addressed the questions of ethics and responsibility arising from Artificial Intelligence (AI), arguing that they mark a watershed in how we think about and treat software. While AI entities may emulate humans, their underlying logic remains a function of their architecture. The speech highlighted the complexities of creating AI systems whose functioning is largely opaque and whose outputs are non-deterministic; that is, what they do under all circumstances cannot be predicted with certainty.

Lord Reid identified the risks posed by AI systems that can appear human-like, conduct conversations and even recognise emotions, allowing organisations to project human-like responsibility onto what are actually software agents. Bias and specification gaming are two important emergent properties of machine-learning systems: such systems may learn to act in ways that we consider biased, unethical or even criminal, yet software itself cannot be held legally responsible for its actions. He highlighted that the software industry today operates very differently from other industries critical to modern society, where audit processes encourage professional responsibility for the consequences of actions. Most software today is sold with an explicit disclaimer of fitness for purpose, and it is virtually impossible to answer basic questions: by whom, against what specification, why and when was this code generated, tested or deployed, and who in an organisation is responsible for the actions of that software in the event of a problem?

Lord Reid proposed as “trustable software” the concept of a clear chain of responsibility, linking an audit of the specifications, code, testing and function to responsible individuals, and concluded that AI needs a responsible human “parent” and a “trustable” process to introduce auditability, accountability and, ultimately, responsibility.

The concepts introduced in his speech are discussed in more detail in our white paper “Towards Trustable Software – A Systematic Approach To Establishing Trust In Software”, produced by The Institute for Strategy, Resilience & Security (ISRS) at University College London (UCL) in association with Codethink Ltd.

The speech can be viewed here.

###

ABOUT THE INSTITUTE FOR STRATEGY, RESILIENCE & SECURITY (ISRS) AT UCL

The Institute for Strategy, Resilience & Security (ISRS) (www.isrs.org.uk) at UCL serves as a pioneer and forum for next-generation thinking. Founded by the Rt Hon. Lord Reid of Cardowan, ISRS provides analysis and assessment of the major issues of resilience with respect to national and global infrastructure, and of the ability of governments, regulators and businesses to respond to them. The Institute advises industry and the public sector on the persistent challenges to their agility, stamina and capacity for strategic decision-making, so as to better face existential threats and disruptive innovation that are not addressed by conventional strategy and forecasting.

CONTACT INFORMATION

Institute for Strategy, Resilience & Security (ISRS)

University College London

Gower Street

London

WC1E 6BT

E-mail: info@isrs.org.uk
