The Need for Trustable Software
Trust is the basis upon which democracy, modern economics and societal stability have been built. Underpinning public and market confidence, trust in our political, legal and financial frameworks generates willingness to delegate control, be governed, accept taxation, invest, partner and respect ownership. By reducing the extent to which risk and arbitrage must be priced into transactions, trust has enabled efficient markets, confidence in banking and the economic expansion of civilisation. The emergence of cryptocurrencies and concepts such as the Internet of Agreements are based on distributed ledgers as next generation systems of trust.
All critical products and services upon which human health, safety and security depend have, of necessity, evolved recognisable processes to provide transparency and allow assessment of the degree to which that product or service is capable of being trusted. We will refer to these as trustable processes because they generate the ability to trust. These vary from industry to industry, but generally take the form of laws, regulations, standards and audit practices. They provide confidence that a pill may be swallowed, that a bridge may be crossed, that fire safety has been adequately provided for, that it is safe to board an aircraft - conversely, that the risk of using a product or service is worth accepting.
However, there exists an important exception – software. In an age of increasing reliance upon software and ever more complex, interconnected and interdependent systems, we must address the question: to what extent can we trust this software?
Unlike physical construction, software does not have to conform to a set of building standards; unlike the pharmaceutical industry, there are no notified bodies or regulators; unlike the legal profession, there is no single body upholding standards of practice; and unlike accounting, software is unaudited. There is currently no recognisable process, regulatory framework, set of standards or audit trail by which, at any stage, it is possible to assess the degree to which software is capable of being trusted.
Five key unknowns lie at the heart of risks posed by this lack of transparency:
Where does the code come from and who wrote it?
Does the code do what it is supposed to do and does it not do what it is not supposed to do?
How was the code built and tested prior to deployment?
Can we reproduce it exactly as it was originally generated?
Can we maintain it without breaking it?
Shifting from ad hoc to systemic trust
Until now, an ad hoc approach to trust in software has been tolerable. However, as virtually every aspect of human life comes to depend on software of ever increasing capability and complexity, a systematic approach is needed. It is hard to overstate the degree to which software plays a pivotal role in the critical infrastructure and vital functioning of modern human society.
The operation of our homes, workplaces, government, education system, food and energy production, communications, logistics, healthcare and financial systems are increasingly reliant upon software. The loss or denial of service for any reason, accidental or deliberate, has potential consequences that range from mere inconvenience and reputational damage, to financial loss and ultimately loss of life.
In engineering, software has become ubiquitous and inseparable from the mechanical systems it supports. The advent of the driverless car will complete an ongoing paradigm inversion: a vehicle currently considered primarily as a mechanical object supported by software will come to be viewed as primarily software encapsulated within mechanical components. Concerns of safety and security will shift from trust in the mechanical components, such as whether the brakes work, to trust in the software, such as whether a cyber attacker can take control of the vehicle, or what the consequences of software failure at high speed would be.
Operating in an environment with software supplied ‘as safe as possible’, as it currently is, but without an auditable process for verifying the provenance and testing of that code, is no longer appropriate. Without adopting a process by which the trustability of software can be determined, society will increasingly stumble from one problem to the next. Whether this is experienced as failure in use, increased cyberattacks, or financial loss, the result will inevitably lead to an erosion of public confidence with repercussions for governments and regulators.
A Critical Issue To Address Now
In the wake of the global financial crisis of 2007-2008, it became clear that the crisis was avoidable and was caused by widespread failures in regulation and supervision, poor management of accumulated systemic risk, lack of transparency, breakdown in accountability and ethics and failures to correctly price risk.
Analogously, despite the urgent unmet need, the software industry is inexorably drawn towards fuelling growth and will de facto ignore and resist this “push” towards a systematic approach to trust in software. An equivalent “pull” is required from governments and regulators: recognising the problem and encouraging the adoption of trustability as standard practice, before a series of events or a particular disaster forces the issue into the wider public domain and government is required to compel industry post hoc to address the issue of trust in software.
By 2020, at least 20 billion devices will be connected to the Internet, each more complex, interconnected and interdependent than ever before. Ignoring the systemic risks, lack of transparency, breakdown in accountability and failure of regulatory supervision holds the potential to accumulate a crisis as potent as any previously experienced.
Trustability: An Established Key to Trust
A trustable process can be defined as “auditable in such a way that, at any point in the process, one can assess the degree to which it can be trusted”. Although this term may be unfamiliar in everyday language, examples in use are immediately recognisable and underpin the existence of industries such as construction, financial services, healthcare, aerospace, nuclear power and public transportation, where safety and security are paramount, and the consequences of failure are substantial.
Financial auditing is an established process that evolved over centuries in response to the need for trust in finance. The handling of evidence in the criminal justice system also follows a strict process so that a jury can have confidence in the provenance of evidence and that it has not been tampered with.
The requirements and steps of these trustable processes may at first glance appear to have little in common. However, all such processes share a set of features that enable trustability: those providing a product, service or information are required to present detailed evidence on the provenance, manufacture, testing and validity of what is being supplied. The evidence required, its format, the standards for preparation and storage are specified by a regulator or agency, and it is then made available to a nominated body to inspect and audit to certify its accuracy.
Applying Trustability to Software
Today, software purchasing and use relies largely on a combination of reputational and experiential trust. Purchasers largely rely on brands, recommendations and experience in use. A reputable brand has value because it implies that others were satisfied. Reputational trust is achieved through recommendation or the collective opinion of others. Experiential trust derives from successful use of a product or service to the point where it is considered trustworthy, regardless of other evidence.
While issues with trust in software have long been recognised, the default approach of the industry has been to focus on improving quality and trustworthiness through better code, new programming languages, greater attention to bugs, and more frequent and improved security patches. Though logical for providers, this is a subjective and non-systemic response.
Various established approaches have attempted to create arbitrary standards for trusted and trustworthy software, but these are application-specific and apply to systems in a particular state of delivery. A holistic solution is required, which provides for a far higher standard of evidence of the whole process by which a system is built, operated and maintained to stated requirements and standards agreed at the start of the project, and adapted as the requirements change.
Supporting Learning and Resilience
Trustable processes are not infallible, rather their efficacy derives from strongly incentivising participants in the value chain to act professionally and responsibly – or suffer sanction. It is the combination of the need to provide proof that the process is being followed, the provision of key data to interested and independent parties and the subsequent auditing of that data that encourages integrity, and, through compliance, evolves trust towards the standards required for society’s needs.
As risks can never be totally eliminated, a trustable process needs to maintain confidence, even when the process has not delivered the desired result, by providing resilience to failure. Resilience, in the case of a trustable process, is therefore not just about a government, regulator, company or other body responding to an individual event or disaster but the ability for the system to respond to that event systematically. If there is a disaster, the trustable process maintains that trust by providing the relevant authorities with data and documentation to help investigate the root cause. Once the root cause is known, action can be taken to eliminate or reduce the same potential risk in similar conditions. Trustable processes give people confidence in a product or service even in the aftermath of a disaster.
Towards Trustable Software
The proposed approach of trustable software described here adds transparency to the design, development and testing process for software code, and generates and collects together assurances on each piece of software. Snapshots of software at key points in its development are accompanied by a linked immutable audit log containing key information about the process by which the software has been produced, installed and maintained.
A downstream party relies on the producing party to capture the evidence correctly, using chained hashes or “blockchain”, to ensure that it is computationally infeasible to alter the code-log relationship. This metadata can be made available to any third party, improving transparency and enabling downstream customers to assess the degree to which software can be trusted.
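The chained-hash mechanism described above can be sketched as a minimal append-only audit log, in which each entry commits to the hash of its predecessor, so that altering any past entry invalidates every subsequent hash. This is an illustrative sketch only, assuming a simple JSON entry format and SHA-256; it is not a specification of the proposed trustable software process.

```python
import hashlib
import json

GENESIS = "0" * 64  # placeholder "previous hash" for the first entry

def entry_hash(body):
    # Deterministic hash of an entry's contents
    return hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()

class AuditLog:
    """Append-only log; each entry includes the previous entry's hash,
    so any tampering with earlier entries is detectable on verification."""

    def __init__(self):
        self.entries = []

    def append(self, event, artefact_digest):
        # event: process step, e.g. "build", "test", "deploy"
        # artefact_digest: hash of the software snapshot at this step
        prev = self.entries[-1]["hash"] if self.entries else GENESIS
        body = {"event": event, "artefact": artefact_digest, "prev": prev}
        entry = dict(body, hash=entry_hash(body))
        self.entries.append(entry)
        return entry

    def verify(self):
        # Recompute the whole chain; any altered entry breaks it
        prev = GENESIS
        for e in self.entries:
            body = {"event": e["event"], "artefact": e["artefact"], "prev": e["prev"]}
            if e["prev"] != prev or e["hash"] != entry_hash(body):
                return False
            prev = e["hash"]
        return True
```

A downstream auditor who holds only the log can then recompute the chain: if `verify()` succeeds, the recorded sequence of build, test and deployment events has not been altered since it was written.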
In turn this places pressure on all of the entities in the design, development and deployment chain to act according to standards and leads to trust in exactly the same way that it does in construction, pharmaceuticals and financial reporting.
How a trustable software process would work in practice needs to be explored and discussed further, with a view to generating a reference implementation. The generic trustable software process that we present in this paper is a first step in this direction.
We invite comment and feedback from all stakeholder parties with a view towards a robust debate on the role that trustable software may play.
Please visit our website to submit your comments and to contact us.