A Hippocratic Oath for Software Engineers?
Imagine if every time we switched on a kettle to make a cup of tea, we had to ask ourselves whether something inside it was unsafe. Imagine if it were our responsibility to check that the kettle was still safe and, if not, to source and replace the unsafe component. Imagine if, every time we plugged in the kettle, we had to question whether it was safe to use, because we couldn’t be sure the electrician who wired up the sockets had done so safely. Of course, we don’t think this way when we switch on our kettle, because the electrician who wired our house was qualified and trained to do so in line with regulations. Likewise, the kettle, its plug and its lead all have to meet certain standards before they can be sold to consumers for home use. These standards and regulations have been essential in making electricity the utility it is today.
But this is the situation we find ourselves in today with software-enabled technology: can it be acceptable that the devices most people think of as utilities, such as their laptops, tablets and smartphones, are the consumer’s responsibility to secure?
If there ever was a golden era when computer users didn’t have to think about security, it didn’t last very long. In his book “The Cuckoo’s Egg”, Clifford Stoll explains how, in the early days of networked computers, scientists and academics (who may have imagined they were living in just such a golden era) fell victim to a sophisticated foreign intelligence operation, leading to the establishment of the first international bodies to co-ordinate cyber incident response (CERT/CC and, eventually, FIRST).
Software is increasingly built into everything we use, and the drive to make everything “smart” has led to explosive growth in demand for people with software development skills. That has produced remarkable innovation and disruption, but what hasn’t kept pace are efforts to professionalise software engineering. As we rush headlong into the era of the Internet of Things (IoT), where seemingly everything will be connected, it becomes imperative that the people writing the software that goes into those devices develop it to be secure by design.
This means that the tools software engineers use need to make it easy to write secure code and difficult to write insecure code. It means that software engineers need to be taught how to develop systems and software securely. It means that companies need incentives to build systems designed to be secure, and penalties for shipping systems that are insecure. In the era of artificial intelligence and autonomous systems, is there an argument that software engineers should take some kind of Hippocratic oath? Can we imagine, in the not-too-distant future, engineers being struck off for their role in building systems that caused harm? Can regulations and standards be developed which hold companies accountable for the security of their products without stifling innovation?
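As a minimal sketch of what that first point means in practice, consider how Python’s standard sqlite3 module makes the secure path easier than the insecure one: a parameterized query treats user input purely as data, while a hand-built query string invites SQL injection. (This example is illustrative only; the names and data are made up.)

```python
import sqlite3

# In-memory database with one row, for demonstration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

hostile_input = "nobody' OR '1'='1"

# Insecure: concatenating input into SQL lets the input rewrite the query.
rows = conn.execute(
    "SELECT * FROM users WHERE name = '" + hostile_input + "'"
).fetchall()
print(rows)  # [('alice', 'admin')] -- the injection returned every row

# Secure by design: a parameterized query treats the input purely as data.
rows = conn.execute(
    "SELECT * FROM users WHERE name = ?", (hostile_input,)
).fetchall()
print(rows)  # [] -- no user is literally named "nobody' OR '1'='1"
```

Tools and languages that make the second pattern the default, and the first awkward or impossible to write, are what “secure by design” looks like in practice.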
In the near future, when everything is connected, smart devices will no longer be thought of as new or cool, but simply as utilities. For them to be utilities, the consumer shouldn’t have to concern themselves with changing, remembering or even using passwords, or with patching, securing or manually updating their devices; they should simply be able to enjoy using them safely and securely, in the knowledge that they have been built by professionals.