Securing the building blocks of embedded software
Dr. Dimitrios Glynos, Director of Product Security Services at CENSUS, highlights recent developments in securing the building blocks of embedded systems. The use of embedded systems is growing rapidly, with the “Internet of Things” family of devices becoming present in almost every part of everyday life. However, the connected and critical nature of some of these devices has made them an attractive target for attackers. Improving the cybersecurity of embedded devices and their deployments is a non-trivial task that requires measures across the product lifecycle and supply chain.
Embedded systems are special-purpose systems that cover a wide range of applications, from home electronics and industrial control systems to medical devices and avionics. The remote management and telemetry features of the so-called “Internet of Things” family of embedded devices have made them quite popular, and their deployment is now almost ubiquitous. From a security standpoint, embedded software is not that different from software found in other domains. However, the criticality of its operation, its exposure on public networks, and its security limitations make it a very attractive target for attackers.
The embedded software stack
Embedded systems may execute applications on top of an Operating System (OS) or as part of “bare metal” firmware. Operating Systems such as Android make it possible for vendors to ship embedded solutions faster. In contrast, “bare metal” firmware draws no clear distinction between operating system code that drives the hardware and application code; in these cases, vendors develop special-purpose software that both drives the hardware and fulfills the desired functionality.
The code placed in embedded firmware can be divided into two groups: code maintained by the vendor and code maintained by third parties. In many cases, embedded developers build upon a Software Development Kit (SDK) provided by a chip vendor to control chip functionalities. It is also very common for developers to reuse ready-made code to fulfill standard tasks.
All software dependencies mentioned up to this point may end up contributing code to the embedded firmware. In the same way, these software projects may also introduce cybersecurity issues to the embedded system.
While fixing embedded design and hardware component issues would require the redesign and replacement of a product, software issues could in theory be addressed more easily through firmware updates. However, many vendors do not provide this capability, or the deployment environment of the embedded system may make updates difficult (e.g. in industrial control systems). To make things worse, some update mechanisms are themselves vulnerable, due to software implementation errors or operational errors (as the recent “SolarWinds” supply chain attack demonstrated).
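Real-world update mechanisms rely on digital signatures to authenticate an incoming image. As a minimal sketch of the validation pattern only, the code below uses a hypothetical header layout and a CRC-32 as a stand-in for a proper signature check (a CRC detects corruption but provides no authenticity), and refuses any image whose header does not match its payload:

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical firmware image header -- real formats differ per vendor. */
typedef struct {
    uint32_t magic;   /* identifies the image format */
    uint32_t length;  /* payload length in bytes */
    uint32_t crc;     /* integrity check over the payload */
} fw_header;

#define FW_MAGIC 0x46574D47u  /* illustrative value */

/* Standard reflected CRC-32 (polynomial 0xEDB88320). Note: a CRC only
 * detects accidental corruption; authenticity requires a signature. */
uint32_t crc32(const uint8_t *data, size_t len) {
    uint32_t crc = 0xFFFFFFFFu;
    for (size_t i = 0; i < len; i++) {
        crc ^= data[i];
        for (int b = 0; b < 8; b++) {
            uint32_t mask = (crc & 1u) ? 0xEDB88320u : 0u;
            crc = (crc >> 1) ^ mask;
        }
    }
    return ~crc;
}

/* Validate all header fields before touching the payload. */
int fw_image_valid(const fw_header *h, const uint8_t *payload, size_t avail) {
    if (h->magic != FW_MAGIC) return 0;  /* wrong image type */
    if (h->length > avail) return 0;     /* truncated or lying header */
    return crc32(payload, h->length) == h->crc;
}
```

The key design point is that the updater treats every header field as attacker-controlled: the length is checked against the bytes actually received before it is used anywhere.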
Regulatory actions attempt to address this gap: the EU now requires vendors to explicitly state the period for which cybersecurity fixes will be available for a product, and the UK is moving towards a similar initiative. Implementation errors are best avoided through the use of standardized firmware update mechanisms that have undergone security audits, while operational errors call for more involved processes across the supply chain. On a proactive level, experts agree that network filtering and segmentation would severely limit the effects of a compromised device.
The use of common default credentials on embedded devices has been plaguing Internet infrastructure for decades. Devices bearing user (or service) accounts with guessable credentials are easily taken over by attackers, and are used as mechanisms for malware infection, pivot points for further attacks, or as members of botnets. For these and other default configuration problems, it is advised to follow strong default security and privacy measures at the design level, protecting devices and users even when products are configured with factory settings.
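One proactive design measure is to ship devices in a state where the factory credential works exactly once: the first successful login must set a device-unique credential before any service is exposed. The sketch below illustrates that state machine; the names, the `FACTORY_DEFAULT` value, and the plain-text storage are illustrative only (production firmware would store a salted hash):

```c
#include <string.h>

#define FACTORY_DEFAULT "admin"

typedef struct {
    char password[64];
    int  must_change;  /* set while the factory default is still active */
} account;

/* Constant-time comparison: avoids leaking how many leading
 * characters matched through response timing. */
static int ct_equal(const char *a, const char *b, size_t n) {
    unsigned char diff = 0;
    for (size_t i = 0; i < n; i++)
        diff |= (unsigned char)(a[i] ^ b[i]);
    return diff == 0;
}

/* Returns: 0 = denied, 1 = logged in, 2 = credential change required. */
int login(const account *acc, const char *attempt) {
    char padded[64] = {0};
    strncpy(padded, attempt, sizeof padded - 1);
    if (!ct_equal(acc->password, padded, sizeof padded)) return 0;
    return acc->must_change ? 2 : 1;
}

int set_password(account *acc, const char *newpw) {
    if (strcmp(newpw, FACTORY_DEFAULT) == 0) return -1; /* refuse the default */
    memset(acc->password, 0, sizeof acc->password);
    strncpy(acc->password, newpw, sizeof acc->password - 1);
    acc->must_change = 0;
    return 0;
}
```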
Firmware code is often developed in low-level programming languages for better performance. Such implementations are often vulnerable to memory corruption bugs that allow an attacker to influence the device's software or data at runtime.
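As an illustration of this bug class (a hypothetical packet parser, not taken from any of the cases discussed here), copying an attacker-controlled field without checking its length against the destination buffer corrupts adjacent memory; the fix is to validate the length first:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical packet layout: [1-byte name length][name bytes ...] */

/* VULNERABLE: trusts the attacker-supplied length field, so a value
 * larger than 15 writes past the end of 'name' on the stack. */
void parse_name_vulnerable(const uint8_t *pkt) {
    char name[16];
    uint8_t len = pkt[0];
    memcpy(name, pkt + 1, len);  /* no bounds check */
    name[len] = '\0';
}

/* FIXED: reject any length the destination cannot hold. */
int parse_name_safe(const uint8_t *pkt, char *name, size_t name_sz) {
    uint8_t len = pkt[0];
    if (len >= name_sz) return -1;  /* leave room for the terminator */
    memcpy(name, pkt + 1, len);
    name[len] = '\0';
    return 0;
}
```

Note that the safe variant checks the length before any memory is read or written, so a malformed packet is rejected without side effects.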
With access to the source code of custom developed components, vendors enjoy a wide array of methods for identifying vulnerabilities, such as code auditing, static analysis, or even the use of semantic search tools.
It is equally important, though, to look for proactive ways to avoid such patterns of vulnerabilities. Recent developments in programming languages have yielded systems programming languages such as Rust, in which the compiler provides specific guarantees about the safety properties of the produced software.
Third-party components of embedded systems are often perceived as trusted “black boxes”. Sometimes these components perform security-sensitive work, such as secure boot or the cryptographic storage of secrets. Other times, their exploitation potential might not be obvious to the vendor but might have serious repercussions for the device.
To minimize the relevant risk, some vendors employ a zero-trust architecture in their products: a third-party component is treated as untrusted at the design stage, and its operation is isolated from the rest of the device.
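On POSIX-capable devices, one common way to realize this isolation is privilege separation: run the untrusted component in a separate process and let the main firmware accept only a length-checked result over a pipe, so a crash or exploit in the component cannot directly corrupt the main process. A simplified sketch follows; the `untrusted_transform` stub stands in for a third-party parser, and real deployments would additionally drop privileges or apply seccomp/MPU restrictions:

```c
#include <ctype.h>
#include <string.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* Stub for a third-party component we do not trust. */
static void untrusted_transform(const char *in, char *out, size_t n) {
    for (size_t i = 0; i < n; i++)
        out[i] = (char)toupper((unsigned char)in[i]);
}

/* Run the untrusted code in a child process. The parent accepts at
 * most outlen-1 bytes back, whatever the child does. */
int run_isolated(const char *input, char *out, size_t outlen) {
    int fds[2];
    if (pipe(fds) != 0) return -1;
    pid_t pid = fork();
    if (pid < 0) return -1;
    if (pid == 0) {                      /* child: the untrusted side */
        close(fds[0]);
        char buf[256];
        size_t n = strnlen(input, sizeof buf);
        untrusted_transform(input, buf, n);
        write(fds[1], buf, n);
        close(fds[1]);
        _exit(0);
    }
    close(fds[1]);                       /* parent: the trusted side */
    ssize_t n = read(fds[0], out, outlen - 1);
    close(fds[0]);
    int status;
    waitpid(pid, &status, 0);
    if (n < 0 || !WIFEXITED(status) || WEXITSTATUS(status) != 0)
        return -1;                       /* child crashed or misbehaved */
    out[n] = '\0';
    return 0;
}
```

If the third-party code is exploited, the attacker lands in a process that holds no secrets and whose only channel to the main firmware is a bounded read.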
Vendors that wish to manage the risk posed by vulnerabilities in third-party components face an elaborate identification effort: many third-party components are bundled in such a way that it is non-trivial to identify their primary “ingredients”. Suppliers customarily ship such components bundled with further third-party components, which may carry vulnerabilities of their own.
This is where “SBOM” (“Software Bill of Materials”) initiatives come into play. An SBOM is a list of the primary components that make up a piece of software (source, component name, version, etc.). With an SBOM at hand, a vendor (and a user) can independently track known vulnerabilities in the components integrated into a product and take appropriate action when a vulnerable component is identified.
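In its simplest form, this tracking reduces to matching (name, version) pairs from the product's SBOM against a feed of known-vulnerable releases. A toy sketch of that matching step is shown below; the component names and versions are illustrative, and real SBOMs use formats such as SPDX or CycloneDX, with vulnerability feeds describing affected version ranges rather than exact versions:

```c
#include <stddef.h>
#include <string.h>

typedef struct {
    const char *name;
    const char *version;
} component;

/* Illustrative vulnerability feed: exact (name, version) matches only. */
static const component known_vulnerable[] = {
    { "examplelib", "1.2.0" },
    { "tinyparser", "0.9.1" },
};

/* Return 1 if any SBOM entry matches a known-vulnerable release. */
int sbom_has_vulnerable(const component *sbom, size_t n) {
    size_t feed = sizeof known_vulnerable / sizeof known_vulnerable[0];
    for (size_t i = 0; i < n; i++)
        for (size_t j = 0; j < feed; j++)
            if (strcmp(sbom[i].name, known_vulnerable[j].name) == 0 &&
                strcmp(sbom[i].version, known_vulnerable[j].version) == 0)
                return 1;
    return 0;
}
```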
Results from security assessments and research conducted by CENSUS show that modern embedded systems build upon a complex stack, components of which may have gone into products without first passing a security audit.
In 2019, CENSUS identified a series of memory allocation issues in newlib, a very popular C library used in embedded systems. The identified issues also affected all downstream libraries and SDKs based on newlib, and could enable the installation of malicious code on affected devices.
Another example would be two memory corruption issues identified in an SDK used with cryptographic co-processors. CENSUS discovered that it was possible for someone with access to the vulnerable device to execute arbitrary code on the main microcontroller, if the device used the key or signature generation features of a particular chip vendor’s cryptographic co-processors.
For more information on this topic, have a look at our extended blog post.