Updating the Future

Later today I’ll be speaking at B-Sides London about software updates and how they are probably the only effective mechanism that can defend users against the malicious use of discovered, exploitable vulnerabilities. Despite that, we still have a long way to go and the rush towards everything being connected could leave users more exposed than they are now.

The recent “effective power” SMS bug in iOS showed that even a relatively minor user interface bug can cause widespread disruption, in that case largely because people thought it would be funny to send the message to their friends.

The state of mobile phone updates

In vertical supply chains that are generally wholly owned by the vendor (as in the Apple case), it is relatively straightforward to deploy fixes to users. The device’s security architecture supports all the necessary mechanisms: authenticating the device, fetching an update securely, then verifying, unpacking and delivering it to the user. The internal processes for software testing and approval are streamlined and consistent, so users get updates quickly. This is not the case for other operating systems. Android users have a very complicated supply chain to deal with unless they have a Google-supplied device. Mobile network interoperability issues can also cause problems, so network operators have to drive-test every device and approve the updates that come through. Security updates are often bundled with other system updates, meaning that critical security issues can stay open because users just don’t get them fixed for months on end.

That’s if they get an update at all. Some manufacturers have a very chequered history when it comes to supporting devices after they’ve left the factory. If devices are not updated and their users are continually exposed to serious internet security flaws such as those experienced with SSL, who is responsible? At the moment it seems nobody is. There is no regulation that says that users must be updated. There does seem to be a shift in the mobile industry towards longer software support lifecycles – Microsoft has committed to 36 months of support and Google to at least 18 months – but there is still a long way to go in ensuring that patch teams at manufacturers remain available to fix security issues, and that an ‘adequate’ end-of-life for products is achieved and communicated properly to users.

The internet of abandoned devices

A lot of IoT devices have no ability to be updated, let alone securely. The foundations are simply not there. There is no secure boot ROM to act as an anchor of trust from which to start, no secure boot mechanism to carefully build up trust as the device starts, and web update mechanisms are often not even secured using SSL. Software builds are as often as not unencrypted, and certainly not digitally signed.
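Even without a full secure-boot chain, the bare minimum an update client needs is to check a downloaded image against trusted expectations before installing it. The sketch below illustrates that idea in Python; it is a simplified, hypothetical example using a hash published in a trusted manifest. A real device would verify an asymmetric signature (e.g. RSA or Ed25519) rooted in a key baked into boot ROM, which plain hashing does not provide.

```python
# Minimal illustrative sketch of an update integrity check.
# Assumption: the device has obtained a trusted manifest digest over a
# secure channel. Real designs verify a digital signature instead.
import hashlib
import hmac

def verify_image(image: bytes, manifest_digest: str) -> bool:
    """Compare the SHA-256 of a downloaded image to the manifest value."""
    actual = hashlib.sha256(image).hexdigest()
    # Constant-time comparison avoids leaking how many bytes matched.
    return hmac.compare_digest(actual, manifest_digest)

good = hashlib.sha256(b"firmware-v2.bin contents").hexdigest()
print(verify_image(b"firmware-v2.bin contents", good))  # True
print(verify_image(b"tampered contents", good))         # False
```

The point of the sketch is the discipline, not the primitive: reject anything that doesn’t verify, before it ever touches flash.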

So with this starting point for our future, it appears that many of the hard lessons of the mobile phone world have not transferred to the IoT world. Even then, we have a lot of future challenges. Many IoT devices, and elements of the automotive space, are ‘headless’ – they have no display or user interface, so the user has no inkling of what is going on, good or bad. What are often termed ‘cyber-physical’ risks can rapidly become real problems for people. A problem with an update to a connected health device could harm a lot of people. Shortly before Google’s acquisition of Nest, a user had tweeted complaining that his pipes had burst. Understanding that certain services cannot just be turned off to allow for an update is key to engineering in this space.

Many of the devices that are planned to be deployed are severely constrained. Updating a device with memory and battery limitations is going to be possible only in limited circumstances. Many of these devices are going to be physically inaccessible too, but still need to be trusted. It’s not simply a question of replacing obsolete devices – digging a vibration sensor out of the concrete of a bridge is going to be pretty cumbersome. Some of this space will require systems architecture re-thinking, and mechanisms for living with the risk. It may be that it is simply impossible to have end-to-end security that can be trusted for any real length of time. As engineers, if we start from the premise that we can’t trust anything that has been deployed in the field, and that some of it can’t be updated at all, we might avoid some serious future issues.