A Code of Practice for Security in Consumer IoT Products and Services

Today is a good day. The UK government has launched its Secure by Design report, and it marks a major step forward for the UK in Internet of Things (IoT) security.

Embedded within the report is a draft “Code of Practice for Security in Consumer IoT Products and Associated Services”, which I authored in collaboration with DCMS and with input and feedback from various parties, including the ICO and the NCSC.

I have been a passionate advocate of strong product security since I worked at Panasonic and established the product security function in their mobile phone division, through to the mobile recommendations body OMTP, where we as an industry established the basis of hardware security and trust for future devices. We’re certainly winning in the mobile space – devices are significantly harder to breach, despite being under constant attack. This isn’t because of any single thing; it is multiple aspects of security built on the experiences of previous platforms and products. As technologies have matured, we’ve been able to implement things like software updates more easily and to establish what good looks like. Other aspects, such as learning how to interact with security researchers or the best architectures for separating computing processes, have also been learned over time.

Carrying over product security fundamentals into IoT

This isn’t the case, however, for IoT products and services. In some cases it feels like we’re stepping back 20 years. Frustratingly for those of us who’ve been through the painful years, solutions to many of the problems seen in modern, hacked IoT devices already exist in the mobile device world; they just haven’t been implemented in IoT. The same applies to the surrounding ecosystem of applications and services for IoT. Time and again we’re seeing entirely avoidable developer mistakes, such as a lack of certificate validation in mobile applications for IoT.

There is nothing truly ground-breaking within the Code of Practice. Many of the measures are not difficult to implement, but what we’re saying is that enough is enough. It is time to start putting our house in order, because we just can’t tolerate bad practice any more. For too long, vendors have been shipping products which are fundamentally insecure because no attention has been paid to security design. We have a choice. We can either have a lowest-common-denominator approach to security, or we can say “this is the bar and you must at least have these basics in place”. In 2018 it simply isn’t acceptable to have things like default passwords and open ports. This is how attacks like Mirai happen. The guidance addresses those issues, and had it been in place, the huge impact of Mirai would simply not have occurred. Now is the time to act, before the situation gets worse and people get physically hurt.

The prioritisation of the guidance was something we discussed at length. The top three measures (eliminating default passwords, providing security researchers with a way to disclose vulnerabilities, and keeping software updated) were put first because addressing these elements as a priority will have a huge beneficial impact on overall cyber security, creating a much more secure environment for consumers.

We’re not alone in saying this. Multiple governments and organisations around the world are concerned about IoT security and are publishing security recommendations to help. This includes the US’s NIST, Europe’s ENISA and organisations such as the GSMA and the IoT Security Foundation. I maintain a living list of IoT security guidance from around the world on this blog.

So, in order to make things more secure and ultimately safer (because a lot of IoT is already potentially life-impacting), it’s time to step things up and get better. Many parts of the IoT supply chain are already doing a huge amount on security, and those organisations are likely already meeting the guidance in the Code of Practice; but it is evident that a large number of products are failing even on the basics.

Insecurity Canaries

Measuring security is always difficult, which is why we decided to create an outcomes-based approach. What we want is for retailers and other parts of the supply chain to be able to easily identify what bad looks like. Some of the basics, such as shipping with default passwords or providing no way for security researchers to report vulnerabilities, can be seen as insecurity canaries: if the basics aren’t in place, what does that say about the more complex elements that are more difficult to see or to inspect?
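
As a purely illustrative sketch of the canary idea (the product fields, checks and example device below are my own assumptions, not part of the Code of Practice), a retailer-side check might look something like this:

```typescript
// Minimal sketch: cheap, observable checks that flag a product as failing
// the basics. Any flag raised here is a canary for deeper problems that are
// harder to see or inspect.
interface Product {
  name: string;
  usesDefaultPassword: boolean;             // e.g. admin/admin on every unit
  vulnerabilityDisclosureContact?: string;  // public route to report issues
  openPorts: number[];                      // externally reachable services
}

function insecurityCanaries(p: Product): string[] {
  const flags: string[] = [];
  if (p.usesDefaultPassword) flags.push("ships with a default password");
  if (!p.vulnerabilityDisclosureContact) flags.push("no vulnerability disclosure route");
  if (p.openPorts.includes(23)) flags.push("telnet open (Mirai's entry point)");
  return flags;
}

// Hypothetical example device:
const webcam: Product = { name: "BudgetCam", usesDefaultPassword: true, openPorts: [23, 80] };
console.log(insecurityCanaries(webcam));
// -> ["ships with a default password", "no vulnerability disclosure route", "telnet open (Mirai's entry point)"]
```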

Another reason to focus on outcomes was that we were very keen to avoid stifling creativity when it came to security solutions, so we’ve avoided being prescriptive other than to describe best practice approaches or where bad practices need to be eliminated.

The Future

I am looking forward to developing the work further based on the feedback from the informal consultation on the Code of Practice. I support the various standards and recommendations mapping exercises going on which will fundamentally make compliance a lot easier for companies around the world. I am proud to have worked with such a forward-thinking team on this project and look forward to contributing further in the future.

Additional Resources

I’ve also written about how the Code of Practice would have prevented major attacks on IoT:

Need to know where to go to find out about IoT security recommendations and standards?

Here’s a couple more things I’ve written on the subject of IoT security:

Shiny Expensive Things: The Global Problem of Mobile Phone Theft

I was kindly invited down to Bournemouth University the other day by Shamal Faily, to give a talk as part of their Cyber Seminar series. I decided to talk about a hot topic which I’m very familiar with: mobile phone theft. The slides are updated from an earlier talk, but cover some of the political involvement in 2012/13 and some information on recent industry action and what should happen next.

Chrome app security model is broken

I’m worried. I’m worried for a lot of users who’ve installed Chrome Apps. I was idly browsing the Apps in the Chrome web store the other day and came across the popular Super Mario 2 app on the front page (over 14k users). I have to admit, I actually installed the app (extension) myself, so let me explain the user (and security) experience.

I saw the big splash screen for the Flash game and thought I’d give it a try. There is a big install button (see picture). Installation is pretty instantaneous. As I looked at the screen, I saw the box to the bottom right: “This extension can access: Your data on all websites, Your bookmarks, Your browsing history”. I think I can legitimately give my mental response as “WTF!?! This is a game! What does it need access to all this for?”. I then immediately took steps to remove the app.
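
For context, warnings like these are driven by what the extension declares in its manifest. The permission names below are real Chrome manifest permissions, but the manifest as a whole is my reconstruction for illustration, not the actual extension’s code (shown as a TypeScript object so it can carry comments; on disk it would be manifest.json):

```typescript
// Hypothetical manifest requesting the three warning-triggering permissions.
const manifest = {
  name: "Super Mario 2",
  version: "1.0",
  permissions: [
    "bookmarks",   // surfaces to the user as "Your bookmarks"
    "tabs",        // surfaces as "Your browsing history"
    "http://*/*",  // host access: "Your data on all websites"
    "https://*/*",
  ],
};
```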

Removing the app

So, disabling and removing the app was not as straightforward as you would think, which was also quite annoying. The Chrome web store also includes ‘extensions’ to Chrome (the extensions gallery), and it is not easy for a user to see where these are installed. In fact, you have to go to Settings -> Tools -> Extensions to do anything about them. Normal installed Chrome apps are listed when you open a new tab (Ctrl+T), but this is not the case for extensions.

Permissions by default

Having removed the app, I set about investigating precisely what I had exposed this app to, and the implications. Under the “Learn more” link, I found a full description of the permissions that an application could request. I had to cross-reference these back to what the app/extension had asked for. The picture below shows the permissions (expanded) for the Super Mario 2 game.

I don’t want to go into great detail about the ins and outs of what some people would term “informed consent” or “notified consent”, but the bottom line is that a hell of a lot is being given away with very little responsibility on Google’s part. After all, to the average user, the Chrome ‘chrome’ is an implicit guarantor of trust. It’s a Google app store, so the apps must have been checked out by Google, right?

I also won’t go into the top-line permission, “All data on your computer…”, which an app gets by installing an NPAPI plug-in and which is essentially game over in terms of access to your computer. To be fair to Google, their developer guidelines state that any application using this permission will be manually checked by Google. However, the implication is that other applications and extensions aren’t.

So let’s concentrate on the permissions that are requested by the game.

  1. The first one, ‘Your bookmarks’, allows not only reading, but also modification of and additions to your bookmarks. Anyone fancy being set up? A legitimate link to your bank quietly redirected to a phishing site?
  2. The second item, ‘Your browsing history’, is going to reveal a lot for most people. Very quickly, a motivated attacker is going to know where you live from your searches on Google Maps, what illnesses you’re suffering from, and so on. There is a note here that this permission request is ‘often a by-product of an item needing to open new tabs or windows’. Most engineers would call this, frankly, a half-arsed effort.
  3. The third item, ‘Your data on all websites’, seems to give the application permission to access anything that I’m accessing. Then, the big yellow caution triangle: ‘Besides seeing all your pages, this item could use your credentials (cookies) to request your data from websites’. Woah. Run that one by me again? That’s a pretty big one. Basically, your attacker is home and dry. Lots of different types of attack exist to intercept cookies, which will automatically authenticate a user to a website; this has been demonstrated against high-profile sites such as Twitter and Facebook using tools such as Firesheep. Given that it is a major threat vector, surely Google would have properly considered this in their permissioning and application acceptance model? (A short sketch of what these permissions allow in practice follows below.)
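
To make the risk concrete, here is a minimal sketch, written against the standard Chrome extension APIs (it assumes the usual extension typings, e.g. @types/chrome), of what an arbitrary extension granted these permissions could do. This is my own illustration of the capabilities, not code from the Mario extension, and the phishing URL is obviously made up:

```typescript
// 1. "Your bookmarks": bookmarks can be read *and* rewritten. A malicious
//    extension could silently repoint a saved bank link at a phishing site.
chrome.bookmarks.search("bank", (results) => {
  for (const bookmark of results) {
    // Hypothetical phishing redirect, for illustration only.
    chrome.bookmarks.update(bookmark.id, { url: "https://evil.example.com/login" });
  }
});

// 2. "Your browsing history" (the warning shown for the tabs permission):
//    the URL and title of every open tab become readable.
chrome.tabs.query({}, (tabs) => {
  tabs.forEach((tab) => console.log(tab.url, tab.title));
});

// 3. "Your data on all websites": a content script injected into every page
//    can read page content and session cookies (document.cookie), i.e. the
//    credentials that keep the user logged in to sites like Twitter or Facebook.
```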

It’s pretty obvious how potentially bad the Mario extension could be, particularly when this is supposed to be just a Flash game. What really irks me, though, is the ‘permissions by default’ installation. You click one button and it’s there, almost immediately, with no prompt. Now, I’m not the greatest fan of prompts, but there are times when prompts are appropriate, and install time is actually one of them. It gives me the chance to review what I’ve selected and make a decision, especially if I hadn’t spotted that information on a busy and cluttered webpage. I hear you all telling me that no one reviews permission statements in Android apps, so why would they do it here? Yes, I partially agree. Human behaviour is such that if there is a hurdle in front of us and the motivation to go after the fantastic ‘dancing pigs’ application is sufficiently high, we’ll jump over the hurdle at any cost. There is also a danger that developers will go down the route they have with Facebook applications: users accept all the permissions or they don’t get dancing pigs. Users will more than likely choose dancing pigs (see here for more info on dancing pigs).

The beauty of a well designed policy framework

So we’re not in an ideal world, and everyone knows that. I firmly believe that there is a role for arbitration. Users are not security experts and are unlikely to make sensible decisions when faced with a list of technical functionality. However, the user must be firmly in control of the ultimate decision about what goes on their machine. If users could have a little security angel on their shoulder to advise them what to do next, that would give them much more peace of mind. This is where configurable policy frameworks come in. A fair bit of work has gone on in this area in the mobile industry through OMTP’s BONDI (now merged with JIL to become WAC) and also in the W3C (where it sadly just stopped, in the Device APIs and Policy working group). The EU webinos project is also looking at a policy framework.

The policy framework acts, in its basic sense, as a sort of firewall. It can be configured to blacklist or whitelist URIs to protect the user from maliciousness, or it can go to a greater level of detail and block access to specific functionality. In combination with well-designed APIs it can act in a better way than a firewall: rather than just blocking access, it tells the developer that the policy framework prevented access to the function, allowing the application to fail gracefully rather than just hang. Third parties that the user trusts (such as child protection charities, anti-virus vendors and so on) could provide policy to the user which is tailored to their needs. ‘Never allow my location to be released’, ‘only allow Google Maps to see my location’, ‘only allow a list of companies selected by Which? to use tracking cookies’: these are automated policy rules which are more realistic and easier for users to understand, and which actually assist and advance user security.
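
As a toy sketch of that idea (the rule shape and function names below are my own inventions, not BONDI’s or webinos’ actual interfaces), the key property is the structured denial, which lets an application fail gracefully instead of hanging:

```typescript
// A policy decision tells the caller *why* access was refused, so the
// application can degrade gracefully rather than hang on a silent block.
type Decision = { allowed: true } | { allowed: false; reason: string };

interface PolicyRule {
  capability: string;          // e.g. "geolocation", "cookies"
  origin: string;              // URI pattern the rule applies to ("*" = any)
  effect: "allow" | "deny";
}

// Policy could be supplied by a third party the user trusts; rules are
// evaluated first-match, most specific first.
const policy: PolicyRule[] = [
  { capability: "geolocation", origin: "https://maps.google.com", effect: "allow" },
  { capability: "geolocation", origin: "*", effect: "deny" }, // "never release my location"
];

function evaluate(capability: string, origin: string): Decision {
  for (const rule of policy) {
    if (rule.capability !== capability) continue;
    if (rule.origin === "*" || rule.origin === origin) {
      return rule.effect === "allow"
        ? { allowed: true }
        : { allowed: false, reason: "blocked-by-policy" };
    }
  }
  return { allowed: false, reason: "no-matching-rule" }; // default deny
}

// evaluate("geolocation", "https://maps.google.com") -> { allowed: true }
// evaluate("geolocation", "https://tracker.example.com") -> { allowed: false, ... }
```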

Lessons for Google

Takedown – Looking at some of the comments from users on the Super Mario game, it is pretty clear people aren’t happy, with people mentioning words like ‘virus’ and ‘scam’. The game has been up there since April; at the end of May, why haven’t Google done anything about it? The game doesn’t seem to be official, so it is highly likely to be in breach of Nintendo’s copyright. Again, why is this allowed in the Chrome web store? Is there any policing at all of the web store? Do Google respond to user reports of potentially malicious applications in a timely manner?

Permissions and Access – You should not have to open up permissions to your entire browsing history for an application to open a new tab! This is really, really bad security and privacy design.
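
Decoupling the two is straightforward least-privilege design, and (a detail worth checking against the version you target) the current Chrome extension APIs do work this way: chrome.tabs.create itself does not require the “tabs” permission, which only gates reading sensitive tab data such as URLs and titles.

```typescript
// Least-privilege sketch: opening a tab, on its own, needs no broad grant.
// In current Chrome APIs this call works with no "tabs" permission declared;
// the permission (and its scary warning) is only needed to *read* tab data.
chrome.tabs.create({ url: "https://example.com/" });
```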

Given what is happening with the evident permissiveness of Android and the Chrome web store, Google would do well to sit up and start looking at some better solutions, otherwise they could be staring regulation in the face.

Bootnote

I mentioned this to F-Secure’s Mikko Hypponen (@mikkohypponen) on Twitter and there were some good responses from his followers. @ArdaXi quite fairly pointed out that, just to open a new window, a developer needed to request the Chrome permission to access ‘Your browsing history’ (as discussed above). @JakeLSlater made the point that “google seem to be suggesting content not their responsibility, surely if hosted in CWS it has to be?” – I’m inclined to agree; they have at least some degree of responsibility if they are promoting it to users.

I notice that Google seem to have removed the offending application from the web store too. I think this followed MSNBC’s great article ‘Super Mario’ runs amok in Chrome Web app store, after they picked up on my link through Mikko. I think it may be fair to say that the extension has been judged malicious.