Today, the 15th of April 2022, marks the 110th anniversary of the tragic sinking of the RMS Titanic.
In 2013, I gave a Pecha Kucha talk in the Titanic museum after the CSIT security conference on the role of the wireless telegraph during the disaster. It’s both a good and a bad story – it highlights the many (many!) failings, but it also demonstrates the benefits of wireless communications during a disaster.
Just to give you a flavour of the multitude of things that went wrong or contributed to the sinking – the locker in the crow’s nest that held the binoculars was locked and inaccessible because an officer who left the ship in Southampton on the 9th of April had taken the key with him. This was seen as a contributing factor to the disaster.
I’ve posted the script with the images from the talk below (with some small additions).
The Role of the Wireless Telegraph During the Titanic Disaster – David Rogers Pecha Kucha talk
This is the last picture taken of the Titanic as it left Queenstown. The priest who took it could have stayed on board, but when he sought permission to stay, his superior sent him a telegram ordering him to “GET OFF THAT SHIP”. He spent the rest of his life telling people “it was the only time holy obedience ever saved a man’s life”.
Marconi and the telegraph
These two gentlemen were Titanic’s telegraph operators: John “Jack” Phillips, 25, known as ‘Sparks’ because he could send Morse code so fast, and Harold Bride, his junior, who was just 22.
Messages included news reports, passengers’ personal messages and information from other ships, such as ice reports, fog warnings and reports of derelicts.
Additional note here: the plan below (I took the photo in the Titanic museum in Belfast) shows where the Marconi room was in relation to the bridge:
Leaving the Titanic
Arrival in New York
Sale of the story
Reporting the sinking
Some additional thoughts
I really recommend visiting the Titanic Museum in Belfast. It is really well done and incredibly interesting, both for learning about the tragic events of the 14th/15th of April 1912 and for the social and engineering history behind Titanic and its passengers. They also have a section on the telegraph messages sent that night.
I also highly recommend visiting the Titanic Exhibition in Las Vegas at the Luxor. I couldn’t take any pictures inside the exhibition, but it is really something else – a lot of recovered artefacts including a huge part of the side of the ship give a real insight into what it was like, again well worth a visit if you’re in Vegas.
Last, but definitely not least, is the book ‘Titanic Calling: Wireless Communications During the Great Disaster’, edited by Michael Hughes and Katherine Bosworth and published by the Bodleian Library in Oxford, which is where the Marconi archive lives. This was published to mark the 100th anniversary of the disaster and is an incredible insight into all of the communications between the different ships.
Here’s my tea cup from dinner at the Titanic Museum in Belfast.
I was reminded today that this day was a long time coming. The person who triggered this was someone that I worked with when I was at Panasonic and he was at Nokia. Twenty years ago, we were sat in one of the smallest meeting rooms at Panasonic Mobile, next to the smoking room, as it was the only one available – the Head of Security Research from Vodafone, the Head of Security of the GSMA, plus the Security Group Chair of the GSMA and me.
The topic was hardware (IMEI) security and more broadly mobile phone security and how to deal with embedded systems hacking at an industry level. What kind of new measures could be brought in that would genuinely help to reduce the problem of mobile phone theft and make phones more secure? As they say, from small acorns, mighty oaks grow. I’d also argue it is probably quite a bit about persistence over a very long time.
It takes a very long time to make meaningful changes and while it’s easy to point out flaws, it’s harder to build new technology that addresses those in a game-changing way with complete industry buy-in. That’s pretty much what recommendations and standards bodies do, with the aim of seeking consensus – not complete agreement, but at least broad agreement on the means to effect large scale changes. Gradually and over a long period of time.
So we did that. Both in the Trusted Computing Group (TCG) and through the work of OMTP’s TR1: Advanced Trusted Execution Environment, which led to chip-level changes across the industry and ushered in a new era of hardware security in the mobile phone industry, providing the foundation of future trust. All of this work was nearly complete before the iPhone was on the market, I might add – and well before Android! From our published work, we expected it to be in phones from around 2012 onwards, and even then it took a little while before those OS providers hardened their systems sufficiently to be classed as really good security, but I should add that they have done a really good job of security leadership themselves since then.
With saturation in the smartphone space, around 2013/2014 the industry’s focus moved increasingly to the M2M (machine-to-machine) or IoT (Internet of Things) space, which had existed for a while but on a much smaller scale. A lot of things were coming together then – stuff was getting cheaper and more capable and it became increasingly viable to create more connected objects or things. But what we also saw were increasing numbers of companies ‘digitising’ – a washing machine vendor worried that they would be put out of business if they didn’t revolutionise their product by connecting it to the internet. That’s all well and good and I’m all for innovation, but the reality was that products were being put on the market that were really poor. With no experience of creating connected products, companies bought in ready-made solutions and platforms which came with little-to-no security measures. All the ports were exposed to the internet, default passwords were rife and never got changed, oh and software updates, what are they? It was and still is in many parts of the market, a mess.
Remember that this was new products being put into a market that was already a mess – for example, most webcams that had been sold for years were easy to access remotely and lots of tools had been created to make it even easier to discover and get into these devices, allowing intrusion into people’s private lives, their homes and their children.
Work began in organisations like the GSMA on creating security requirements for IoT that would force change. At the same time, hardware companies started to transfer their knowledge from the smartphone space into the hardware they were creating for the growing IoT sector. The IoT Security Foundation was established in late 2015 and the UK’s National Cyber Security Strategy from 2016-2021 stated that “the UK is more secure as a result of technology, products and services having cyber security designed into them by default”, setting us down the path that led us to the legislation introduced today. All of that work was an evolution and reinforcement of the growing body of product security recommendations that had already been created over a long period of time. Another thing I’ve observed is that in any particular time period, independent groups of people are exposed to the same set of issues, with the same set of tools and technologies at their disposal to rectify those issues. They therefore can all logically come to the same conclusions on things like how best to tackle the problem of IoT security.
In 2016, the Mirai attack happened (more info in the links below) and that helped to galvanise the support of organisations and politicians in understanding that large-scale insecurity in connected devices was a big and growing problem. A problem that was (mostly) easily solvable too. Other news stories and issues around IoT just added to this corpus of information that things weren’t well. You can also read more about the Code of Practice we created in the UK in the links below, but the key takeaway is this – there are small but fundamental changes that can raise the bar of cybersecurity substantially, reducing harm in a big way. This ranges from taking a firm stance on out-of-date and dangerous business practices e.g. companies and individuals being lazy, taking the easy route about things like default passwords and the hardware and software you use in your product development, to modernising the way that companies deal with security researchers – i.e. not threatening them and actually dealing with security issues that are reported by the good guys. So creating meaningful change is also about taking a stand against baked-in poor practice which has become endemic and so deeply entrenched throughout the world and its supply chains that it seems impossible to deal with.
I’ll never forget one meeting I was in where I presented a draft of the Code of Practice, where a guy from a technology company said “what we need is user education, not this”. I felt like I was on really solid ground when I was able to say “no, that’s rubbish. We need products that are built properly. For over 20 years, people have been saying we only need user education – it is not the answer”. I was empowered mainly because I could demonstrably show that user education hadn’t worked and perhaps that’s depressingly one of the reasons why we’re finally seeing change. Only in the face of obvious failure will things start to get better. But maybe I’m being too cynical. A head-of-steam was building for years. For example I was only able to win arguments about vulnerability disclosure and successfully countering “never talk to the hackers” because of the work of lots of people in the security research community who have fought for years to normalise vulnerability reporting to companies in the face of threats from lawyers and even getting arrested in some cases. And now we’re about to make it law that companies have to allow vulnerability reporting – and that they must act on it. Wow, just let that sink in for a second.
In the hacking and security research community are some of the brightest minds and freest thinkers. The work of this community has been the greatest in effecting change. It may not be, in the words of someone I spoke to last week, ‘professional’, when what I think they mean is ‘convenient’. The big splash news stories about hacks to insecure products actually force change in quite a big and public way, and sadly the truth is that change wouldn’t have happened if it wasn’t for these people making it public, because it would have been mostly swept under the carpet by the companies. It is that inconvenient truth that often makes large companies uncomfortable – fundamental change is scary, change equals cost and change makes my job harder. I’m not sure this culture will ever really change, but uniquely in the tech world we have this counter-balance when it comes to security – we have people who actively break things and are not part of an established corporate ecosystem that inherently discourages change.
Over the past 10 years, we’ve seen a massive change in attitudes towards the hacking community as cyber security becomes a real human safety concern and our reliance on the internet becomes almost existential for governments and citizens. They’re now seen as part of the solution and governments have turned to the policy-minded people in that community to help them secure their future economies and to protect their vital services. The security research community also needs the lawyers and civil servants – because they know how to write legislation, they know how to talk to politicians and they can fit everything into the jigsaw puzzle of existing regulation, making sure that everything works! So what I’ve also had reinforced in me is a huge respect for the broad range of skills that are needed to actually get stuff done and most of those are not actually the engineering or security bit.
A lot of the current drive towards supporting product security is now unfortunately driven by fear. There is a big ticking clock when it comes to insecure connected devices in the market. The alarm attached to that ticking clock is catastrophe – it could be ransomware that, as an onward impact, causes large-scale deaths in short order, or it could be major economic damage, whether deliberate or unintended. A ‘black swan of black swan events’ as my friend calls it. Whatever it is, it isn’t pretty. The initial warnings have been there for a while now from various cyber attacks and, across a range of fronts, positive work has been taking place to secure supply chains, encourage ‘secure by design / default’ in the product development lifecycle and to increase resilience in networks – which is the right thing to do: security should be commensurate with usage, and in reality the whole world really, really relies on the internet for almost everything in daily life.
This is another factor in the success of current cyber security work around the world. I work with people from all corners of the earth, particularly in the GSMA’s Fraud and Security Group. Everyone has the same set of issues – there are fraudsters in every country, everyone is worried about their family’s privacy, everyone wants to be safe. This makes this topic less political in the IoT space than people would imagine, and every country’s government wants their citizens to be safe. This is something that everyone can agree on and it makes standards setting and policy making a whole lot easier. With leadership from a number of countries (not just the UK, but I have to say I’m incredibly proud to be British when it comes to the great work on cyber security), we’re seeing massive defragmentation in standards, with a broad global consensus emerging on what good looks like and what we expect secure products and services to look like. If you step back and think about it – thousands and thousands of individuals working to make the world a safer place, for everyone. So the acorn twenty years ago was actually lots of acorns, and the oak tree is actually a forest.
So to everyone working on IoT security around the world I raise a glass – Cheers! and keep up the fantastic work.
Fraudsters are using the Covid-19 crisis as bait to conduct SMS scams on a global scale. Many of these criminals are adapting their existing campaigns to exploit the situation.
Some of the examples we’ve seen on the Twitter hashtag #covid19scamsms include text messages that trick recipients into divulging their personal and financial details based on lures of ‘goodwill payments’, ‘free home testing kits’ or ‘threats of a fine for breaking lockdown conditions’. In this post, we collate guidance from expert organizations and government agencies worldwide to help mobile phone users thwart such attacks, as well as providing our own advice.
Firstly, what are the tell-tale signs to look out for?
It can be very difficult to work out whether a message is real or not. The reason for this is that fraudsters are trying to trick you into believing that a message is genuine. One of the problems with SMS is that the sender ID can be easily spoofed. This means that something that looks real — for example, the sender is a name rather than a number, and says something like: “US_Gov” — might not in fact be real. Here’s a list of other things that might suggest an SMS is suspect:
The message comes from an unrecognisable number.
The message contains misspelt or poorly worded phrases.
The message uses strange characters that look like legitimate letters (in order to avoid spam filters and get through to you).
The message contains a web link for you to go to.
The message requests payment or suggests you will receive money if you provide your details.
The message attempts to rush or panic you into taking immediate action.
The message uses doubtful or clearly false names of government agencies or organizations, either in the web link or the message text itself.
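These tell-tale signs lend themselves to simple heuristics. As a toy illustration only – the keyword lists and patterns below are invented for this example and are nothing like the filtering operators actually run – a Python sketch:

```python
import re
import unicodedata

# Hypothetical word list for the "rush or panic" sign; illustrative only.
URGENCY_WORDS = {"immediately", "urgent", "final notice", "act now"}

def suspicious_signs(message: str) -> list[str]:
    """Return which of the tell-tale signs appear in an SMS body."""
    signs = []
    # Strange characters that mimic legitimate letters, e.g. a Cyrillic
    # 'а' in place of a Latin 'a' (used to slip past spam filters).
    for ch in message:
        # unicodedata.name's default arg keeps unnamed chars unflagged.
        if ch.isalpha() and "LATIN" not in unicodedata.name(ch, "LATIN"):
            signs.append("non-Latin look-alike character")
            break
    # A web link for you to go to.
    if re.search(r"https?://|www\.", message, re.IGNORECASE):
        signs.append("contains a web link")
    # Requests payment or offers money.
    if re.search(r"payment|refund|bank|£|\$", message, re.IGNORECASE):
        signs.append("mentions money or payment")
    # Attempts to rush or panic you into immediate action.
    if any(w in message.lower() for w in URGENCY_WORDS):
        signs.append("urgency or pressure")
    return signs
```

A message hitting several signs at once is a strong hint to stop and check via official channels rather than act on it.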
Next, what action should you take?
Never reply to the SMS or click on suspicious links. These could result in your phone being infected with malware or you losing money if you’re persuaded to enter credit card details or personal information such as addresses or passwords.
Don’t let anyone pressure you to make quick decisions. Stop and think; challenge the information provided in the SMS.
Only contact organizations using details obtained from official websites.
Check whether a government agency actually did send out messages to people. This might take a bit of searching on the web, but sometimes they’ll explain exactly what they sent. One example is the UK’s coronavirus SMS message.
If the message refers to a charity or non-profit, verify that the organization is registered – for example, in the US follow Federal Trade Commission advice, or in the UK search the charity register. Consider donating money via a different mechanism.
Keep your mobile phone’s software up-to-date to help reduce the chance that malware could exploit your device.
How can I help others?
We have started to tweet out some examples of these on Twitter to help organizations around the world with gathering threat intelligence. The hashtag we are using is #covid19scamsms.
If you receive a message, in the first instance, you should try and report this to your network operator. They are best-placed to tackle the issue and initiate blocking measures. In many countries you can do this by forwarding the SMS to 7726 (more details provided below). It helps to do this – it is important that the operator knows you’ve received a message that isn’t legitimate because this will tell them that something has got through their filters.
We would encourage anyone who receives a scam SMS message to post a screenshot to the hashtag as a small way of assisting in tackling the problem. For example, the information contained in the message could be a web link to a malicious site which can be taken down before it can cause harm to lots of users. Please make sure you remove any identifying information such as your phone number before you post an image.
And finally, how can you report the fraudulent activity so that government agencies and mobile network operators can take action?
I recently wrote about the topic of SIM swapping on my company’s site. This was also posted to the GSMA’s Fraud & Security Group blog. There has been an increase in the amount of awareness of the issue over the last 18 months or so and I expect that to continue throughout 2020. Some factors are driving it – the recently published Princeton paper is probably the first scientific analysis of these problems, especially on the social engineering aspect. Others are the sheer life impact as I describe in my earlier blog – either a huge loss of money or life-takeover of all the victim’s online accounts.
Some feedback I received from industry colleagues on LinkedIn is worth mentioning:
While I refer to ‘SIM swap’ – because that is the colloquial term we all understand – what is really happening is a re-assignment, by the operator, of the user’s credentials to access services to another SIM card, rather than a specific issue with the SIM itself. It’s primarily a process and procedural issue.
Like many other cyber security issues we face (not just in telecoms), particularly for trans-national issues, there is almost a complete absence of law enforcement. I’m not just talking about action, but even basic interest would be useful. Where it comes to technical topics, it can be very difficult for the victim to describe it to the Police, but a lack of Police training and structure for dealing with cyber security issues means ultimately criminals get away with it. This perpetuates the cycle of crime. If it’s international, then probably nothing will happen.
The authentication of the real user is at the core of the issue – improving these procedures in line with the increased attack surface and asset value is overdue.
SMS 2FA is not the solution that should be recommended because SS7 is too vulnerable – I actually disagree with this one on the basis that, as an interim solution, it is easy for operators to deploy and would raise the bar significantly. SS7 attacks are much more difficult to conduct than social engineering, and the argument ignores the fact that SS7 monitoring, controls and firewalls in line with GSMA guidance have been and are being implemented across the world.
One side-point was made that SMS 2FA isn’t 2FA because the phone number isn’t something the user controls. I think this is not correct – the second factor is really a combination of “something you have (the phone that receives the message)” and “something you know (the code that is sent)”. This point also rather ignores the practicalities of the problem – you need something that is going to work for millions of users. SMS 2FA is still the easiest and least worst solution for this. Arguably you’re sending the message ‘in-band’ and associated with the thing that is being targeted; however, logically, at that point it is under the control of the authentic user. These days there are other channels the operator could possibly use which are sort-of ‘out-of-band’ and they should explore these – e.g. WhatsApp or Signal messages, or an authenticator app such as Duo. I would argue that at least the last two of these are still quite niche for the ordinary user, and that raises complexity in the customer service chain, ultimately actually reducing security. It would also have to be carefully thought through – attackers don’t remain static.
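To make the “something you have plus something you know” point concrete, here is a minimal sketch of the server side of SMS OTP handling. This is illustrative only and not how any particular operator implements it: real deployments add rate limiting, delivery via an SMS gateway, and SIM-swap countermeasures such as holding back OTP delivery shortly after a SIM change.

```python
import hmac
import secrets
import time

OTP_TTL_SECONDS = 300  # codes expire after five minutes

def issue_code() -> tuple[str, float]:
    """Generate a 6-digit one-time code and its expiry timestamp.

    The code is sent to the phone ("something you have"); typing it
    back proves momentary knowledge of it ("something you know").
    """
    code = f"{secrets.randbelow(10**6):06d}"  # cryptographically random
    return code, time.time() + OTP_TTL_SECONDS

def verify_code(submitted: str, issued: str, expires_at: float) -> bool:
    """Check expiry, then compare in constant time to avoid timing leaks."""
    if time.time() > expires_at:
        return False
    return hmac.compare_digest(submitted, issued)
```

Note that nothing here defends against the SIM being re-assigned to an attacker – which is exactly why the process and procedural controls discussed above matter so much.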
One point was made that “We have to stop knitting new applications with old technology” and “Same horse same speed… ” – I and others would agree with this. With 5G we had a real opportunity to make a clean break from legacy technologies, however it hasn’t happened. We’ll carry some of those problems with us. I guess there are some similar analogies to replacing lead pipes in houses and cities – it is an economic and practical upgrade problem. We’ll get there I think.
Other comments talked about regulation and putting the liability onto operators for the financial losses of users. It is really not that simple in my view. If the target of the service is someone’s email or breaking into the bank – does the network operator retain sole liability for that? We also have to remember that the issue here is the criminals doing this – let’s focus on them a bit more and start prosecuting them.
The staff in the Secure by Design team at DCMS have been working incredibly hard to move forward on the commitments to explore how to identify to consumers what good looks like when it comes to purchasing a connected product. Alongside this, there have been many discussions on the various different possibilities for regulation.
The Minister for Digital, Margot James, has launched a consultation on new laws around the first three items in the Code of Practice – eliminating default passwords, responding to reported vulnerabilities and ensuring that software updates are provided to a transparent end date for consumers.
The consultation is open until the 5th of June 2019 – views can be emailed to: email@example.com or via post to Department for Digital, Culture, Media and Sport, 4th Floor, 100 Parliament Street, London, SW1A 2BQ.
The consultation states:
“We recognise that security is an important consideration for consumers. A recent survey of 6,482 consumers has shown that when purchasing a new consumer IoT product, ‘security’ is the third most important information category (higher than privacy or design) and among those who didn’t rank ‘security’ as a top-four consideration, 72% said that they expected security to already be built into devices that were already on the market.”
Importantly and one component of what we need to work to solve is this issue:
“It’s clear that there is currently a lack of transparency between what consumers think they are buying and what they are actually buying.”
Identifying products that have been designed with security in mind
As the cartoon below demonstrates – explaining security to consumers is difficult and could confuse and scare people, so a balance needs to be found. What the government is proposing in its consultation is to provide a label that explains some measurable elements about the security design approach of that product.
So how do you go about identifying how secure something is?
The answer is – with great difficulty. Even more so in the modern world, because the security properties of a device and service are not static.
To explain this a bit further – all technology will contain vulnerabilities that are not known about yet. These could be issues that are known types of security vulnerability, but that are buried and haven’t been caught during the design and testing process. When you have thousands, maybe even millions of lines of code, written by multiple people and from different companies, this isn’t unexpected. For every piece of software there will be a certain number of bugs, some of these will be security vulnerabilities and a smaller sub-set of these will be “exploitable” vulnerabilities – i.e. those that an attacker can use to do something useful (from their perspective!) to the system.
So this shows why software updates are critically important – in fact even some of those bugs that are not exploitable could in the future become exploitable, so deploying software updates in a preventative manner is a hygienic practice. It is a form of inoculation, because we all benefit from systems being patched; it reduces the number of systems that will be impacted in the future and therefore reduces the potency of attacks which have a major global impact. This of course is paramount in the internet of things, because everything is connected and the onward impact on people’s lives could become safety-impacting in some way. We have moved past the time where systems being disabled or unavailable were an inconvenience.
So what does a label give us? Well at this stage – what we can do is help a consumer make an informed purchasing decision. Answering questions like “how long does this device get security updates for?” is really useful. It also means that those companies that have no interest in providing updates (even though they’re critical to provide) can no longer hide behind anything. It’s there for the buyer to see – if you don’t provide the updates, the consumer is free to choose not to buy your product. Not really good business to ship rubbish anymore is it?
Regulation of the Code of Practice security measures
The intention by the government is to pass the Code of Practice measures into law over time. On the regulatory side of the top three from the Code of Practice, the government has boiled down the consultation to three potential options:
“● Option A: Mandate retailers to only sell consumer IoT products that have the IoT security label, with manufacturers to self declare and implement a security label on their consumer IoT products.
● Option B: Mandate retailers to only sell consumer IoT products that adhere to the top three guidelines, with the burden on manufacturers to self declare that their consumer IoT products adhere to the top three guidelines of the Code of Practice for IoT Security and the ETSI TS 103 645.
● Option C: Mandate that retailers only sell consumer IoT products with a label that evidences compliance with all 13 guidelines of the Code of Practice, with manufacturers expected to self declare and to ensure that the label is on the appropriate packaging.”
From a personal perspective, I find it fantastic that we’ve reached the point where we can get rid of a lot of the products that are blighting the market with blatant insecurity. Good riddance I say and let’s celebrate the companies that are really paying attention to consumer security.
The security label will be run on a voluntary basis by retailers until regulation comes into force and legislative options are taken forward. The consultation also includes example designs that could be used. Interestingly, when DCMS carried out a survey into which types of icons would be best, a padlock option was selected by less than 1% of participants. To me, what this reflects about the state of browser and web security and how we communicate security to users is somewhat depressing, but it serves as a reminder that trust is hard to earn, but easily lost.
This work is just another step down the road for globally improving IoT security. Again, it’s not the be all and end all, but it is a positive step and yet another example that the UK is leading the world by taking action, not just talking about IoT security.
Today marks the launch of the Code of Practice for Consumer IoT Security following a period of public consultation. You can find out more on the Department for Digital, Culture, Media & Sport’s (DCMS) website. The publication also means that the UK is now way ahead of the rest of the world in terms of leadership on improving IoT security and privacy.
As the original and lead author of the Code of Practice, I was really pleased to read the feedback and see that many other people feel the same way about improving the situation globally. I was able to discuss the feedback at length with colleagues from DCMS, the National Cyber Security Centre (NCSC) and other departments to ensure that we were creating a sensible measured set of guidance that took into account the needs and concerns of all stakeholders.
For further details on what the Code of Practice contains and why it exists, have a look at some of my previous blogs on this topic:
A number of other documents are being released today, all of which are well worth a read if you’re interested in this space.
Mapping Recommendations and Standards in the IoT security and privacy space
The thing that my team and I spent the most effort on over the summer period was mapping existing recommendations on IoT security and privacy from around the world against the Code of Practice. This was no mean feat and meant going through thousands of pages of pretty dry text. If you talk to anyone in the industry, it is a job that everyone knew needed doing but nobody wanted to do. Well, I can say it is done now (thank you Ryan and Mark particularly!), but things like this are a never-ending task. While we were working on it, new recommendations were being released and, inevitably, just after we’d completed our work others were published. Equally, we ran the risk of mapping the entirety of the technical standards space. For now at least, we’ve stopped short of that, and I think we’ve given implementers enough information that they’ll be able to understand what commonalities there are across different bodies and where to look. I’m still sufficiently sane to commit to keeping this updated, but we’ll let the initial dataset be used by companies first. Ultimately I’m hoping this is the tool that will aid defragmentation in the IoT security standards space, and again I’ll continue to support this effort.
I’m really pleased that the government agreed with the suggestion that we should make the mappings available as open data. We’ve also created visual mappings just to make things a little more readable. All of this is hosted at https://iotsecuritymapping.uk which is now live.
Mapping recommendations to the UK’s Code of Practice for Consumer IoT Security
Talking about the Code of Practice
I also continued to spend time discussing what we were doing with various security researchers and presented at both B-SidesLV in Las Vegas and at 44con in London. I also spoke to a number of different industry groups to explain what we were doing and what is happening next.
Most IoT products v Skilled hackers
I often used this picture, partly because it is of my cat Pumpkin, partly because it illustrates the reality of most companies that are looking to digitise their products. Their new shiny connected products are on the left protected by not a lot, whilst the skilled attackers sit ready to pounce. The mobile industry has been in a cat and mouse game (stay with me here) with hackers and crackers for around 20 years now. Broadly speaking, the mobile device is a hard target and there are some great engineers working in product security across the mobile industry. Take then the washing machine industry, just as an example. What experience does a company that produces washing machines have in device and internet security? Very little is the answer. Startups are encouraged to ship unfinished products and there is a continued prevailing attitude that companies can get away with doing and spending very little on security. It is no surprise that these products are easily broken and cause consumers significant security and privacy harm, further degrading consumer trust overall in connected products.
There is a lot of support for getting poorly secured products off the market. The recent research by Andrew Tierney on the Tapplock is just another demonstration of some of the rubbish that is allowed to be sold. I was pleased to see that it had been de-listed by Amazon.
We still have some way to go – the public feedback on the UK government’s IoT security Code of Practice is being reviewed right now, then it’s on to next steps following publication. My own personal feeling is that this is not about creating lots of new things in terms of security – much of what needs to be done is already written down and we know what good looks like. What it is really about is adoption and enforcement. How we check that runs us down the rabbit hole of how to do compliance (cf. the upcoming EU Cyber Security Act). I would prefer that we take the low-hanging fruit for now in order to improve things significantly and quickly. The top three guidelines of the UK’s Code of Practice are easily testable. If you don’t have these in place as a vendor, you’re going to be in serious trouble anyway; the rest almost doesn’t matter at that point.
How a consumer can check a product hasn’t been designed with security in mind
The good thing about this is that even a consumer could check those things – if any of these three things are missing, my view is don’t buy the product:
1) Does it have a default password? (Yes? I don’t want it.)
2) Can people report security vulnerabilities to the manufacturer? (Check the website – no? I don’t want it.)
3) Can I update the software, and for a period that I know about? (No? I don’t want it.)
I’ve described these things before as insecurity canaries – if the vendor is not adhering to some basic things that anyone can check, what does the rest of the product look like under the bonnet?
If you go to the /security page of Tapplock’s website you get a “coming soon” screen (yes I know, I thought that somewhat amusing too). So this already means I can’t easily report security vulnerabilities to them. To be fair, there is a lot of good practice out there that just needs adopting. There is a window of opportunity for IoT vendors and service providers to get it right before governments start bringing out the big stick. At the moment consumers are being defended by a small band of concerned security researchers who are demonstrating just how poorly secured some of these products really are.
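Check 2 above is even scriptable. As a minimal sketch: one emerging convention (the "security.txt" proposal) is for vendors to publish their disclosure policy at a well-known path; the helper names below are illustrative, not taken from any real tool, and plenty of vendors with a perfectly good disclosure process won't use this exact convention.

```python
# Hedged sketch: where a vendor's vulnerability disclosure policy should
# live under the security.txt convention, and how to pull contact details
# out of it. Helper names are illustrative.
from urllib.parse import urljoin


def security_txt_url(site: str) -> str:
    """Build the well-known location for a vendor's disclosure policy."""
    return urljoin(site, "/.well-known/security.txt")


def parse_contacts(body: str) -> list[str]:
    """Extract the 'Contact:' fields from a fetched security.txt body."""
    contacts = []
    for line in body.splitlines():
        if line.lower().startswith("contact:"):
            contacts.append(line.split(":", 1)[1].strip())
    return contacts
```

If the file doesn't exist and there's no other published route (a /security page, a bug bounty, even a monitored email address), that's the canary singing.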
I urge everyone interested to read the Secure by Design report, plus the guidance notes within, to see where things are going – especially the points about future consideration of regulation – and to understand that the Code of Practice is outcome-based, in order to make it easily measurable by, say, a consumer group, not just engineering people like me. During the development of the report a huge number of people were consulted, including much of the security research community, who provided invaluable advice and input.
On standards – I believe there is no need for additional standards in this space (the Code of Practice is not one), but there is a need for existing standards from a range of bodies to be mapped against its outcomes. What we actually need is for vendors to adopt the existing security standards within their products, and mappings help them understand the inter-relation between standards a bit better. Mappings can be used by vendors to achieve the desired outcome: securely designed products that retailers feel confident to sell.
So don’t believe everything the noisy people say for a soundbite on the news – make up your own mind. More importantly the report is open for public feedback until the 25th of April, so make your voices known!
Today is a good day. The UK government has launched its Secure by Design report and it marks a major step forward for the UK for Internet of Things (IoT) security.
Embedded within the report is a draft “Code of Practice for Security in Consumer IoT Products and Associated Services”, which I authored in collaboration with DCMS and with input and feedback from various parties including the ICO and the NCSC.
I have been a passionate advocate of strong product security since I worked at Panasonic and established the product security function in their mobile phone division, through to the mobile recommendations body OMTP where, as the mobile industry, we established the basis of hardware security and trust for future devices. We’re certainly winning in the mobile space – devices are significantly harder to breach, despite being under constant attack. This isn’t because of one single thing; it is multiple aspects of security built on the experiences of previous platforms and products. As technologies have matured, we’ve been able to implement things like software updates more easily and to establish what good looks like. Other aspects, such as learning how to interact with security researchers or the best architectures for separating computing processes, have also been learned over time.
Carrying over product security fundamentals into IoT
This isn’t the case however for IoT products and services. It feels in some cases like we’re stepping back 20 years. Frustratingly for those of us who’ve been through the painful years, the solutions already exist in the mobile device world for many of the problems seen in modern, hacked IoT devices. They just haven’t been implemented in IoT. This also applies to the surrounding ecosystem of applications and services for IoT. Time and again, we’re seeing developer mistakes such as a lack of certificate validation in mobile applications for IoT, which are entirely avoidable.
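The certificate-validation mistake mentioned above is worth making concrete. A minimal sketch using Python's standard ssl module: a default context verifies the server's certificate chain and hostname, while the common (and entirely avoidable) bug in IoT companion apps is shipping the unverified variant, which accepts any certificate.

```python
# The right way: a default context checks the certificate chain against
# trusted roots and verifies the hostname matches the certificate.
import ssl

secure = ssl.create_default_context()
assert secure.verify_mode == ssl.CERT_REQUIRED  # chain is validated
assert secure.check_hostname is True            # hostname is validated

# The mistake: a context that accepts any certificate at all, which lets
# an attacker on the network path silently intercept the app's traffic.
broken = ssl._create_unverified_context()
assert broken.verify_mode == ssl.CERT_NONE
```

Developers typically reach for the unverified variant to silence an error during testing against a self-signed certificate, and it then ships to production.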
There is nothing truly ground-breaking within the Code of Practice. Many of the measures are not difficult to implement, but what we’re saying is that enough is enough. It is time to start putting houses in order, because we just can’t tolerate bad practice any more. For too long, vendors have been shipping products which are fundamentally insecure because no attention has been paid to security design. We have a choice. We can either have a lowest common denominator approach to security or we can say “this is the bar and you must at least have these basics in place”. In 2018 it simply isn’t acceptable to have things like default passwords and open ports; this is how stuff like Mirai happens. The guidance addresses those issues, and had it been in place, the huge impact of Mirai would simply not have occurred. Now is the time to act, before the situation gets worse and people get physically hurt. The prioritisation of the guidance was something we discussed at length. The top three (eliminating the practice of default passwords, providing security researchers with a way to disclose vulnerabilities, and keeping software updated) were chosen because addressing these elements as a priority will have a huge beneficial impact on overall cyber security, creating a much more secure environment for consumers.
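The default-password guideline is the easiest of the three to express in code. A hedged sketch (the function and credential list are illustrative, not from any real device firmware): a device simply refuses to expose a remote service while a known factory credential is still in place.

```python
# Hypothetical first-boot gate: remote access stays disabled until the
# factory credential has been replaced. Mirai's scanner succeeded because
# huge numbers of devices shared a short list of credentials like these.
KNOWN_DEFAULTS = {
    ("admin", "admin"),
    ("root", "12345"),
    ("support", "support"),
}


def may_enable_remote_access(username: str, password: str) -> bool:
    """Allow a network-facing service only once no known default is set."""
    return (username, password) not in KNOWN_DEFAULTS
```

Better still is shipping a unique per-device password in the first place, which removes this entire class of internet-wide credential scanning.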
We’re not alone in saying this. Multiple governments and organisations around the world are concerned about IoT security and are publishing security recommendations to help. This includes the US’s NIST, Europe’s ENISA and organisations such as the GSMA and the IoT Security Foundation. I maintain a living list of IoT security guidance from around the world on this blog.
So, in order to make things more secure and ultimately safer (because a lot of IoT is already potentially life-impacting), it’s time to step things up and get better. Many parts of the IoT supply chain are already doing a huge amount on security, and those organisations are likely already meeting the guidance in the Code of Practice, but it is evident that a large number of products are failing even on the basics.
Measuring security is always difficult. This is why we decided to create an outcomes-based approach. What we want is for retailers and other parts of the supply chain to be easily able to identify what bad looks like. Some of the basic things, like eliminating default passwords or giving security researchers a way to make contact when they find vulnerabilities, can be seen as insecurity canaries – if the basics aren’t in place, what about the more complex elements that are harder to see or to inspect?
Another reason to focus on outcomes was that we were very keen to avoid stifling creativity when it came to security solutions, so we’ve avoided being prescriptive other than to describe best practice approaches or where bad practices need to be eliminated.
I am looking forward to developing the work further based on the feedback from the informal consultation on the Code of Practice. I support the various standards and recommendations mapping exercises going on which will fundamentally make compliance a lot easier for companies around the world. I am proud to have worked with such a forward-thinking team on this project and look forward to contributing further in the future.
I’ve also written about how the Code of Practice would have prevented major attacks on IoT:
This week it is Mobile World Congress, the biggest event in the mobile industry calendar. If you’re interested in meeting for a chat or just hearing about mobile and IoT security & privacy, I’ll be at the following places!
Copper Horse annual security dinner at a secret location in Barcelona
21:00-late (tweet me or message if you want to come along)
Monday 26th February
4YFN – “Hidden Threats and Opportunities to my Business”
Panelist: “Spotlight – How Data and Cyber Security can make or break a new business?”
16:15-17:15 4YFN (at the old Fira), Fira Barcelona Montjuïc, Av. Reina Maria Cristina
Tuesday 27th February
IoT Tuesday, hosted by Cellusys, supported by JT Group and the IoT Security Foundation
17:00-late Cellusys event – I’ll be giving an opening talk on behalf of the IoT Security Foundation, entitled “The Ticking Clock”: why security in IoT is critical to how you run your business. Tweet me if you want to attend.
Wednesday 28th February
Why Should we Trust your Digital Security?
I’ll be having a fireside chat in this session with Jean Gonie, VEON: Data, Consumer Protection and the GDPR
Auditorium 3, Hall 4 (on-site at MWC)
I’ll be at a few other events and will generally be around and about the MWC main site all week so please feel free to get in contact. Speaking of Barcelona, we’re holding our next training, “Foundations of IoT Security” in May in the city. More details and sign-up can be found on the IoTSF website.