The Wireless Telegraph and the Titanic

Today, the 15th of April 2022, marks the 110th anniversary of the tragic sinking of the RMS Titanic.

In 2013, I gave a Pecha Kucha talk in the Titanic museum after the CSIT security conference on the role of the wireless telegraph during the disaster. It’s both a good and a bad story – it highlights the many (many!) failings, but it also demonstrates the benefits of wireless communications during a disaster.

Just to give you a flavour of the multitude of things that went wrong or contributed to the sinking – the locker in the crow’s nest that held the binoculars was locked and inaccessible because an officer had left the ship in Southampton on the 9th of April, taking the key with him. This was seen as a contributing factor to the disaster.

I’ve posted the script with the images from the talk below (with some small additions).

The Role of the Wireless Telegraph During the Titanic Disaster – David Rogers Pecha Kucha talk

This is the last picture taken of the Titanic as it left Queenstown. The priest who took it could have stayed on board, but when he sought permission, his superior sent him a telegram ordering him to “GET OFF THAT SHIP”. He spent the rest of his life telling people “it was the only time holy obedience ever saved a man’s life”.

Marconi and the telegraph

This is Guglielmo Marconi. He made the first transatlantic transmission and heavily commercialised his work. His company provided the telegraph aboard Titanic.

Marconi was due to travel on the Titanic but instead travelled earlier on the Lusitania, another ship that was yet to become infamous.

Wireless operators

These two gentlemen were Titanic’s telegraph operators: John “Jack” Phillips, 25, known as “Sparks” because he could send Morse so fast, and Harold Bride, his junior, who was just 22.

Messages included news reports, passengers’ personal messages and information from other ships, such as ice reports, fog and reports of derelicts.

Marconi rooms

This is the only known picture of the ‘Marconi room’, the radio room onboard the Titanic.

Titanic’s wireless equipment was the most powerful fitted to any merchant vessel. Communication range was up to 400 miles, and at night it often increased to 2,000 miles.

Titanic’s wireless…

Here you can see Titanic’s wireless antenna, running from bow to stern, with a section in the middle.

The radio room was some 40 feet from the bridge, down a corridor, and despite being connected to 50 telephone lines, there was no phone line to the bridge.

Additional note here: the plan below (I took the photo in the Titanic museum in Belfast) shows where the Marconi room was in relation to the bridge:

Ice Warnings

Titanic received a number of radio warnings about ice. Only one warning was put on the officers’ notice board, but none of them were taken up to the bridge to Captain Smith. Shortly after the last warning, the Titanic struck an iceberg at full speed.

These pictures, taken the day after and showing streaks of red paint, are thought to be of the actual iceberg that Titanic hit.

The Californian

The SS Californian was very close to Titanic and sent that last warning, saying they were surrounded by ice. But on the Titanic, Phillips was busy with a backlog of passenger messages to Newfoundland and told the Californian to “keep out and shut up” because his wireless reception was being drowned out by the strong transmission of the closer ship.

SOS

It was over half an hour after they’d hit the iceberg before Captain Smith ordered the sending of the distress call. At 12.20am, the first distress call was sent from the Titanic.

After the first CQD (the old type of distress call), Bride said to Phillips: “Send S.O.S.! It’s the new call, and it may be your last chance to send it”.

Californian continued…

The Californian’s only wireless operator had turned off his set and gone to bed about 15 minutes before Titanic struck the iceberg. The two ships were only 6 miles apart. Crew on the Californian apparently saw lights and rockets but assumed there was a party. At 5am their wireless operator was woken, and only then did they learn the fate of the Titanic.

Wireless response

SOS had no specific meaning; it was chosen because it was easy to tap out and to recognise. Many meanings were attached to it afterwards, e.g. ‘save our souls’, ‘send out succour’ etc.

As you can see here, a number of different ships responded to Titanic’s strong SOS.

Hostile messages

A lot of ships were sending messages to the Titanic. Not all of these were helpful! The first message here is from the German ship the Frankfurt, whose wireless operator worked for Marconi’s fierce competitor, Telefunken. You can see the strong response from the Titanic, sent just a few minutes before it actually sank. At the time, not even emergency messages were shared with the competition!

Communications

Apparently the SOS sent by the Titanic was also picked up by a radio ham in Wales. He reported it to the local police who predictably didn’t believe him!

The sheer quantity of non-critical messages was huge. Marconi would make a lot of money from passengers sending messages during the voyage and they didn’t come cheap!

Leaving the Titanic

At about 2.20am, the last SOS message was sent from Titanic: “We are sinking fast”. Bride and Phillips were told by Captain Smith to leave. They had 3 minutes before the ship sank.

Bride made it onto an upturned lifeboat which was washed out to sea as the bow of Titanic went under. It is not clear what then happened to Phillips.

Sinking

At this point, the ship apparently broke apart at about the aft Grand Staircase (about where we’re standing) [additional note: there is a replica of the grand staircase in the museum]. Of the roughly 2,223 passengers and crew aboard, over 1,500 died. With too few lifeboats and a lifeboat drill cancelled the day before, it was a mess. Lifeboat One had a capacity of 40 but carried only 12 people.

Carpathia

The RMS Carpathia was the only ship to pick up survivors, arriving at about 4am, nearly two hours after Titanic had sunk. Harold Bride was seriously injured but helped the wireless operator of the Carpathia send out messages. There was some difficulty sending survivor lists because of congestion in the traffic and the sheer length of the lists.

Arrival in New York

Carpathia arrived in New York on the 18th of April, three days after the sinking. Here Harold Bride is helped off the ship, with one foot partially crushed and suffering from severe frostbite. The wireless operators were heroes.

Sale of the story

Marconi was a very PR-savvy man. He arranged for Harold Bride to give an exclusive interview to the New York Times, for which Bride was paid $500. It could be seen as damage limitation; however, it is true that everyone would have died had it not been for the wireless telegraph. This is Bride putting Phillips’ lifebelt on.

Reporting the sinking

Reporters offered to pay vast sums for the stories of the sinking and of the famous people on board while Carpathia was still at sea. Telegrams to the ship offered at least five hundred dollars a column, with one offering an unlimited amount; however, many of these messages simply didn’t get through due to higher-priority traffic.

Cyber Titanic

I’ve shown a number of failings here. Cyber themes today would include issues of incident handling, standardisation, new technology, drills, internationally understood procedures, warning escalation and the pain of media involvement. I wonder, though: what would a “Cyber Titanic” look like, and what would it be?
 
Thanks for listening.

Some additional thoughts

I really recommend visiting the Titanic Museum in Belfast. It is very well done and incredibly interesting, both for learning about the tragic events of the 14th/15th of April 1912 and for the social and engineering history behind Titanic and its passengers. They also have a section on the telegraph messages from that night.

I also highly recommend visiting the Titanic Exhibition at the Luxor in Las Vegas. I couldn’t take any pictures inside the exhibition, but it is really something else – a lot of recovered artefacts, including a huge part of the side of the ship, give a real insight into what it was like. Again, well worth a visit if you’re in Vegas.

Last, but definitely not least, is the book ‘Titanic Calling: Wireless Communications During the Great Disaster‘, edited by Michael Hughes and Katherine Bosworth and published by the Bodleian Library in Oxford, which is where the Marconi archive lives. It was published to mark the 100th anniversary of the disaster and is an incredible insight into all of the communications between the different ships.

Here’s my tea cup from dinner at the Titanic Museum in Belfast.

The Long Road to a Law on Product Security in the UK

As the UK’s Product Security and Telecommunications Infrastructure Bill entered Parliament today, I had some time to reflect on how far we’ve come.

I was reminded that today was a long time coming. The person who triggered this reflection was someone I worked with when I was at Panasonic and he was at Nokia. Twenty years ago, we sat in one of the smallest meeting rooms at Panasonic Mobile, next to the smoking room, as it was the only one available: the Head of Security Research from Vodafone, the Head of Security of the GSMA, the GSMA’s Security Group Chair and me.

The topic was hardware (IMEI) security and, more broadly, mobile phone security and how to deal with embedded systems hacking at an industry level. What kind of new measures could be brought in that would genuinely help to reduce the problem of mobile phone theft and make phones more secure? As they say, from small acorns mighty oaks grow. I’d also argue it is probably quite a bit about persistence over a very long time.

It takes a very long time to make meaningful changes, and while it’s easy to point out flaws, it’s harder to build new technology that addresses them in a game-changing way with complete industry buy-in. That’s pretty much what recommendations and standards bodies do, with the aim of seeking consensus – not complete agreement, but at least broad agreement on the means to effect large-scale change, gradually and over a long period of time.

So we did that, both in the Trusted Computing Group (TCG) and through the work of OMTP’s TR1: Advanced Trusted Execution Environment, which led to chip-level changes across the industry and ushered in a new era of hardware security in the mobile phone industry, providing the foundation of future trust. All of this work was nearly complete before an iPhone was on the market, I might add, and well before Android! From our published work, we expected it to be in phones from around 2012 onwards, and even then it took a little while before the OS providers hardened their systems sufficiently to be classed as really secure. I should add that they have shown really good security leadership themselves since then.

With saturation in the smartphone space, around 2013/2014 the industry’s focus moved increasingly to the M2M (machine-to-machine) or IoT (Internet of Things) space, which had existed for a while but on a much smaller scale. A lot of things were coming together then – hardware was getting cheaper and more capable, and it became increasingly viable to create more connected objects or things. But what we also saw were increasing numbers of companies ‘digitising’ – a washing machine vendor worried that it would be put out of business if it didn’t revolutionise its product by connecting it to the internet. That’s all well and good, and I’m all for innovation, but the reality was that products were being put on the market that were really poor. With no experience of creating connected products, companies bought in ready-made solutions and platforms which came with little to no security measures. All the ports were exposed to the internet, default passwords were rife and never got changed, and software updates – what are they? It was, and in many parts of the market still is, a mess.

Remember that these were new products being put into a market that was already a mess – for example, most webcams that had been sold for years were easy to access remotely, and lots of tools had been created to make it even easier to discover and get into these devices, allowing intrusion into people’s private lives, their homes and the lives of their children.

Work began in organisations like the GSMA on creating security requirements for IoT that would force change. At the same time, hardware companies started to transfer their knowledge from the smartphone space into the hardware they were creating for the growing IoT sector. The IoT Security Foundation was established in late 2015, and the UK’s National Cyber Security Strategy for 2016-2021 stated that “the UK is more secure as a result of technology, products and services having cyber security designed into them by default”, setting us down the path that led to the legislation introduced today. All of that work was an evolution and reinforcement of the growing body of product security recommendations that had already been created over a long period of time. Another thing I’ve observed is that in any particular time period, independent groups of people are exposed to the same set of issues, with the same set of tools and technologies at their disposal to rectify them. They can therefore all logically come to the same conclusions on things like how best to tackle the problem of IoT security.

In 2016, the Mirai attack happened (more info in the links below), and that helped to galvanise the support of organisations and politicians in understanding that large-scale insecurity in connected devices was a big and growing problem – a problem that was (mostly) easily solvable too. Other news stories and issues around IoT just added to this corpus of evidence that things weren’t well. You can also read more about the Code of Practice we created in the UK in the links below, but the key takeaway is this – there are small but fundamental changes that can raise the bar of cybersecurity substantially, reducing harm in a big way. These range from taking a firm stance on out-of-date and dangerous business practices (e.g. companies and individuals being lazy and taking the easy route on things like default passwords, or on the hardware and software used in product development) to modernising the way that companies deal with security researchers – i.e. not threatening them, and actually dealing with security issues that are reported by the good guys. So creating meaningful change is also about taking a stand against baked-in poor practice which has become endemic, so deeply entrenched throughout the world and its supply chains that it seems impossible to deal with.

I’ll never forget one meeting where I presented a draft of the Code of Practice and a guy from a technology company said “what we need is user education, not this”. I felt like I was on really solid ground when I was able to say “no, that’s rubbish. We need products that are built properly. For over 20 years, people have been saying we only need user education – it is not the answer”. I was empowered mainly because I could demonstrably show that user education hadn’t worked, and perhaps that’s, depressingly, one of the reasons why we’re finally seeing change. Only in the face of obvious failure will things start to get better. But maybe I’m being too cynical. A head of steam had been building for years. For example, I was only able to win arguments about vulnerability disclosure and successfully counter “never talk to the hackers” because of the work of lots of people in the security research community, who have fought for years to normalise vulnerability reporting to companies in the face of threats from lawyers and, in some cases, even arrest. And now we’re about to make it law that companies have to allow vulnerability reporting – and that they must act on it. Wow, just let that sink in for a second.

In the hacking and security research community are some of the brightest minds and freest thinkers. The work of this community has done more than anything else to effect change. It may not be, in the words of someone I spoke to last week, ‘professional’ – when what I think they mean is ‘convenient’. The big splash news stories about hacks of insecure products actually force change in quite a big and public way, and sadly the truth is that change wouldn’t have happened if it wasn’t for these people making it public, because it would mostly have been swept under the carpet by the companies. It is that inconvenient truth that often makes large companies uncomfortable – fundamental change is scary, change equals cost, and change makes my job harder. I’m not sure this culture will ever really change, but uniquely in the tech world we have a counter-balance when it comes to security – people who actively break things and are not part of an established corporate ecosystem that inherently discourages change.

Over the past 10 years, we’ve seen a massive change in attitudes towards the hacking community, as cyber security becomes a real human safety concern and our reliance on the internet becomes almost existential for governments and citizens. Hackers are now seen as part of the solution, and governments have turned to the policy-minded people in that community to help them secure their future economies and protect their vital services. The security research community also needs the lawyers and civil servants – because they know how to write legislation, they know how to talk to politicians, and they can fit everything into the jigsaw puzzle of existing regulation, making sure that everything works! So what I’ve also had reinforced in me is a huge respect for the broad range of skills that are needed to actually get stuff done – and most of those are not the engineering or security bit.

A lot of the current drive towards supporting product security is now, unfortunately, driven by fear. There is a big ticking clock when it comes to insecure connected devices in the market. The alarm attached to that ticking clock is catastrophe – it could be ransomware that, as an onward impact, causes large-scale deaths in short order, or it could be major economic damage, whether deliberate or unintended. A ‘black swan of black swan events’, as my friend calls it. Whatever it is, it isn’t pretty. The initial warnings from various cyber attacks have been there for a while now, and across a range of fronts positive work has been taking place to secure supply chains, encourage ‘secure by design / default’ in the product development lifecycle and increase resilience in networks – which is the right thing to do. Security should be commensurate with usage, and in reality the whole world really, really relies on the internet for literally everything in our lives.

This is another factor in the success of current cyber security work around the world. I work with people from all corners of the earth, particularly in the GSMA’s Fraud and Security Group. Everyone has the same set of issues – there are fraudsters in every country, everyone is worried about their family’s privacy, everyone wants to be safe. This makes the topic less political in the IoT space than people would imagine; every country’s government wants its citizens to be safe. It is something that everyone can agree on, and it makes standards setting and policy making a whole lot easier. With leadership from a number of countries (not just the UK, but I have to say I’m incredibly proud to be British when it comes to the great work on cyber security), we’re seeing massive defragmentation in standards, and a broad global consensus is emerging on what good looks like and what we expect of secure products and services. If you step back and think about it – thousands and thousands of individuals working to make the world a safer place, for everyone. So the acorn twenty years ago was actually lots of acorns, and the oak tree is actually a forest.

So to everyone working on IoT security around the world I raise a glass – Cheers! and keep up the fantastic work.

My RSA talk on the UK’s Code of Practice for Consumer IoT Security in 2019.

Further reading:

Further Thoughts on SIM Swap

I recently wrote about the topic of SIM swapping on my company’s site. This was also posted to the GSMA’s Fraud & Security Group blog. There has been an increase in awareness of the issue over the last 18 months or so, and I expect that to continue throughout 2020. Several factors are driving it – the recently published Princeton paper is probably the first scientific analysis of these problems, especially the social engineering aspect. Another is the sheer life impact, as I describe in my earlier blog – either a huge loss of money or a takeover of all the victim’s online accounts.

Some feedback I received from industry colleagues on Linkedin is worth mentioning:

  • While I refer to ‘SIM swap’ – because that is the colloquial term we all understand – what is really happening is a re-assignment of the user’s credentials to access services, by the operator, to another SIM card, rather than a specific issue with the SIM itself. It’s primarily a process and procedural issue.
  • Like many other cyber security issues we face (not just in telecoms), particularly trans-national ones, there is almost a complete absence of law enforcement. I’m not just talking about action – even basic interest would be useful. When it comes to technical topics, it can be very difficult for the victim to describe the crime to the police, and a lack of police training and structure for dealing with cyber security issues means that ultimately criminals get away with it. This perpetuates the cycle of crime. If it’s international, then probably nothing will happen.
  • The authentication of the real user is at the core of the issue – improving these procedures in line with the increased attack surface and asset value is overdue.
  • SMS 2FA is not the solution that should be recommended because SS7 is too vulnerable – I actually disagree with this one, on the basis that as an interim solution it is easy for operators to deploy and would raise the bar significantly. SS7 attacks are much more difficult to conduct than social engineering, and the argument ignores the fact that SS7 monitoring, controls and firewalls in line with GSMA guidance have been and are being implemented across the world.
  • One side-point was made that SMS 2FA isn’t 2FA because the phone number isn’t something the user controls. I think this is not correct – the second factor is really a combination of “something you have (the phone that receives the message)” and “something you know (the code that is sent)”. The point also rather ignores the practicalities of the problem – you need something that is going to work for millions of users, and SMS 2FA is still the easiest and least-worst solution for that. Arguably you’re sending the message ‘in-band’, associated with the very thing that is being targeted; however, logically, at that point it is under the control of the authentic user. These days there are other channels the operator could possibly use which are sort-of ‘out-of-band’, and they should explore these – i.e. WhatsApp or Signal messages, or an authenticator app such as Duo (there’s a sketch of how such app-generated codes work after this list). I would argue that at least the last two of these are still quite niche for the ordinary user, and that raises complexity in the customer service chain, ultimately actually reducing security. It would also have to be carefully thought through – attackers don’t remain static.
  • One point was made that “We have to stop knitting new applications with old technology” and “Same horse same speed… ” – I and others would agree with this. With 5G we had a real opportunity to make a clean break from legacy technologies; however, it hasn’t happened, and we’ll carry some of those problems with us. I guess there’s an analogy with replacing lead pipes in houses and cities – it is an economic and practical upgrade problem. We’ll get there, I think.
  • Other comments talked about regulation and putting the liability for users’ financial losses onto operators. It is really not that simple in my view. If the target of the attack is someone’s email account or their bank, does the network operator retain sole liability for that? We also have to remember that the real issue here is the criminals doing this – let’s focus on them a bit more and start prosecuting them.
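
For the technically curious, here’s a minimal sketch of how the authenticator-app codes mentioned above are generated – the time-based one-time password (TOTP) scheme of RFC 6238 that apps like Duo and Google Authenticator implement. The shared secret below is a made-up example:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password, as used by authenticator apps.
    The server and the app share the secret; both derive the same code from
    the current time, so no SMS (and no SS7) is involved."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of 30-second intervals since the epoch.
    counter = struct.pack(">Q", int(time.time()) // interval)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes based on the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("JBSWY3DPEHPK3PXP"))  # example shared secret, not a real one
```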

The UK’s National Cyber Security Centre has an excellent and pragmatic guide for enterprises using SMS: ‘Protecting SMS messages used in critical business processes‘.

Security change for good in the Internet of Things

Today marks the launch of the Code of Practice for Consumer IoT Security following a period of public consultation. You can find out more on the Department for Digital, Culture, Media & Sport’s (DCMS) website. The publication also means that the UK is now way ahead of the rest of the world in terms of leadership on improving IoT security and privacy.

As the original and lead author of the Code of Practice, I was really pleased to read the feedback and see that many other people feel the same way about improving the situation globally. I was able to discuss the feedback at length with colleagues from DCMS, the National Cyber Security Centre (NCSC) and other departments to ensure that we were creating a sensible, measured set of guidance that took into account the needs and concerns of all stakeholders.

For further details on what the Code of Practice contains and why it exists, have a look at some of my previous blogs on this topic:

A number of other documents are being released today, all of which are well worth a read if you’re interested in this space.

Mapping Recommendations and Standards in the IoT security and privacy space

The thing that my team and I spent the most effort on over the summer was mapping existing recommendations on IoT security and privacy from around the world against the Code of Practice. This was no mean feat and meant going through thousands of pages of pretty dry text. If you talk to anyone in this industry space, it is a job that everyone knew needed doing but nobody wanted to do. Well, I can say it is done now (thank you Ryan and Mark particularly!), but things like this are never-ending tasks. While we were working on it, new recommendations were being released, and inevitably, just after we’d completed our work, others were published. Equally, we ran the risk of trying to map the entirety of the technical standards space. For now at least, we’ve stopped short of that, and I think we’ve given implementers enough information that they’ll be able to understand what commonalities there are across different bodies and where to look. I am still sufficiently sane to state that I’ll commit to keeping this updated, but we’ll let companies work with the initial dataset first. Ultimately I’m hoping this is the tool that will aid defragmentation in the IoT security standards space, and I’ll continue to support this effort.

I’m really pleased that the government agreed with the suggestion that we should make the mappings available as open data. We’ve also created visual mappings just to make things a little more readable. All of this is hosted at https://iotsecuritymapping.uk which is now live.

Mapping recommendations to the UK’s Code of Practice for Consumer IoT Security

Talking about the Code of Practice

I also continued to spend time discussing what we were doing with various security researchers and presented at both B-SidesLV in Las Vegas and at 44con in London. I also spoke to a number of different industry groups to explain what we were doing and what is happening next.

Most IoT products v Skilled hackers

I often used this picture, partly because it is of my cat Pumpkin and partly because it illustrates the reality for most companies that are looking to digitise their products. Their shiny new connected products are on the left, protected by not a lot, whilst the skilled attackers sit ready to pounce. The mobile industry has been in a cat-and-mouse game (stay with me here) with hackers and crackers for around 20 years now. Broadly speaking, the mobile device is a hard target, and there are some great engineers working in product security across the mobile industry. Take the washing machine industry, just as an example. What experience does a company that produces washing machines have in device and internet security? Very little is the answer. Startups are encouraged to ship unfinished products, and there is a continued prevailing attitude that companies can get away with doing and spending very little on security. It is no surprise that these products are easily broken and cause consumers significant security and privacy harm, further degrading overall consumer trust in connected products.

No more. Change is here.

Consumers should be able to reject IoT products as not secure with these simple checks

There is a lot of support for getting poorly secured products off the market. The recent research by Andrew Tierney on the Tapplock is just another demonstration of some of the rubbish that is allowed to be sold. I was pleased to see that it had been de-listed by Amazon.

 

We still have some way to go – the public feedback on the UK government’s IoT security Code of Practice is being reviewed right now, and then it’s on to next steps following publication. My own personal feeling is that this is not about creating lots of new things in terms of security – much of what needs to be done is already written down, and we know what good looks like. What it is really about is adoption and enforcement. How we check that runs us down the rabbit hole of how to do compliance (cf. the upcoming EU Cyber Security Act). I would prefer that we take the low-hanging fruit for now in order to improve things significantly and really quickly. The top three items of the UK’s Code of Practice are easily testable. If, as a vendor, you don’t have these, you’re going to be in serious trouble anyway; the rest almost doesn’t matter at that point.

How a consumer can check a product hasn’t been designed with security in mind

The good thing about this is that even a consumer could check those things – if any of these three things are missing, my view is don’t buy the product:

1) Does it have a default password? (I don’t want it).
2) Can people report security vulnerabilities to the manufacturer? (check website – no? I don’t want it).
3) Can I update the software and for a period that I know about? (No – I don’t want it).

I’ve described these things before as insecurity canaries – if the vendor is not adhering to some basic things that anyone can check, what does the rest of the product look like under the bonnet?

If you go to the /security page of Tapplock’s website you get a “coming soon” screen (yes I know, I thought that somewhat amusing too). So this already means I can’t easily report security vulnerabilities to them. To be fair, there is a lot of good practice out there that just needs adopting. There is a window of opportunity for IoT vendors and service providers to get it right before governments start bringing out the big stick. At the moment, consumers are being defended by a small band of concerned security researchers who are demonstrating just how poorly secured some of these products really are.
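
Incidentally, check 2 is the sort of thing that could even be automated. The draft security.txt convention proposes a well-known location for exactly this information, so a quick probe tells you whether a vendor has published a way to report vulnerabilities. A minimal sketch (the domain is a placeholder, and a missing file doesn’t prove the vendor ignores reports – it’s just another insecurity canary):

```python
import urllib.request

def has_security_contact(domain: str) -> bool:
    """Probe for a published vulnerability-reporting policy using the
    draft security.txt convention (a well-known file on the vendor's site)."""
    for path in ("/.well-known/security.txt", "/security.txt"):
        try:
            with urllib.request.urlopen(f"https://{domain}{path}", timeout=5) as resp:
                if resp.status == 200 and b"Contact:" in resp.read():
                    return True
        except OSError:
            continue  # 404s and connection errors both count as "not found"
    return False

print(has_security_contact("example.com"))  # placeholder vendor domain
```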

Government Reports, IoT Security, Mirai and Regulation

I saw a misleading report yesterday from a security researcher who said that the UK’s Code of Practice on IoT security couldn’t have prevented something like Mirai. Luckily I had already written something that explains how Mirai would have been prevented: https://www.copperhorse.co.uk/how-the-uks-code-of-practice-on-iot-security-would-have-prevented-mirai

I urge everyone interested to read the Secure by Design report, plus the guidance notes within, to see where things are going – especially the points about future consideration of regulation – and to understand that the Code of Practice is outcome-based, in order to make it easily measurable by, say, a consumer group, not just engineering people like me. During the development of the report a huge number of people were consulted, including a lot of the security research community, who provided invaluable advice and input.

On standards – I believe there is no need for additional standards in this space (the Code of Practice is not one), but there is a need for existing standards from a range of bodies to be mapped against the outcomes. What we actually need is for vendors to adopt the existing security standards within their products, and to help them understand the inter-relation between standards a bit better. Mappings can be used by vendors to achieve the desired outcome: securely designed products that retailers feel confident to sell.

So don’t believe everything the noisy people say for a soundbite on the news – make up your own mind. More importantly, the report is open for public feedback until the 25th of April, so make your voices heard!

A Code of Practice for Security in Consumer IoT Products and Services

 

Today is a good day. The UK government has launched its Secure by Design report and it marks a major step forward for the UK for Internet of Things (IoT) security.

Embedded within the report is a draft “Code of Practice for Security in Consumer IoT Products and Associated Services”, which I authored in collaboration with DCMS and with input and feedback from various parties including the ICO and the NCSC.

I have been a passionate advocate of strong product security since I worked at Panasonic and established the product security function in their mobile phone division, through to the mobile recommendations body OMTP, where, as an industry, we established the basis of hardware security and trust for future devices. We’re certainly winning in the mobile space – devices are significantly harder to breach, despite being under constant attack. This isn’t because of one single thing; it is multiple aspects of security built on the experiences of previous platforms and products. As technologies have matured, we’ve been able to implement things like software updates more easily and to establish what good looks like. Other aspects, such as learning how to interact with security researchers or the best architectures for separating computing processes, have also been learned over time.

Carrying over product security fundamentals into IoT

This isn’t the case, however, for IoT products and services. It feels in some cases like we’re stepping back 20 years. Frustratingly for those of us who’ve been through the painful years, the solutions to many of the problems seen in modern, hacked IoT devices already exist in the mobile device world. They just haven’t been implemented in IoT. This also applies to the surrounding ecosystem of applications and services for IoT. Time and again, we’re seeing developer mistakes that are entirely avoidable, such as a lack of certificate validation in mobile applications for IoT.
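
To make that mistake concrete, here’s roughly what it looks like in Python using the requests library (the vendor URL is a placeholder). The broken version explicitly switches validation off; the fix is usually just to leave the secure default alone:

```python
import requests

# The avoidable mistake: disabling certificate validation means the app
# will happily talk to anyone who can intercept the connection.
insecure = requests.get("https://example.com/api/device/status", verify=False)

# The fix: do nothing special. requests validates server certificates by
# default and raises an SSLError if the certificate can't be trusted.
secure = requests.get("https://example.com/api/device/status")
```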

There is nothing truly ground-breaking within the Code of Practice. It isn’t difficult to implement many of the measures, but what we’re saying is that enough is enough. It is time to start putting houses in order, because we just can’t tolerate bad practice any more. For too long, vendors have been shipping products which are fundamentally insecure because no attention has been paid to security design. We have a choice. We can either have a lowest-common-denominator approach to security, or we can say “this is the bar and you must at least have these basics in place”. In 2018 it simply isn’t acceptable to have things like default passwords and open ports. This is how stuff like Mirai happens. The guidance addresses those issues and, had it been in place, the huge impact of Mirai would simply not have occurred. Now is the time to act, before the situation gets worse and people get physically hurt. The prioritisation of the guidance was something we discussed at length. The top three – eliminating the practice of default passwords, providing security researchers with a way to disclose vulnerabilities, and keeping software updated – were chosen because addressing these elements as a priority will have a huge beneficial impact on overall cyber security, creating a much more secure environment for consumers.

We’re not alone in saying this. Multiple governments and organisations around the world are concerned about IoT security and are publishing security recommendations to help. This includes the US’s NIST, Europe’s ENISA and organisations such as the GSMA and the IoT Security Foundation. I maintain a living list of IoT security guidance from around the world on this blog.

So, in order to make things more secure and ultimately safer (because a lot of IoT is already potentially life-impacting), it’s time to step things up and get better. Many parts of the IoT supply chain are already doing a huge amount on security, and those organisations are likely already meeting the guidance in the Code of Practice, but it is evident that a large number of products are failing even on the basics.

Insecurity Canaries

Measuring security is always difficult. This is why we decided to create an outcomes-based approach. What we want is for retailers and other parts of the supply chain to be able to easily identify what bad looks like. Some of the basic things – like eliminating default passwords, or setting up ways for security researchers to make contact about vulnerabilities – can probably be seen as insecurity canaries: if the basics aren’t in place, what about the more complex elements that are more difficult to see or to inspect?

Another reason to focus on outcomes was that we were very keen to avoid stifling creativity when it came to security solutions, so we’ve avoided being prescriptive other than to describe best practice approaches or where bad practices need to be eliminated.

The Future

I am looking forward to developing the work further based on the feedback from the informal consultation on the Code of Practice. I support the various standards and recommendations mapping exercises going on which will fundamentally make compliance a lot easier for companies around the world. I am proud to have worked with such a forward-thinking team on this project and look forward to contributing further in the future.

Additional Resources

I’ve also written about how the Code of Practice would have prevented major attacks on IoT:

Need to know where to go to find out about IoT security recommendations and standards?

Here are a couple more things I’ve written on the subject of IoT security:

Mobile World Congress 2018

View of the new Fira when flying in to Barcelona

This week it is Mobile World Congress, the biggest event in the mobile industry calendar. If you’re interested in meeting for a chat or just hearing about mobile and IoT security & privacy, I’ll be at the following places!

Sunday 25th February
6th GSMA IoT Summit
13:00-17:30
NH Collection Barcelona Tower Hotel

Copper Horse annual security dinner at a secret location in Barcelona
21:00-late (tweet me or message if you want to come along)

Monday 26th February
4YFN – “Hidden Threats and Opportunities to my Business”
Panelist: “Spotlight – How Data and Cyber Security can make or break a new business?”
16:15-17:15
4YFN (at the old Fira), Fira Barcelona Montjuïc, Av. Reina Maria Cristina

Tuesday 27th February
IoT Tuesday, hosted by Cellusys, supported by JT Group and the IoT Security Foundation
17:00-late Cellusys event – I’ll be giving an opening talk on behalf of the IoT Security Foundation, which will be: “The Ticking Clock”: why security in IoT is critical to how you run your business.
Tweet me if you want to attend.

Wednesday 28th February
16:30-17:30
Why Should we Trust your Digital Security?
I’ll be having a fireside chat in this session with Jean Gonie of VEON: Data, Consumer Protection and the GDPR
Auditorium 3, Hall 4 (on-site at MWC)

I’ll be at a few other events and will generally be around and about the MWC main site all week so please feel free to get in contact. Speaking of Barcelona, we’re holding our next training, “Foundations of IoT Security” in May in the city. More details and sign-up can be found on the IoTSF website.

 

The Real World is Not Real: Manipulation of Data and People in the Future Internet

On the 24th of January 2018 I gave my professorial lecture at York St John University in the UK. I chose the subject of fakeness: the manipulation of data by attackers and its potential impact on the people and machines that trust it. The transcript is below. I’ve added in additional links so you can see sources and informative links for further reading. I hope you enjoy the read; feel free to leave comments or tweet me.

The Real World is Not Real: Manipulation of Data and People in the Future Internet
David Rogers

Introduction

Thank you all for coming today to my inaugural lecture. I am honoured and humbled both to be in this position and that you are here today to listen. I’m also particularly grateful to Dr. Justin McKeown for engaging me in the first place with such a great university and for our Vice Chancellor Karen Stanton’s support.

We have been on a steady path towards more advanced artificial intelligence for a number of years now. Bots, in the software sense, have been around for a long time. They’ve been used in everything from online gaming to share dealing. The AI in computer chess games has easily been able to beat most players for many years. We still, however, have a long way to go towards full sentience, and we don’t even fully understand what that is yet.

In the past couple of years we have seen the widespread use of both automation and rudimentary AIs in order to manipulate people, particularly it seems in elections.

Manipulation of people

I hate to use the term fake news, but it has taken its place in world parlance. The United States Senate is currently investigating the use of trolls by Russia in order to manipulate the course of the 2016 presidential election. Through the use of Twitter accounts and Facebook advertising, a concerted attempt was made to influence opinion.

Source: https://www.huffingtonpost.co.uk/entry/russian-trolls-fake-news_us_58dde6bae4b08194e3b8d5c4

Investigative journalist Carole Cadwalladr published a report in May 2017 entitled “The great British Brexit robbery: how our democracy was hijacked”, which is an interesting read. While I should note that the report is the subject of legal complaints by Cambridge Analytica, there are certainly interesting questions to be asked as to why so much of the Leave campaigns’ money was ploughed into online targeting. The ICO is currently investigating how voters’ personal data is captured and used in political campaigns.

Whatever the case, it seems that humans can easily be influenced and there are many academic papers on this subject.

It also seems that some technology companies have been unwittingly duped into undermining free and fair elections – and have actually profited from it – through the manipulation of their targeted advertising aimed at certain segments of populations. This represents a significant threat to democracy and is still going on right now where elections are taking place.

Fake news is not a new thing. In one example from the Second World War, the Belgian resistance created “Le Faux Soir” to replace the newspaper “Le Soir”, distributing it slightly before the normal deliveries arrived. The fake Soir carried lots of stories about how badly the war was going for the Germans, and nearly all of the copies were taken before anyone realised. Le Soir’s modern version has been attacked since then, by IS supporters in 2015, although like many hacks on media sites it was more a form of defacement.

Source: https://jonmilitaria44.skyrock.com/2815465118-Le-FAUX-Soir.html

What I’m really interested in at the moment is the next stage of all of this; the manipulation of data and therefore machines that use or rely on it.

Disconnecting real world objects and modification of real world things

The Internet of Things, or IoT, is something that I take a particular interest in, and I have spent a lot of time looking at different aspects of its security. We’ve had lots of interesting attacks on IoT and the so-called cyber-physical world, on everything from children’s toys to power plants.

Whilst some attacks are entirely logical and we generally threat model for them, the increasing connectedness of the world has meant that “lone wolf” attackers have the potential to amplify their attacks in ways they could not before. Some people used to ask me in talks why anyone would want to do things like attack remote insulin pumps or pacemakers. I would respond with the question, “why do people put glass in baby food?”

The tampering with and contamination of food has happened on a regular basis. The psychology behind such attacks is surely one of the powerless gaining power; in those cases the aim was often to cause a product recall, financial harm and embarrassment to a disgruntled employee’s company or, in the biggest case in the UK in 1989, to extort money from the manufacturer.

In the Internet of Things, you often hear about these types of threats in terms of catastrophic impact, for example “stopping all the cars on a motorway”, “all the diabetics in the US”, “anyone with a pacemaker” or “poisoning water supplies”. Threats are made up of a combination of Intent, Capability and Opportunity. Without any one of the three, an attack is unlikely to be successful. One side effect of the IoT is that it is giving a new breed of attackers opportunity.

Connecting my house to the internet gives someone in another country the theoretical ability to watch webcams in my house – if you don’t believe me, have a look at a tool called Shodan (the capability) and search for webcams (your intent).
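
As a minimal sketch of quite how low that bar is, Shodan’s official Python library reduces the whole search to a few lines (the API key below is a placeholder – Shodan requires registration):

```python
import shodan  # pip install shodan

api = shodan.Shodan("YOUR_API_KEY")  # placeholder key

# The tool is the capability; the query expresses the intent.
results = api.search("webcam")
print(f"{results['total']} exposed devices indexed")
for match in results["matches"][:5]:
    print(match["ip_str"], match.get("org") or "unknown org")
```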

In January 2018, Munir Mohammed and his pharmacist partner were convicted of planning a terrorist attack. Of some concern, it was also reported that he had been researching ricin, and that he had a job in a food factory making sauces for ready meals for Tesco and Morrisons. The media strongly implied a link with his work there, although there was no evidence to that effect. It does raise a frightening prospect, however: the potential for an intent of a different sort – not the traditional disgruntled employee wishing to taint food and scare parents, but a motivation born of radicalisation. An attacker of this nature is eminently more dangerous than the disgruntled employee.

The employee intends to harm the employer rather than a child (in most cases), whereas the terrorist actually intends to kill the people who consume the product, through pure hatred (in the case of ISIS). It is therefore entirely conceivable that this is a high-impact but low-likelihood risk that needs to be considered in the Internet of Things.

There have been incidents pointing towards potentially catastrophic future events. In 2016, the US mobile network operator Verizon reported that a hacktivist group linked to Syria had managed to change the valves which controlled the levels of chemicals in tap water, releasing dangerous levels of those chemicals into the water supply. The scary part of this is that anyone with an IP connection, in any part of the world, with the intent, capability and opportunity could theoretically execute this attack on a number of connected water treatment systems. So it becomes a different numbers game – the number of potential attackers rises dramatically (many with different motivations), as does the amount of system exposure.

Many of these industrial systems run on very old equipment which is not segregated and can easily be reached via the internet. There are other examples too – in 2014, the German Federal Office of Information Security published details of a hack which caused serious damage to a blast furnace in a steel mill, after a worker was compromised via a phishing email.

What I’m trying to say here is that people with ill intentions, whether nation states or terrorists, could attack – and in some cases are attempting to attack – real-world infrastructure, using our connectedness and technology to achieve their aims.

Inventing the real world

Most of the attacks we have talked about involve the targeting and modification of real-world things via the internet – from the inside out. But what about from the outside in? What if the real world wasn’t real, as presented to humans and other systems?

A couple of weeks ago, Oobah Butler posted his article on how he managed to make his fake restaurant “The Shed At Dulwich” the most popular restaurant in London. He had the idea after previously being paid £10 for each positive comment he gave real restaurants, transforming their fortunes. Eventually Oobah decided to open the shed for real for one night, hiring in actors and serving up microwave meals to the guests – some of whom tried to book again! The restaurant has been deleted from Tripadvisor now, but I managed to get a cached version of it, as you can see.

The Shed at Dulwich (Google Cache)

Fake reviews are of course extremely common on the internet. A whole industry of “click farms” in low-income countries has grown up, both to generate clicks for advertising revenue and to provide 5-star reviews and comments on products, applications and services across the internet. These fake clicks have been provided by software “bots” and by virtual machine emulations of smartphones. As detection gets better, humans are engaged in this activity too, including providing text comments on blogs and reviews.

Source: https://www.mirror.co.uk/news/world-news/bizarre-click-farm-10000-phones-10419403

Major retail companies are employing AI “chatbots” to respond to Twitter or Facebook complaints, with customers engaging with them and becoming enraged by their responses when they get it wrong, not realising that they’re not talking to a human being. In the case of Microsoft’s machine learning chatbot Tay, it made racist and Nazi comments in less than a day, based on things that were said to it and the dataset it was using for training.

You can see that our impression of the real world is already being significantly distorted by automation controlled by people with less-than-fair intentions. But what about truly faking real-world data in order to make humans take action – or fail to take action when they need to?

Faking real world data

I will give you a simple example which I show to some of my IoT security students. There is a popular TV and movie trope about spoofing the information from CCTV cameras.

Tvtropes.org talks about two basic examples: “the Polaroid punk”, where a picture is taken and put over the camera while the hero goes about his business (you may remember this from the Vatican scene in “Mission: Impossible III”), and “the splice and dice”, looping camera footage (which you may remember from the film “Speed”).

In my version of this trope, instead of doing this with CCTV imagery, we use the same trick against IoT data going back to a console. The specific example I chose was connected agriculture. There are some very expensive crops, such as almonds or pistachios, which will fail if starved of water; drought can even kill the trees. It can take between 5 and 12 years for new trees to become productive, so irrigation has to be managed carefully.

As farms increasingly rely on the Internet of Things to provide accurate sensing and measurement, as well as control, it is entirely conceivable that this could be attacked for a variety of reasons. An attack could involve quietly monitoring the data being sent by sensors back to the farm, then stealthily taking over that reporting with fake data. Once the hijack has taken place, the attacker could reduce the irrigation or stop it entirely, as long as the farmer is not aware. With data being sent back that looks real, it could be a long time before the attack is discovered and it may never be detected. It may not even need to take place at the sensor end, it may be enough to hijack the reporting console or IoT hub where the data is aggregated.
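
As a toy sketch of why such a hijack is hard to spot (all numbers invented for illustration): if the attacker replays the observed distribution of readings with a little jitter, a naive plausibility check at the console will pass the spoofed feed.

```python
import random
import statistics

# A week of hourly soil-moisture readings (percent), as an attacker passively
# observing the sensor link might record them.
genuine = [34 + random.gauss(0, 1.5) for _ in range(168)]

def fake_reading(history):
    """Replay a previously observed value with a little jitter, keeping the
    spoofed feed statistically close to the real one."""
    return random.choice(history) + random.gauss(0, 0.5)

spoofed = [fake_reading(genuine) for _ in range(168)]

# A naive console-side sanity check compares mean and spread - and the
# spoofed feed passes, which is precisely the point of the attack.
for label, series in (("genuine", genuine), ("spoofed", spoofed)):
    print(label, round(statistics.mean(series), 1), round(statistics.stdev(series), 1))
```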

With thousands of acres of land to manage, there is an increasing reliance on the screen of reported data in the office, rather than direct inspection on the ground to corroborate that the data is indeed real. Food security is a real issue, and it is well within the means of nation states to execute such attacks against other countries – or for the attack to be targeted in order to manipulate the market for almonds.

Of course this is just one scenario, but the integrity of eAgriculture is an example of something which can be directly linked to a nation’s security.

In the military and intelligence domain, countries around the world are taking a more active and forthright approach to cyber security, rather than just defending against attacks. This is also known as “offensive cyber”. In December 2017, AFP reported that Colonel Robert Ryan of a Hawaii-based US Cybercom combat team had said the cyber troops’ mission is slightly different to the army’s traditional one: “Not everything is destroy. How can I influence by non-kinetic means? How can I reach up and create confusion and gain control?”. The New York Times had previously reported that Cybercom had been able to imitate or alter Islamic State commanders’ messages so that they directed fighters to places where they could be hit by air strikes.

Just yesterday, the UK’s Chief of the General Staff, Nick Carter, said that the UK needs to keep up with unorthodox, hybrid warfare encompassing cyber attacks.

Of the attacks that have been attributed to nation states, many have targeted civilian infrastructure – some of them sophisticated, others not so.

A colleague at Oxford University, Ivan Martinovic, has written about the issues with air traffic control data being modified in different types of systems. Many of these systems were created many years ago, and the authors of a joint 2016 paper, “Perception and Reality in Wireless Air Traffic Communications Security”, describe the threat model as “comparatively naïve”. In the same paper, they asked both pilots and air traffic controllers about their positions in hypothetical scenarios. 10.75% of the pilots surveyed said they didn’t know what the effect would be if wrong label indications showed up on their in-flight Traffic Collision Avoidance System screens.

These systems are designed to prevent mid-air collisions. 83.3% of air traffic controllers surveyed said there would be a major loss of situational awareness if information or whole targets were selectively missing from an air traffic control radar screen. It is not difficult to imagine the impact of removing planes from either pilot or air traffic controller screens – or the alternative, which is to flood them with data that doesn’t exist – particularly in a busy area like Heathrow and during bad weather. The panic and loss of awareness in a situation like that could cause very severe events. Other such attacks have been theorised against similar systems for ships at sea.

The future danger we face from this kind of underlying vulnerability is that the “person” at the computer or controls will not be a person at all; it will be another computer.

Disconnection from the matrix

Maybe all it takes is an attack to disconnect us from the matrix entirely.

In December 2017, a number of warnings were aired in the media and at events that the Russians may try to digitally starve the UK by cutting the fibre optic cables that connect us to the rest of the world. This threat is not unique to us; all the countries of the world rely on the interconnectedness of the internet. Some landlocked countries are really at the mercy of their neighbours when it comes to internet connections, and connectivity can be used as a political weapon in the same way that access to water and the building of dams has been for a number of years.

The Battle of the Atlantic in 1941 was all about cutting off supplies to Britain through U-boat warfare.

Telecommunications cable-cutting has long been a feature of warfare, though traditionally for different reasons. Barbara Tuchman’s book, The Zimmermann Telegram, explains that in 1914, on the eve of the Great War, sealed orders were opened at Cable & Wireless directing them to cut and remove a portion of submarine cable in the English Channel. This forced the Germans to use insecure wireless for their messaging.

The situation for submarine cables today is different. Whole economies are dependent on the resilience of cables a few inches wide. The original UK National Cyber Security Strategy in 2011 stated that around 6% of UK GDP was generated by the internet, but this probably does not capture the true nature of how human beings live their lives today: the downstream reliance on everything from purchasing goods and services to supply chain management, communication and transportation, as well as increasing government services for welfare and taxation; the list is almost endless. Each and every one of these touch points with the internet has an onward real-world effect.

The back-ends of many of these systems are cloud computing services, many of which are hosted in other countries, with very little domestic UK infrastructure to support going it alone. Our reliance is built on a globalised world and it is increasing, just as the world’s powers shift to a more parochial, protectionist approach.

The concept of “digital sovereignty” is something that governments around the world are happy to promote because it makes them feel that they’re more in control. Indeed, the Russians themselves have stress-tested their own networks to check their preparedness and resilience. They failed their own tests, setting themselves on a path to hosting everything in their own country and further balkanising the internet. Russia’s foreign policy in this respect is clear: Foreign Minister Sergey Lavrov has repeatedly stated that Russia wishes to see “control” of the internet through the UN. The country has a long-held paranoia about perceived western control of the internet and wishes to redress the balance of power.

Economic data, the 24-hour news cycle and market shocks

 

Source: https://www.telegraph.co.uk/finance/markets/10013768/Bogus-AP-tweet-about-explosion-at-the-White-House-wipes-billions-off-US-markets.html

In April 2013, the Associated Press’s hacked Twitter account caused a brief, massive stock plunge, apparently amounting to 90 billion pounds, when it tweeted: “Breaking: Two Explosions in the White House and Barack Obama is injured”.

The situation was dealt with swiftly and the market recovered as the White House immediately clarified matters, but the episode demonstrates the influence a trusted news source has and the damage that can follow when it is hacked. After the event, AP staff admitted that there had been a targeted phishing campaign against them, but it is unclear who the attacker was. Other news organisations have been regularly targeted in similar ways.

Many automated systems rely on Twitter as a source of “sentiment”, which can be fed into machine learning algorithms for stock market forecasting. As such, much of the sell-off may have been automated, particularly as the “news polarity” of the tweet would have been negative.
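As a rough illustration of how such a pipeline might work, here is a minimal sketch of lexicon-based polarity scoring feeding a trading signal. It is not any real trading system; the word lists, threshold and function names are all invented for illustration:

```python
# Minimal sketch of a naive headline-sentiment trading signal.
# The word lists, weights and threshold are invented for illustration;
# real systems use trained models and far richer features.

NEGATIVE = {"explosion", "explosions", "injured", "attack", "crash", "war"}
POSITIVE = {"growth", "record", "deal", "recovery", "profit"}

def polarity(headline: str) -> int:
    """Score a headline: +1 per positive word, -1 per negative word."""
    words = headline.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def signal(headline: str, threshold: int = -1) -> str:
    """Derive a crude trading action from the polarity score."""
    if polarity(headline) <= threshold:
        return "SELL"  # negative news polarity triggers an automated sell
    return "HOLD"

print(signal("Breaking: Two Explosions in the White House and Barack Obama is injured"))
# -> SELL: one trusted-but-hacked source can move automated money
```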

An interesting paper on the subject of news headlines and their relative sentiment was published in March 2015 by the Federal University of Minas Gerais in Brazil and the Qatar Computing Research Institute. It gives a good description of how the headlines of breaking news article tweets can be scored according to their news polarity.

Source: https://arxiv.org/pdf/1503.07921.pdf

As an aside, they showed that around 65% of Daily Mail articles were negative, but I guess you knew that anyway!

Source: https://arxiv.org/pdf/1503.07921.pdf

Amplification of events by stock market algorithms has been seen elsewhere. The 2010 “Flash Crash” caused 500 million dollars of losses in around half an hour. Part of the reason for this was the trading of exchange-traded funds (ETFs), which form baskets of other assets; the value of the ETFs became disconnected from the underlying value of those assets.

Navinder Singh Sarao, also known as the “Hound of Hounslow”, modified automated software to enable him to make and cancel trades at high frequency, driving prices in a particular direction. The US Justice Department claimed he made 26 million pounds over five years. There was another flash crash in August 2015 caused by such high-frequency trades. By this time, “circuit breakers” had been installed; they halted trading over 1,200 times in a single day.
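The mechanics of a circuit breaker are easy to sketch. This toy version is not any exchange’s actual rulebook; the 5% band and the prices are invented for illustration:

```python
# Toy sketch of a price-band "circuit breaker": halt trading when the
# price moves too far from a reference price. The 5% band and the
# prices are invented; real exchanges publish tiered limit-up/
# limit-down bands and formal halt durations.

def should_halt(reference_price: float, last_trade: float,
                band: float = 0.05) -> bool:
    """Return True when the move from the reference exceeds the band."""
    move = abs(last_trade - reference_price) / reference_price
    return move >= band

# A spoofing-driven cascade of sells trips the breaker:
reference = 100.0
for price in (99.0, 96.5, 94.8, 93.0):
    if should_halt(reference, price):
        print(f"HALT at {price}: move exceeds the band")
        break
```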

The recent “This is not a drill” missile alert in Hawaii, accidentally sent by an employee, was quickly retracted, but not before many people had panicked. That event happened on a Saturday; it would likely have had some impact on the financial markets had it occurred during the week. It is not difficult to imagine the havoc that could be caused if such a system were compromised through a cyber attack. Indeed, it is likely that this potential has been noted by North Korea.

Source: Twitter

Machine-led attacks

In 2016, at DEFCON in Las Vegas, the world’s biggest hacking conference, I was a witness to history. The final of the DARPA Cyber Grand Challenge, the world’s first all-machine hacking tournament, took place using competing teams of supercomputers. Normally this kind of tournament is played by competing humans attacking and defending systems. The key point is that this represented a moment at which everything changed. For years we have seen the evolution of attacks with some level of machine assistance, but this was a whole new ball game: the speed at which situations changed was beyond the rate at which humans could keep up, bearing in mind that these systems had not seen any of the attacks before.

In one case, a vulnerability known as “Heartbleed” was exploited to attack one of the Cyber Grand Challenge supercomputers. It was found, patched and used in an attack against the other machines within a few minutes. All of this was possible because of the automation, machine learning and artificial intelligence built into each computer.

There is a defence aspect to this too. In the future, we need to consider how we best utilise “the machines” for defensive purposes. The technology available to us now and in the future will enable us to use machine learning at the edge of a network to provide intelligent intrusion prevention and security monitoring systems. Our attacks will largely be machine-led, which in a crude way already happens now. One can imagine agents at the edge self-healing in a different way – healing from security attacks and then getting that fix back out to other elements of the network.
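As a crude sketch of what machine learning at the edge might look like for intrusion monitoring, the following learns a statistical baseline of one traffic feature and flags strong deviations. The feature, the figures and the 3-sigma threshold are all chosen for illustration; real systems use far richer models:

```python
# Crude sketch of statistical anomaly detection at the network edge.
# Learn the spread of one traffic feature (bytes per flow) over a
# training window, then flag flows that deviate strongly. The
# 3-sigma threshold is a rule of thumb chosen for illustration.

import statistics

class EdgeAnomalyDetector:
    def __init__(self, threshold_sigmas: float = 3.0):
        self.baseline = []
        self.threshold = threshold_sigmas

    def train(self, observed_flows):
        self.baseline = list(observed_flows)

    def is_anomalous(self, flow_bytes: float) -> bool:
        mean = statistics.mean(self.baseline)
        stdev = statistics.stdev(self.baseline)
        return abs(flow_bytes - mean) > self.threshold * stdev

detector = EdgeAnomalyDetector()
detector.train([1200, 1350, 1100, 1280, 1220])  # normal flow sizes seen locally
print(detector.is_anomalous(250_000))           # True: flag, block, share the fix
```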

Team Shellphish, who finished in third place, even open-sourced their code so that other people could develop it further.

The problem for all of us is the constraints on such AIs. If we delegate the ability to respond offensively to agents that can react more quickly than we can, what bounds do we put around that? Does the adversary play by the same ruleset? What if they break the rules? Research using AI to learn different types of games has often put rules in place to make the AI follow the rules of the game rather than cheat (for example with DeepMind), but an AI could equally be programmed to break as many rules as possible in order to succeed – for example, pulling information from the underlying operating system in order to win the game.

Then we have AI versus humans. In some respects humans have the advantage – for example, we often retain the ability to disconnect: to remove power or to disengage from a computer system. The AI has physical bounds, but those may be protected from attack by its designer.

As has been seen in games that have been learned by existing, adaptable artificial intelligence, once the rules are fully understood, AIs can easily beat human players.

In September 2017, Russian President Vladimir Putin stated in a speech: “Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”

Elon Musk, the head of Tesla and among a growing number of people concerned about the rise of AI, particularly its military uses, tweeted in response: “Competition for AI superiority at national level most likely cause of WW3 imo”.

A letter sent to the UN in August 2017, signed by the founders of 116 AI and robotics companies across the world, called for a ban on autonomous weapons, which would “permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend”. It is a sad fact that an arms race is already underway, with some autonomous weapons already deployed.

Artificial intelligence will be employed in many different domains, not just the military and cyber security.

AI in economics

In economics, artificial intelligence offers humanity some genuine positives: the complexities of particular problems can be addressed by machines. But all of this relies on good and valid input data. It also relies on the political viewpoint that is programmed into such systems. Transparency in such AIs is going to be crucial when it comes to the implementation of policy.

The use of computers for economic modelling has a long history. If you’re interested in this topic, the documentary maker Adam Curtis has made some excellent programmes about it, including the series Pandora’s Box. Most of these attempts have failed – the Soviet, paper-based planned system ended up with ludicrous situations such as trains full of goods going all the way to Vladivostok only to be turned around and sent back to Moscow. In the United States, science- and data-based solutions were applied, via the RAND Corporation, to national problems such as the risk of nuclear war, using von Neumann’s game theory and Wohlstetter’s “fail-safe” strategies, but they didn’t always work. Defence Secretary Robert McNamara’s reliance on data coming back from the ground in the Vietnam War undoubtedly led to bad decisions because of the poor quality of that data. The same is true today: datasets are often unreliable and inaccurate, and much more so if they have been deliberately altered.

Revolutions will need to take place in the way data is gathered before government systems are ever accurate and reliable, and getting there could also amount to a massive breach of citizen privacy. However, some corporations may already be sitting on much of the data. In 2008, Google announced in a paper in the journal Nature that they were able to “nowcast”, or predict, flu outbreaks based on searches for symptoms and medical products, producing accurate estimates two weeks earlier than the US Centers for Disease Control and Prevention. Google Flu Trends ran for a few years but was shown to fail in a big way in 2013. Corporate systems that operate on such sets of “big data” are often opaque, which can lead to deliberate bias or inaccurate outcomes. In Google’s case, their AI has also recently been found to produce racist results in picture identification.
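The underlying idea of nowcasting is straightforward to sketch: fit a model mapping search-term frequencies to officially reported case counts, then apply it to this week’s searches. This toy version uses ordinary least squares; the query terms and figures are invented, and, as Google Flu Trends demonstrated, such models can drift badly when search behaviour changes:

```python
# Toy "nowcasting" sketch: regress reported flu cases on search-term
# frequencies, then estimate current cases from current searches.
# All query terms and numbers are invented for illustration.

import numpy as np

# Rows: past weeks. Columns: normalised search volume for
# ["fever remedy", "flu symptoms", "cough medicine"].
searches = np.array([
    [0.2, 0.1, 0.3],
    [0.5, 0.4, 0.6],
    [0.8, 0.7, 0.9],
    [0.3, 0.2, 0.4],
])
reported_cases = np.array([120, 340, 610, 180])  # official ground truth

# Ordinary least squares fit: cases ~ searches @ weights
weights, *_ = np.linalg.lstsq(searches, reported_cases, rcond=None)

this_week = np.array([0.7, 0.6, 0.8])
print(f"Nowcast: ~{this_week @ weights:.0f} cases")
# The model is only as good as its inputs: if search behaviour shifts
# (media panic, autosuggest changes), the nowcast silently degrades.
```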

The UK’s Office for National Statistics operates under the Code of Practice for Official Statistics (currently under revision), with which official statistics must comply. This is consistent with the UN’s Fundamental Principles of Official Statistics and the European Statistics Code of Practice. It is designed so that the statistics produced are honest and cannot be manipulated to suit a particular government’s views or objectives.

In the future, governments will need to ensure that artificial intelligence-based systems operating on statistics to steer policy are compliant with such codes of practice, or that new codes are adopted.

What effect would the manipulation of national statistics have? If governments were making decisions based on falsified or modified data, it could have a profound financial, economic and human impact. At the moment, the safeguards encoded in the Code of Practice rely on the integrity of individuals and their impartiality and objectivity.

As we increasingly rely on computers to make those recommendations, what assurance do we have that they operate in line with expectations and that they have not been compromised? In theory, a long-term adversary could create an entirely different national picture by slightly skewing the input data to national statistics in particular areas, so that they bear no relation to the real world.
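A tiny sketch shows how little an adversary would need to do. A persistent bias of a fraction of a percent per reporting period, applied before aggregation, compounds into a materially different national picture. The figures here are invented for illustration:

```python
# Toy sketch: a small persistent skew applied to input data before
# aggregation compounds into a very different "national statistic".
# All figures are invented for illustration.

true_monthly_values = [100.0] * 36      # three years of genuinely flat data
skew_per_month = 0.004                  # 0.4% quietly added each period

skewed = [value * (1 + skew_per_month) ** month
          for month, value in enumerate(true_monthly_values, start=1)]

print(f"True final value:   {true_monthly_values[-1]:.0f}")
print(f"Skewed final value: {skewed[-1]:.0f}")  # ~115 after three years
# A roughly 15% divergence, built from changes far too small to
# notice in any single reporting period.
```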

When algorithms go wrong

AIs will co-exist with other AIs and this could generate some disturbing results.

During Hurricane Irma in September 2017, automatic pricing algorithms for airline tickets caused prices to rise in line with demand, resulting in exorbitant prices for evacuating residents. Such “yield management systems” are not designed to handle such circumstances and have no concept of ethics or situational awareness. Airlines were forced to backtrack and manually cap prices after a Twitter backlash.

Source: Twitter @LeighDow
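A demand-based pricing rule, and the kind of manual cap the airlines had to bolt on afterwards, can be sketched in a few lines. The fares, multiplier and cap below are invented for illustration:

```python
# Toy sketch of demand-based "yield management" pricing, plus the
# manual cap that had to be bolted on afterwards. The base fare,
# multiplier and cap are invented for illustration.

def demand_price(base_fare, seats_sold, seats_total, cap=None):
    """Raise the fare steeply as the plane fills; optionally cap it."""
    load_factor = seats_sold / seats_total
    fare = base_fare * (1 + 4 * load_factor ** 2)  # steepest near sell-out
    return min(fare, cap) if cap is not None else fare

# Evacuation demand: the plane is nearly full, and the algorithm does
# not know, or care, why.
print(demand_price(200.0, 195, 200))             # ~960.5 - exorbitant
print(demand_price(200.0, 195, 200, cap=299.0))  # a human-imposed ceiling
```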

In 2011, two third-party Amazon merchants ended up in an automated pricing war over a biology book because of competing algorithms. The price reached over 23.5 million dollars for a book that was worth about 100 dollars. Where multiple interacting algorithms each rely on the others’ output data, the end results may be non-deterministic, and they may be catastrophic where real-world, physical effects follow from the failures. Humanity could be put in a position where we cannot control the pace of these combined events, and widespread disruption and destruction could take place in the real world.

Source: https://www.michaeleisen.org/blog/?p=358
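Michael Eisen, who documented the incident at the link above, inferred the two sellers’ rules: one repriced its copy at 0.9983 times its rival’s price, the other at 1.270589 times. A few lines reproduce the runaway; only the starting prices are invented:

```python
# Reproducing the runaway from the two repricing rules Eisen inferred:
# one seller undercut at 0.9983x its rival, the other marked up at
# 1.270589x. Only the starting prices are invented.

price_a = 40.00   # undercutter: reprices to 0.9983 * price_b
price_b = 50.00   # marker-up:   reprices to 1.270589 * price_a

cycles = 0
while price_b < 23_500_000:
    price_a = 0.9983 * price_b
    price_b = 1.270589 * price_a
    cycles += 1

print(f"After {cycles} repricing cycles: ${price_b:,.2f}")
# Each cycle multiplies prices by ~1.268 - exponential growth that no
# human reviewed until the book was listed at over $23.5 million.
```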

The human rebellion

As machines start to take over complex functions such as driving, we have already started to see humans rebel.

Artist James Bridle demonstrated this by drawing an unbroken circle within a broken one, representing the kind of line a car may not cross. This created a situation that a self-driving car could theoretically enter but not leave – a trap. That, of course, was art, but researchers at a group of US universities worked out how to trick sign-recognition algorithms simply by putting stickers on stop signs.

Source: https://www.autoblog.com/2017/08/04/self-driving-car-sign-hack-stickers/
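The underlying technique, the adversarial example, can be sketched against a toy model. This is the gradient-sign idea in miniature, using a made-up linear classifier rather than a real sign recogniser; against deep networks the principle is the same, with the perturbation constrained to look like stickers:

```python
# Toy sketch of an adversarial perturbation (the fast gradient sign
# method) against a made-up linear classifier. Real attacks on sign
# recognisers follow the same principle against deep networks, with
# the perturbation constrained to look like stickers.

import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=64)      # classifier weights ("stop sign" score)
x = rng.normal(size=64)      # a flattened input image

def score(image):
    return float(w @ image)  # higher means "more stop-sign-like"

# For a linear model, the gradient of the score w.r.t. the input is
# just w. Nudge every pixel slightly in the direction that lowers it.
epsilon = 0.2
x_adv = x - epsilon * np.sign(w)

print(f"clean score:       {score(x):+.2f}")
print(f"adversarial score: {score(x_adv):+.2f}")  # driven sharply down
# Each pixel changed by at most 0.2, yet the score drops by
# epsilon * sum(|w|) - a large, targeted shift from tiny edits.
```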

It is certainly likely that this real-world gaming of artificial intelligence is going to happen and, at least in the early days of AI, it will probably be easy to do.

Nudge and Dark Patterns

Governments around the world are particular fans of “nudge techniques”; imagine if they were able to design AI to be adaptable with nudge – or social engineering, as we call it in the security world. What I see with technology today is a lot of bad nudge: manipulation of smartphone or website users into behaviours that are not beneficial to them but are beneficial to the system provider. Such techniques are widespread and are known as “dark patterns”, a concept originally conceived by user experience expert Harry Brignull. So what is to stop these techniques being inserted into, or used against, artificial intelligence in order to game the user or to nudge the system in a different direction? Some of this used to be known as “opinion control”, and in countries like China the government is very keen to avoid situations like the Arab Spring, which it believes would bring chaos to the country. Maintaining stability and preventing free thinking and freedom of expression are generally the order of the day, and AI will assist in those aims.

The danger of the Digital Object Architecture

One solution that has been proposed to the problem of falsified data on the internet is to use something called the Digital Object Architecture. In theory this technology would allow for the identification of every single object and allow for full traceability and accountability. It is difficult to imagine, but a future object-based internet could make the existing internet look minuscule.

It is a technology being promoted by a specialised UN agency, the ITU, which is meeting at this moment to discuss it. The main proponents are authoritarian regimes such as China, Russia and Saudi Arabia, because, as I mentioned earlier, it brings “control” to the internet. This is clearly the polar opposite of freedom. It is, in theory, an extremely pervasive technology, and it is also largely untested.

Shortly before Robert Mugabe’s detention in Zimbabwe, there had been food shortages and concerns about the state of the currency, which had triggered panic buying and a currency crash. These issues were genuine, not false, but they certainly weren’t convenient for Zimbabwe or Mugabe. He reportedly announced at a bi-national meeting with South Africa that cyber technology had been abused to “undermine our economies”. The South African communications minister clarified that Mugabe intended to use the Digital Object Architecture, saying: “It helps us identify the bad guys. The Zimbabweans are interested in using this.”

So if we look at the countries that are fans of this technology, it seems to be about controlling messages and controlling truth. The Digital Object Architecture is being heavily promoted by countries that appear to fear their own people, as a way to maintain order and control and to repress free thinking and freedom of expression. This is quite a depressing situation: the technology itself is not necessarily the problem; it is the way it is used, who controls it, and the fact that it affords citizens no privacy or anonymity.

So we must seek other solutions that maintain the confidentiality, integrity and authenticity of data, in order to support widespread artificial intelligence use in economies, without the technology being abused to repress human individuality and freedom. I have to mention blockchain here. I have read a number of articles that attempt to paint blockchains as the perfect solution to some of the problems I’ve outlined, in relation to creating some kind of federated trust across sensors and IoT systems. The problems are very real, but I feel the use of blockchains to solve them creates a number of very different problems, including scalability, so they are by no means a panacea.

Conclusion

If we can’t trust governments around the world not to abuse their own citizens, and if those same governments are pushing ahead with artificial intelligence to retain and gain greater power, how can we possibly keep AI in check?

Are we inevitably on the path to the destruction of humanity? Even if governments choose to regulate and make international agreements over the use of such weaponry, there will always be states that choose to ignore them and let their artificial intelligence break the rules – a battle between a completely ruthless AI and one with a metaphorical hand tied behind its back.

In an age where attribution of cyber attacks is extremely difficult, how can we develop systems that prevent the gaming of automated systems through “false flag” techniques – adversaries masquerading as trusted friends whilst meddling in and influencing decision making? Attribution may become harder still, because the origins of systemic failures in economies are difficult to detect.

How do we manage the “balance of terror” (to use a nuclear age term) between nation states? What will happen when we cannot rely on anything in the world being real?

These are the problems that we need to address now as a collective global society, to retain confidence and trust in data and also the sensors and mechanisms that gather such data. There could be many possible algorithmic straws to break the camel’s back.

Artificial intelligence is here to stay, but ethical and humane cyber security and robust engineering discipline are part of keeping it sane. Maybe, just maybe, artificial intelligence will be used for the good of humanity and for solving the world’s problems, rather than for the last war we’ll ever have.

Thank you.

The future of humanity depends on us getting security right in the Internet of Things



There isn’t a day that goes by now without another Internet of Things (IoT) security story. The details are lurid, the attacks look new and the tech is, well, woeful. You would be forgiven for thinking that nobody is doing anything about security and that nothing can be done – it’s all broken.

What doesn’t usually reach the press is what has been happening in the background from a defensive security perspective. Some industries have been doing security increasingly well for a long time. The mobile industry has been under constant attack since the late 1990s. As mobile technology and its uses have advanced, so has the necessity of security invention and innovation. Some really useful techniques and methods have been developed which could and should be transferred into the IoT world to help defend against known and future attacks. My own company is running an Introduction to IoT Security training course for those of you who are interested. There is of course a lot of crossover between mobile and the rest of IoT. Much of the world’s IoT communications will transit mobile networks and many mobile applications and devices will interact with IoT networks, end-point devices and hubs. The devices themselves often have chips designed by the same companies and software which is often very similar.

The Internet of Things is developing at an incredible rate and there are many competing proprietary standards in different elements of systems and in different industries. It is extremely unlikely there is going to be one winner or one unified standard – and why should there be? It is perfectly possible for connected devices to communicate using the network and equipment that is right for that solution. It is true that as the market settles down some solutions will fall by the wayside and others will consolidate, but we’re really not at that stage yet and won’t be for some time. Quite honestly, many industries are still trying to work out what is actually meant by the Internet of Things and whether it is going to be beneficial to them or not. 

What does good look like?

What we do know is what we don’t want. Recent computing history holds many lessons, and we ignore and neglect security at our peril. The combined efforts and experiences of technology companies that spend time defending their product security, along with those of the security research community – so often painted as the bad guys, “the hackers” – have also significantly informed what good looks like. It is down to implementers to actually listen to this advice and make sure they follow it.

We know that opening the door to reports about vulnerabilities in technology products leads to fixes which bring about overall industry improvements in security. Respect on both sides has been gained through the use of Coordinated Vulnerability Disclosure (CVD) schemes by companies and now even across whole industries.

We know that regular software updates, whilst a pain to establish and maintain, are one of the best preventative and protective measures we can take against attackers: they shut the door on potential avenues for exploitation whilst shrinking the window of exposure to the point where it is not worth an attacker even beginning the research needed to create an attack.

Industry-driven recommendations and standards on IoT security have begun to emerge in the past five years. Not only that, the various bodies are interacting with one another and acting pragmatically: where a standard exists, there appears to be a willingness to endorse it and move on to areas that need fixing.

Spanning the verticals

There is a huge challenge that is unique to IoT: the diversity of uses for the various technologies and the huge number of disparate industries they span. The car industry has its own standards bodies and has to carefully consider safety aspects, as does the healthcare industry. These industries, and the government regulatory bodies related to them, all differ in their own ways. One unifying topic is security, and it is now critically important that we get it right across all industries. With every person in the world connected, the alternative – sitting back and hoping for the best – is to risk the future of humanity.

Links to recommendations on IoT security

To pick some highlights (full disclosure: I’m involved in the first two), the following bodies have created some excellent recommendations around IoT security and continue to do so:

IoT Security Foundation Best Practice Guidelines
GSMA IoT Security Guidelines
Industrial Internet Consortium 

The whole space is absolutely huge, but I should also mention the incredible work of the IETF (Internet Engineering Task Force) and 3GPP (the mobile standards body for 5G) to bring detailed bit-level standards to reality and ensure they are secure. Organisations like the NTIA (the US National Telecommunications and Information Administration), the DHS (the US Department of Homeland Security) and AIOTI (the EU Alliance for Internet of Things Innovation) have all been doing a great job helping to drive leadership on different elements of these topics.


I maintain a list of IoT security resources and recommendations on this post.