I could get that data quicker by carrier pigeon!

Regular readers of my tweets may have seen a couple of carrier pigeon ones. I don’t have any particular interest in carrier pigeons, but it is randomly interesting to see how data transfer can be done in other ways and how it compares with traditional online methods.

Image source: https://cuteoverload.com/2010/04/14/those-carrier-pigeons-just-get-smarter-and-smarter/

I was first inspired to think about this by South African Kevin Rolfe’s protest against slow download speeds from his company’s ISP in 2009. He flew a carrier pigeon carrying a 4GB memory stick, thus beating the equivalent download.

As memory gets cheaper and smaller, the amount of data you can physically carry is growing far faster than the speed of the average home broadband connection, particularly in rural areas. In some places in the UK, it is probably better to get a wireless 4G contract if coverage permits.

Anyway, back to pigeons. The Internet Engineering Task Force (IETF) has a couple of spoof RFCs which define a standard for IP over Avian Carriers (RFC 1149); the revised version adds Quality of Service too (RFC 2549).

Physical Payload

Calculating the data payload is relatively easy as you can see:

Payload of a carrier pigeon. This Reddit thread says 75g. Being unscientific as we are, we’ll go with that.

  • Weight of normal sized SD: 2 grams
  • Weight of microSD: 0.4g +/- 0.1g

So basically physically we’re talking:

  • 37 full sized SD cards (with 1/2 a full sized one to spare)
  • 187 microSDs (with 1/2 a microSD to spare)

I’m not sure exactly how these would be bundled up; I’ll leave that to a pigeon expert (which I am not).
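The card counts above are just integer division of the pigeon’s payload by each card’s weight. A quick sketch, using the blog’s rough figures (assumptions, not measured values):

```python
# Rough physical-payload sketch using the blog's approximate figures.
PIGEON_PAYLOAD_G = 75.0    # claimed carrier pigeon payload
SD_WEIGHT_G = 2.0          # full-sized SD card
MICROSD_WEIGHT_G = 0.4     # microSD card (+/- 0.1g)

sd_cards = int(PIGEON_PAYLOAD_G // SD_WEIGHT_G)            # 37 cards
microsd_cards = int(PIGEON_PAYLOAD_G // MICROSD_WEIGHT_G)  # 187 cards

print(sd_cards, microsd_cards)  # prints: 37 187
```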

Data Payload

This is changing on a regular basis as new SDs get released, but here are some examples:

Therefore: 187 microSDs x 2TB = 374TB Data Payload
 
Calculating Other Internet Speed Related Stats…

So let’s try to make some comparison to download speeds. I may not have got everything right here, so please feel free to correct me in the comments and I will revise the blog.

We need to work out how fast a pigeon can go. A racing pigeon can fly up to 400 miles at an average of 92.5mph apparently. I don’t like to reference the Daily Mail but here we go. As an interesting factoid, the fastest homing pigeon is allegedly the very expensive Bolt, who sold for £300,000 at auction.

Using another reliable source (Stack Overflow), let’s work out packet time from latency and bandwidth. Using the data payload example above:

Bandwidth = Payload (374TB)

Latency = Total Time (see below)

Flight time over 400 miles (at 92.5mph)

= 4hrs, 19mins, 27.56 seconds

= 14400 + 1140 + 27.56 = 15567.56 seconds

Throughput = Data / Time

= 374TB / 15567.56 seconds

= 3.74e+14 / 15567.56 = 24,024,317,234 bytes per second

= 24.024 GB per second

Converted to Gbps = 24.024 / 0.125 (i.e. multiplied by 8 bits per byte, according to this site)

= 192.192 Gbps speed (woo spooky homing pigeon IP joke in here somewhere!)
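The whole back-of-the-envelope calculation can be reproduced in a few lines. A minimal sketch, assuming decimal units (1TB = 10^12 bytes) as in the figures above:

```python
# Pigeon "throughput" sketch using the blog's numbers (decimal units assumed).
payload_bytes = 374e12       # 374TB data payload
distance_miles = 400
speed_mph = 92.5

flight_seconds = distance_miles / speed_mph * 3600   # ~15567.57s of "latency"
bytes_per_second = payload_bytes / flight_seconds    # ~24.02 GB per second
gbps = bytes_per_second * 8 / 1e9                    # bytes/s -> gigabits/s

print(round(flight_seconds, 2), round(gbps, 2))      # prints: 15567.57 192.19
```

The tiny difference from 192.192 above comes from rounding to 24.024 GB/s before converting to bits.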

Packet Loss

Whilst we’re not likely to drop individual data packets, we may drop the whole lot. The risk of catastrophic failure is pretty high when it comes to the pigeon. Carrier pigeons were used in hostile environments quite a lot in both world wars; 32 pigeons have been awarded the “animal VC”, the Dickin Medal.

A hostile pigeon environment. Source: Wikimedia Commons: https://en.wikipedia.org/wiki/War_pigeon#/media/File:Shooting_Homing_Pigeons.png

There are some slightly safer examples. This book claims: “Out of 300 released between 53 and 73 got to Paris”. So if we take the midpoint of that range (63), then we have a 21% success rate. And that is the optimistic estimate. It also means we have a 79% chance of total data loss!
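The loss arithmetic is just as simple. A sketch of the optimistic estimate above:

```python
# Optimistic pigeon "packet loss" estimate from the book's claim:
# "Out of 300 released between 53 and 73 got to Paris".
released = 300
arrived = (53 + 73) / 2            # midpoint of the claimed range = 63

success_rate = arrived / released  # 0.21 -> 21% chance of delivery
loss_rate = 1 - success_rate       # 0.79 -> 79% chance of total data loss

print(f"{success_rate:.0%} success, {loss_rate:.0%} total loss")
```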

Summary

I didn’t consider the time the data might take to load onto a computer – I assumed instant access, but obviously that wouldn’t be the case.

It isn’t likely that the world is going to start using carrier pigeons to transmit all its data, but what it does demonstrate is a viable offline mechanism for data transfer that doesn’t involve wires or antennae. The fact that I wrote this entirely on the train whilst connected (in and out, but mostly in) is quite a nice feature of the modern world, for me anyway.

However, the internet and web aren’t architected for such large latency scenarios, and the offline web appears to be neglected by most of the big information companies, who would seemingly rather you accessed the data when they can gather data about you. Perhaps it might be useful in the interplanetary/galactic internet/web as a catch-up mechanism – dump a large offline copy onto the next ship going up to a space station or planet.

Anyway, I hope you enjoyed the read and would welcome your comments!

 

IoT Security Resources

This is a list of useful documentation and links for anyone interested in IoT security, either for building products or as general reference material. The list is alphabetical and doesn’t denote any priority. I’ll maintain this and update it as new documentation gets published. Please feel free to add links in the comments and I will add them to the list.

Privacy-specific:

Additional papers and analysis of interest:

With special thanks to Mike Horton, Mohit Sethi, Ryan Ng and those others who have contributed or have been collecting these links on other sites, including Bruce Schneier and Marin Ivezic.

Updates:

16th July 2019: Added NIST, W3C, CSDE, IOTAC, OCF and PSA Certified

01st July 2019: Added multiple CCDS, NIST, NISC, ioXt, Internet Society, ENISA, Zachary Crockett, founder and CTO of Particle, Mozilla, IRTF, IoT Security Foundation, CTIA, Bipartisan Group, Trustonic, DIN and European Union

28th August 2018: Added [GDPR] Article 29 Data Protection Working Party, multiple AIOTI links, Atlantic Council, CableLabs, CSA, Dutch Cyber Security Council, ENISA links, European Commission and AIOTI  report, IEEE, IERC, Intel, IEC, multiple IETF links, IRTF, ISOC, IoTSF, ISO/IEC JTC 1 report, Microsoft links, MIT, NTIA, CSCC, OECD links, Ofcom, OWASP, SIAA, SAFECode links, TIA, U.S. Department of Homeland Security and US Senate

3rd July 2018: Updated broken OneM2M report, GSMA IoT security assessment, AIOTI policy doc and IETF guidance links.

6th March 2018: Added NIST draft report on cybersecurity standardisation in IoT.

14th February 2018: Added IoTSI, NIST and IRTF additional links.

1st February 2018: Updated with the following organisations: ENISA, IoT Alliance Australia, ISAC, New York City, NTIA, Online Trust Alliance, OneM2M, OWASP, Smart Card Alliance, US Food & Drug Administration. Added additional papers section.

24th April 2017: Added additional IoTSF links.

5th December 2016: Added GSMA, Nominet and OLSWANG IoT privacy links as well as AIOTI security link.

24th November 2016: Added GSMA self-assessment checklist, Cloud Security Alliance research paper, Symantec paper and AT&T CEO’s guide.

Dead on Arrival? What’s next for IoT security?

IoT security is in the news again and it is pretty grim reading. The DynDNS distributed denial of service (DDoS) attack caused many major websites to go offline. Let’s be clear – many security companies have suddenly lumped all the insecure webcams and routers that have been out there for years into the new world of the Internet of Things. It is a semantic point perhaps, but I think a somewhat opportunistic one, because much of the kit is older and generally not your new-to-market IoT product. There is, however, a big issue with insecure IoT products being sold, and if not today, then tomorrow will bring further, much worse attacks using compromised IoT devices across the world.

We’re at the stage where we’re connecting more physical things and those things are often quite weak from a security point of view. It appears that it has only just occurred to some people that these devices can be harnessed to perform coordinated attacks on services companies and people rely on (or individuals in the case of Brian Krebs).

I fully agree with Bruce Schneier and others who have said that this is one area where government needs to step in and mandate that security needs to be baked in rather than half-baked. The market isn’t going to sort itself out any time soon, but mitigation, both technical and non-technical can be taken in the interim. This does not mean that I am expecting marks or stickers on products (they don’t work).

There are some quite straightforward measures that can be requested before a device is sold, and some standards, recommendations and physical technology are available to create secure products. Some of the vulnerabilities are simply unforgivable in 2016, and the competence of these companies to sell internet-connected products at all has to be questioned. Those of us in industry often see the same companies time and time again, and yet nothing ever really happens to them – they still go on selling products with horribly poor levels of security. The Mirai botnet code released in September targets connected devices such as routers and surveillance cameras because they have default passwords that have not been changed by the user or owner of the device. We all know what they are: admin/admin, admin/password and so on. https://www.routerpasswords.com/ has a good list. With Mirai, the devices are telnetted into on port 23 and, hey presto, turned around for attack.
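As an aside, checking whether one of your own devices is exposed in this way is trivial. A minimal sketch (the address below is a reserved documentation placeholder; only point this at hosts you own):

```python
# Minimal check for an open telnet port (23) - the service Mirai scans for.
# Point this only at hosts you own; the address below is a placeholder.
import socket

def telnet_open(host: str, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to port 23 succeeds."""
    try:
        with socket.create_connection((host, 23), timeout=timeout):
            return True
    except OSError:  # refused, unreachable or timed out
        return False

# 192.0.2.1 is a reserved documentation address, so this prints False.
print(telnet_open("192.0.2.1", timeout=1.0))
```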

I did notice that there is an outstanding bug in the Mirai code to be resolved however, on github: “Bug: Fails to destroy the Internet #8”

Your company has to have a security mindset if you are creating a connected product. Every engineer in your organisation has to have security in mind. It is often easy to spot the companies that don’t if you know what you are looking for.

Is there another way?

At the grandly titled World Telecommunications Standardization Assembly (WTSA) starting next week in Tunisia, many countries are attempting to go further and introduce an alternative form of information management based around objects at the International Telecommunication Union (ITU) (the so-called Digital Object Architecture (DOA) technology). Some want this to be mandated for IoT. It is worth having a look at what is being proposed because we are told that the Digital Object Architecture is both secure and private. Great, surely this is what we need to help us? Yet, when we dive a bit deeper, that doesn’t seem to be the case at all. I won’t give chapter and verse here, but I’ll point to a couple of indicators:

According to information at handle.net, the DOA relies on proprietary software for the Handle System, which resolves digital object identifiers. Version 8.1, released in 2016, has some information at: https://www.handle.net/download_hnr.html where we discover that:

• Version 8 will run on most platforms with Java 6 or higher.

A quick internet search shows that Java 6 was released in 2006 and turns up plenty of issues. For example, “Java 6 users vulnerable to zero day flaw, security experts warn” from 2013. This excerpt from the article states: “While Java 6 users remain vulnerable, the bug has been patched in Java 7. Java 6 has been retired, which means that updates are only available to paying clients.”

Another quick internet search turns up “cordra.org”. Cordra is described as “a core part of CNRI’s Digital Object Architecture”. In the technical manual from January 2016 on that site, we find information on default passwords (login: admin, password: changeit).

“Cordra – a core part of the Digital Object Architecture” – default passwords

If it looks bad, it usually is.

These things are like canaries – once you see them you end up asking more questions about what kinds of architectural security issues and vulnerabilities this software contains. What security evaluation has any of this stuff been through and who are the developers? Who has tested it at all? I’ll come back to the privacy bit at a future date.

The Digital Object Architecture is not secure.

Don’t kid yourself that the DOA is going to be any more resilient than our existing internet – the documentation also shows it is based on the same technologies we rely on for our existing internet: PKI-based security, relying on encryption algorithms that have to be deprecated and replaced when they get broken. I’m not sure how it would hold up against a DDoS attack of any sort. What this object-based internet seems to give us, though, is a license. There are many interesting parts to it, including that it seems CNRI can kill the DOA at will just by terminating the license:

“Termination: This License Agreement may be terminated, at CNRI’s sole discretion, upon a material breach of its terms and conditions by Licensee.”

So would I use this for the Internet of Things?
No! I’ve touched the tip of the iceberg here. It seems fragile and flaky at best, probably non-functioning at worst. Let’s be honest – the technology has not been tested at scale; it currently deals with a few hundred thousand resolutions, rather than the billions the internet has to handle. I can’t imagine that it would have been able to handle “1.2 terabits per second of data“. Operating at internet scale is a whole different ball game, and this is what some people just don’t get – incidentally, IETF members pointed this out to CNRI researchers back in the early 2000s on the IETF mailing lists (I will try to dig out the link at some point to add here).

Summary

Yes, we need to get better, but let’s first work together and get on the case with device security. We also need to get better at sinkholing and dropping traffic which can flood networks through various different means, including future measures such as protocol re-design. Some people have said to just block port 23 as an immediate measure (blocking telnet access). There’ll be many future attacks that really do use the Internet of Things but that doesn’t mean we have to tear up our existing internet to provide an even less secure, untested version with the DOA. The grass is not always greener on the other side.

Some more links to recommendations on IoT security can be found below:

Other bodies are also doing work on security but at an earlier stage including the W3C’s Web of Things working group

Edit: 30/10/16 – typos and added IETF list


Improving Anti-Theft Measures for Mobile Devices

I’m pleased to say that the latest version of the GSMA SG.24 Anti-Theft Device Feature Requirements has been published. Many members of the Device Security Group I chair at the GSMA have been personally committed to trying to reduce the problem of mobile theft over many years. This represents just one small part of these continued efforts.

There is no magic solution to the problem of mobile theft as I’ve discussed many times (some listed below). The pragmatic approach we’ve taken is to openly discuss this work with all the interested parties including OS vendors such as Apple, Google and Microsoft as well as to reach out to Police and government particularly in the US and the UK where the subject has been of high interest. We’ve taken their feedback and incorporated it into the work. Everyone has a part to play in reducing theft of mobile devices, not least the owner of the device itself.

Some extra resources:

Some previous blogs on mobile theft:

Introducing the work of the IoT Security Foundation

At Mobile World Congress this year, I agreed to give an interview introducing the IoT Security Foundation to Latin American audiences. If you’re interested in IoT security and our work at the Foundation, you should find this video interesting. Enjoy!

IoT Security from Rafael A. Junquera on Vimeo.

 

Improving IoT Security

I am involved in a few initiatives aimed at improving IoT security. My company wrote the original IoT security strategy for the GSMA and we have been involved ever since, culminating in the publication of a set of IoT Security Guidelines which can be used by device manufacturers through to solution providers and network operators. Here’s a short video featuring me and other industry security experts explaining what we’re doing.

There’s still a long way to go with IoT security and we’ve still got to change the “do nothing” or “it’s not our problem” mindset around big topics like safety when it comes to the cyber physical world. Each step we take along the road is one step closer to better security in IoT and these documents represent a huge leap forward.

IoT Security and Privacy – Sleep-Walking into a Living Nightmare?

This is my remote presentation to the IoT Edinburgh event from the 24th of March 2016. It was a short talk and if you want to follow the slides, they’re also embedded below. The talk doesn’t cover much technical detail but is hopefully an interesting introduction to the topic.

There is a much longer version of the connected home talk that goes into much more depth (and talks about how we solve it). I hope to record and upload that at some point! Slides for this one:

Victim blaming when it comes to fraud

I was quoted today in a Guardian article after the Metropolitan Police Commissioner, Sir Bernard Hogan-Howe suggested that fraud victims should not be compensated by banks in cyber crime situations.

Image of what people are being conditioned to think a cyber criminal looks like! (Or perhaps I should have gone with hacker in hoodie?!)

His point is that people use weak passwords and don’t upgrade their systems, so they end up as easy pickings for online criminals. Whilst of course users need to take responsibility for their own actions (or inaction), it is nowhere near as simple as that, especially when it comes to things like deliberate social engineering of people and website insecurity.

My full quote was as follows: “I think the Met Chief’s comments are short-sighted. There are many reasons consumers are defrauded and a lot of those are not really things that they can control. To trivialise these to all being about user concerns misses the point. How does a consumer control the theft of their data from a website for example? We all have a role to play and a lot of work is underway in bodies like the worldwide web consortium (W3C) to reduce the use of passwords and to increase the use of hardware-backed security. The banks are doing a good job in a difficult environment but they are ultimately responsible for identifying and preventing fraud issues when they occur.”

The W3C’s work on web authentication is underway, which will standardise the work of the FIDO Alliance for the web in order to help eliminate the password. This will of course take a while and we won’t fully eliminate passwords from the web for many years. To further protect consumers, there is another effort to bring hardware-backed security to important elements of the web; this will also hopefully be chartered in the W3C. In the software updates world, Microsoft has led the way on desktop and Apple on mobile in ensuring people are patched quickly and effectively. We still have a long way to go and I’m leading some work in the mobile industry, through the GSMA, to try and make things better.

The Met and the wider police have a key role in investigating cyber crime, something they’ve not done well at all over the past few years, so they have failed consumers repeatedly. Blaming users is something akin to throwing stones in glasshouses.

When the “Apple Encryption Issue” reached Piers Morgan

How can we have an intelligent and reasoned debate about mobile device forensics?

I woke up early this morning after getting back late from this year’s Mobile World Congress in Barcelona. It has been a long week and I’ve been moderating and speaking at various events on cyber security and encryption throughout the week. It won’t have escaped anyone’s notice that the “Apple encryption issue” as everyone seems to have referred to it, has been at the top of the news and I have been asked what I think pretty much every day this week. Late last night, I’d seen a twitter spat kicking off between comedy writer and director Graham Linehan and Piers Morgan on the topic, but went to bed, exhausted from the week.

It was still being talked about this morning. My friend Pat Walshe who is one of the world’s leading mobile industry privacy specialists, had quoted a tweet from Piers Morgan:

Ironically, Piers Morgan himself has been accused of overseeing the hacking of phones, something which he has repeatedly denied, despite Mirror Group Newspapers admitting that some stories may have been obtained by illegal means during his tenure and having recently paid compensation to victims of phone (voicemail) hacking, a topic about which I have written in the past.

This week I’ll be up at York St John University where they’ve asked me to teach cyber security to their undergraduate computer scientists. The reason I agreed to teach there was because they highly value ethical concerns, something which I will be weaving into all our discussions this week. The biggest question these students will have will be the “what would you do?” scenario in relation to the San Bernardino case.

The truth is, this is not a question of technology engineering and encryption, it is a question of policy and what we as a society want and expect.

The moral aspects have been widely debated with Apple’s Tim Cook bringing, in my view, the debate to a distasteful low by somehow linking the issue to cancer. I’ve tried to stay out of the debate up until now because it has become a circus of people who don’t understand the technical aspects pontificating about how easy it is to break into devices versus encryption activists who won’t accept anything less than “encrypt all the things” (some of whom also don’t understand the technical bits). I sincerely hope that there isn’t a backlash on me here from either side for just voicing an opinion, some friends of mine have deliberately stayed quiet because of this – I’m exercising my right to free speech and I hope people respect that.

The truth is, this is not a question of technology engineering and encryption, it is a question of policy and what we as a society want and expect. If a member of my family is murdered do I expect the police to be able to do their job and investigate everything that was on that person’s phone? Absolutely. Conversely, if I was accused of a crime that I didn’t commit and I wasn’t in a position to handover the password (see Matthew Green’s muddy puddle test), would I also want them to do it? Of course. It is called justice.

Dealing with the world as it is

The mobile phones and digital devices of today replace all of our previous scraps of notepaper, letters, diaries, pictures etc. that would have been left around our lives. If someone is murdered or something horrific happens to someone, this information could be used to enable the lawful investigation of a crime. The Scenes of Crime Officer and the defence team of the past would have examined all of these items and ultimately presented the evidence in court, contributing to a case for or against. Now consider today’s world. Everything is on our phone – our diaries and notes are digital, our pictures are on our phones, our letters are emails or WhatsApp messages. So at the scene of a crime, the police may literally be faced with a body and a phone. How is the crime solved and how is justice done? The digital forensic data is the case.

Remember, someone who has actually committed a crime is probably going to say they didn’t do it. The phone data itself is usually more reliable than witness and defendant testimony in telling the story of what actually happened, and criminals know that. I’ve been involved with digital forensics for mobile devices in the past and have seen first-hand the conviction of criminals who continually denied having committed a serious crime, despite their phone data stating otherwise. This has brought redress to their victims’ families and brought justice for someone who can no longer speak.

There is no easy answer

On the other side of course, we’re carrying these objects around with us every day and the information can be intensely private. We don’t want criminals or strangers to steal that information. The counter-argument is that the mechanisms and methods to facilitate access to encrypted material would fall into the hands of the bad guys. And this is the challenge we face – there is absolutely no easy answer to this. People are also worried that authoritarian regimes will use the same tools to help further oppress their citizens and make it easier for the state to set people up. Sadly I think that is going to happen anyway in some of those places, with or without this issue being in play.

US companies are also fighting hard to sell products globally and they need to recover their export position following the Snowden revelations. It is in their business interests to be seen to fight these orders in order to sell product. It appears that Tim Cook wants to reinforce Apple’s privacy marketing message through this fight. Other less scrupulous countries are probably rubbing their hands in glee watching this show, whilst locally banning encryption, knowing that they’ll continue doing that and attempting to block US-made technology whatever the outcome of the case.

Hacking around

Even now, I have seen tweets from iPhone hackers who are more than capable of attempting to solve this current case, and no doubt they would gain significant amounts financially from doing so – because the method that they develop could potentially be transferable.

This is the same battle that my colleagues in the mobile world fight on a daily basis – a hole is found and exploited and we fix it; a continual technological arms race to see who can do the better job. Piers Morgan has a point, just badly put – given enough time, effort and money, the San Bernardino device and its encryption could be broken into – it will just take a hell of a lot of all three. It won’t be broken by a guy in a shop on Tottenham Court Road (see my talk on the history of mobile phone hacking to understand this a bit more).

Something that has not been discussed is that we also have a ludicrous situation now whereby private forensic companies seem to be ‘developing’ methods to get into mobile handsets when in actual fact many of them will either re-package hacking and rooting tools and pass them off as their own solutions, as well as purchasing from black and grey markets for exploits, at premium prices. This is very frustrating for the mobile industry as it contributes to security problems. Meanwhile, the Police are being forced to try and do their jobs with not just one hand tied behind their back, it now seems like two. So what should we do about that? What do we consider to be “forensically certified” if the tools are based on fairly dirty hacks?

How do we solve the problem?

We as democratic societies ask and expect our Police forces to be able to investigate crimes under a legal framework that we all accept via the people we elect to Parliament or Senate. If the law needs to be tested, then that should happen through a court – which is exactly what is happening now in the US. What we’re seeing is democracy in action, it’s just messy but at least people in the US and the UK have that option. Many people around the world do not.

On the technical side, we will also need to consider that there is a multitude of connected devices coming to the market for smart homes, connected cars and things we haven’t even thought of yet as part of the rapidly increasing “Internet of Things”. I hate to say it, but in the future digital forensics is going to become ever more complex, and perhaps the privacy issues for individuals will centre on what a few large technology companies are doing behind your back with your own data, rather than the Police trying to do their job with a legal warrant. Other companies need to be ready to step up to ensure consumers are not the product.

I don’t have a clear solution to the overall issue of encrypted devices and I don’t think you’ll thank me for writing another thousand words on the topic of key escrow. Most of the time I respond to people by saying it is significantly complex. The issues we are wrestling with now do need to be debated, but that debate needs to be intellectually sound, and unfortunately we are hearing a lot from people with loud voices, but less from the people who really understand. The students I’m meeting next week will be not only our future engineers, but possibly future leaders of companies and even politicians, so it is important that they understand every angle. It will also be their future, and every other young person’s, that matters in the final decision over San Bernardino.

Personally, I just hope that I don’t keep getting angry and end up sat in my dressing gown until lunchtime writing about tweets I saw at breakfast time.

The Future of Cyber Security and Cyber Crime

David Wood kindly invited me to speak at the London Futurists cyber security and cyber crime event along with Craig Heath and Chris Monteiro. I decided to talk about some more future-looking topics than I normally do, which was quite nice. The talks were videoed and are linked below (my talk starts at about 39:29). I should add that the Treaty of Westphalia was 1648, not 1642!

Here are my slides: