When the “Apple Encryption Issue” reached Piers Morgan

How can we have an intelligent and reasoned debate about mobile device forensics?

I woke up early this morning after getting back late from this year’s Mobile World Congress in Barcelona. It has been a long week and I’ve been moderating and speaking at various events on cyber security and encryption throughout. It won’t have escaped anyone’s notice that the “Apple encryption issue”, as everyone seems to have referred to it, has been at the top of the news, and I have been asked what I think pretty much every day this week. Late last night I’d seen a Twitter spat kicking off between comedy writer and director Graham Linehan and Piers Morgan on the topic, but went to bed, exhausted from the week.

It was still being talked about this morning. My friend Pat Walshe who is one of the world’s leading mobile industry privacy specialists, had quoted a tweet from Piers Morgan:

Ironically, Piers Morgan himself has been accused of overseeing the hacking of phones, something which he has repeatedly denied, despite Mirror Group Newspapers admitting that some stories may have been obtained by illegal means during his tenure and having recently paid compensation to victims of phone (voicemail) hacking, a topic about which I have written in the past.

This week I’ll be up at York St John University, where they’ve asked me to teach cyber security to their undergraduate computer scientists. I agreed to teach there because they highly value ethical concerns, something I will be weaving into all our discussions this week. The biggest question these students will have this week will be the “what would you do?” scenario in relation to the San Bernardino case.

The moral aspects have been widely debated, with Apple’s Tim Cook bringing the debate, in my view, to a distasteful low by somehow linking the issue to cancer. I’ve tried to stay out of the debate up until now because it has become a circus of people who don’t understand the technical aspects pontificating about how easy it is to break into devices, versus encryption activists who won’t accept anything less than “encrypt all the things” (some of whom also don’t understand the technical bits). I sincerely hope that there isn’t a backlash on me here from either side for just voicing an opinion; some friends of mine have deliberately stayed quiet because of this. I’m exercising my right to free speech and I hope people respect that.

The truth is, this is not a question of technology, engineering and encryption; it is a question of policy and what we as a society want and expect. If a member of my family is murdered, do I expect the police to be able to do their job and investigate everything that was on that person’s phone? Absolutely. Conversely, if I were accused of a crime that I didn’t commit and I wasn’t in a position to hand over the password (see Matthew Green’s muddy puddle test), would I also want them to do it? Of course. It is called justice.

Dealing with the world as it is

The mobile phones and digital devices of today replace all of our previous scraps of notepaper, letters, diaries, pictures and so on that would have been left around our lives. If someone is murdered or something horrific happens to them, this information can be used to enable the lawful investigation of a crime. The Scenes of Crime Officer of the past, along with the defence team, would have examined all of these items and ultimately presented the evidence in court, contributing to a case for or against. Now consider today’s world. Everything is on our phone – our diaries and notes are digital, our pictures are on our phones, our letters are emails or WhatsApp messages. At the scene of a crime, the police may literally be faced with a body and a phone. How is the crime solved and how is justice done? The digital forensic data is the case.

Remember, someone who has actually committed a crime is probably going to say they didn’t do it. The phone data itself is usually more reliable than witness and defendant testimony in telling the story of what actually happened, and criminals know that. I’ve been involved with digital forensics for mobile devices in the past and have seen first-hand the conviction of criminals who continually denied having committed a serious crime, despite their phone data stating otherwise. This has brought redress to their victims’ families and justice for someone who can no longer speak.

There is no easy answer

On the other side of course, we’re carrying these objects around with us every day and the information can be intensely private. We don’t want criminals or strangers to steal that information. The counter-argument is that the mechanisms and methods to facilitate access to encrypted material would fall into the hands of the bad guys. And this is the challenge we face – there is absolutely no easy answer to this. People are also worried that authoritarian regimes will use the same tools to help further oppress their citizens and make it easier for the state to set people up. Sadly I think that is going to happen anyway in some of those places, with or without this issue being in play.

US companies are also fighting hard to sell products globally and they need to recover their export position following the Snowden revelations. It is in their business interests to be seen to fight these orders in order to sell product. It appears that Tim Cook wants to reinforce Apple’s privacy marketing message through this fight. Other, less scrupulous countries are probably rubbing their hands in glee watching this show, whilst locally banning encryption, knowing that they’ll continue doing that and attempting to block US-made technology whatever the outcome of the case.

Hacking around

Even now, I have seen tweets from iPhone hackers who are more than capable of attempting to solve this current case, and no doubt they would gain significantly from doing so financially – because the method they develop could potentially be transferable.

This is the same battle that my colleagues in the mobile world fight on a daily basis – a hole is found and exploited and we fix it; a continual technological arms race to see who can do the better job. Piers Morgan has a point, just badly put – given enough time, effort and money, the San Bernardino device and its encryption could be broken into – it will just take a hell of a lot of all three. It won’t be broken by a guy in a shop on Tottenham Court Road (see my talk on the history of mobile phone hacking to understand this a bit more).

Something that has not been discussed is the ludicrous situation whereby private forensic companies seem to be ‘developing’ methods to get into mobile handsets when in actual fact many of them will either re-package hacking and rooting tools and pass them off as their own solutions, or purchase exploits from black and grey markets at premium prices. This is very frustrating for the mobile industry as it contributes to security problems. Meanwhile, the Police are being forced to try to do their jobs with not just one hand tied behind their back – it now seems like two. So what should we do about that? What do we consider to be “forensically certified” if the tools are based on fairly dirty hacks?

How do we solve the problem?

We as democratic societies ask and expect our Police forces to be able to investigate crimes under a legal framework that we all accept via the people we elect to Parliament or Senate. If the law needs to be tested, then that should happen through a court – which is exactly what is happening now in the US. What we’re seeing is democracy in action; it’s just messy, but at least people in the US and the UK have that option. Many people around the world do not.

On the technical side, we also need to consider that there is a multitude of connected devices coming to the market for smart homes, connected cars and things we haven’t even thought of yet, as part of the rapidly growing “Internet of Things”. I hate to say it, but in the future digital forensics is going to become ever more complex, and perhaps the privacy issues for individuals will centre on what a few large technology companies are doing behind your back with your own data, rather than on the Police trying to do their job with a legal warrant. Other companies need to be ready to step up to ensure consumers are not the product.

I don’t have a clear solution to the overall issue of encrypted devices, and I don’t think you’ll thank me for writing another thousand words on the topic of key escrow. Most of the time I respond to people by saying it is significantly complex. The issues we are wrestling with now do need to be debated, but that debate needs to be intellectually sound, and unfortunately we are hearing a lot from people with loud voices and less from the people who really understand. The students I’m meeting next week will be not only our future engineers, but possibly future leaders of companies and even politicians, so it is important that they understand every angle. It is also their future, and every other young person’s, that matters in the final decision over San Bernardino.

Personally, I just hope that I don’t keep getting angry and end up sat in my dressing gown until lunchtime writing about tweets I saw at breakfast time.

Helping ordinary mobile phone users manage their security

My company recently completed some work for the UK Police on giving basic guidance about mobile phone security. It seemed to them (and to us) that there is a gap between the media’s daily deluge of new threats to mobile users and an understanding of the real situation (which is often highly technical). What this often means is that users are simply forgotten in a sea of meaningless rhetoric, and people using phones inevitably do the wrong thing. We also found that the organisations setting policies want to give basic advice to people about how they use their phones in their daily lives.

We wrote quite a long whitepaper (which will soon be available as a booklet) but with the help of the excellent team at Beyond Design, we decided to also create a leaflet that was easy to understand and which would capture the main points easily. After all, what we’re looking for is for people to remember and adopt the advice we’re giving out. The advice covers things like:

  • Personal safety
  • Lost and stolen devices
  • Using the features of your device securely
  • The types of threats you need to be aware of
  • Things that you can do to mitigate security issues or to help prevent them happening

We’ve had some good initial feedback and I understand a couple of universities in the UK are looking to distribute the leaflets for their students too.

What risks are you taking?

Free leaflet

I’ve decided to make the leaflet freely available for download and printing – you can take the print-ready version and send it to a local printer or online service and then use it for your own purposes. Just click the links below to get a copy:

Mobile Security Advice leaflet (online version)
Mobile Security Advice leaflet (print-ready version)

I hope this is useful to people and we’d love to hear your feedback and who you’ve given the leaflets to. Drop us a line or add a comment below!

A note on giving out advice

The danger of course with doing something like this is that (a) we miss something important or give bad advice, and (b) the advice is impractical and gets ignored. We would hope that we have given good advice based on our own experience, but please let us know if you really disagree with something. We acknowledge that there is a risk of (b), but we also acknowledge that giving people nothing and leaving them to fend for themselves is ultimately worse. Everything we do from a security perspective in our personal lives is about risk management decisions (or risk avoidance). Just as not every alley is going to have some guy lurking down it waiting to rob you, not every open WiFi connection you connect to is going to be compromised. It’s good to be at least ‘aware’ of the risks, though.

An interview with a tech journalist

I was slightly misquoted in an article yesterday on mobile malware, so I thought I’d re-post my exact responses to the journalist as I spent a fair amount of time out of my evening to respond to the request instead of relaxing! With Mobile World Congress coming up, some of the topics covered are relevant to things that will be discussed in Barcelona.

Good tech journalism?

My comments were in response to a BlueCoat Systems report on mobile malware that came out on the 11th of February. I didn’t get the chance to see the report until the very end, so my last comment is based on a skim read of it. The questions you see below are from the journalist to me.

Here was my response:

Here are my responses, let me know if you need anything else. I didn’t read the report yet.
They are marked [DAVID]:

David –

I’m doing a story on a recent report from Blue Coat about mobile malware. No link yet.

My questions, if you have a few minutes:

It predicts that delivery of mobile malware with malnets will be a growing problem this year. Agree? Why or why not?

[DAVID] It’s possible, but the question is really ‘where’. Most mobile malware has taken root in places like China and Russia where there has traditionally been a lack of official app stores (something which has only recently changed). It’s like the Wild West out there, with a complete lack of controls on the ingestion side to check that developers aren’t peddling malware, and on the consumer side because the devices are outside the ‘safe’ app store world we see in the West.

So we almost have two worlds at the moment. The first is the Western world, mainly Europe and the US, where generally no-one gets infected (a tiny, tiny percentage of maliciousness gets through the official app store checks or gets intentionally side-loaded by the user, usually when they’re trying to get pirated software!). The second is the vast majority of the rest of the world, usually poorer countries where the controls and regulations on piracy and malware are lax. It is like putting a street market next to a high-end city shopping mall. The mobile industry isn’t static and will continue to evolve in terms of security and threat management, both on the network and the device side, when it comes to the potential for botnets (at least in the more controlled environment of the West).

It says mobile devices are still relatively secure at the OS level, but that users are “set up to fail” because it is more difficult to avoid phishing – URLs and links are shortened, passwords are visible to an onlooker when you enter them – apps are not well vetted and mobile versions of websites are often hosted by third parties, making it difficult to tell which are legit. Do you agree? Why or why not? And if you do agree, is there anything developers ought to change?

[DAVID] Mobile OSs and their underlying hardware are getting very advanced in terms of security, which is great news. The problem is that not enough has been invested in educating developers about how to develop secure software, and in most cases the tools and libraries they use are not designed to help them make the right security decisions, resulting in very basic flaws with serious security consequences (for example, poor implementation of SSL). For some, it is just too difficult or too much effort to bother putting security in from the start. We need to break down that kind of mentality, and I think we need to improve considerably the ‘cyber’ security skills of mobile developers. In terms of usability and the lack of screen real-estate, then yes, developers have a role to play in helping the user make the decision they want to – some QR readers now present the ‘real’ URI behind a shortened one so that the user can decide whether that was what they were expecting.
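That “show the real URI” behaviour is simple to sketch. The following is purely my own illustration (the shortener list and function names are hypothetical, not taken from any real QR reader): flag a link whose host is a known shortener, and optionally resolve it over the network so the destination can be shown before the user opens it.

```python
from urllib.parse import urlparse

# Hypothetical list of known URL-shortener hosts; a real reader would
# maintain a much larger, regularly updated list.
KNOWN_SHORTENERS = {"bit.ly", "t.co", "goo.gl", "tinyurl.com"}

def is_shortened(url: str) -> bool:
    """Return True if the link's host looks like a URL shortener."""
    host = urlparse(url).netloc.lower()
    return host in KNOWN_SHORTENERS

def resolve_final_url(url: str) -> str:
    """Follow redirects (a network call) to reveal the real destination,
    so it can be displayed to the user before they open the link."""
    import urllib.request
    req = urllib.request.Request(url, method="HEAD")
    with urllib.request.urlopen(req) as resp:
        return resp.geturl()  # URL after any redirects
```

A reader would call `is_shortened` first, and only then spend a network round-trip on `resolve_final_url` to populate the confirmation dialog.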

Users can be very impulsive when it comes to mobile, so you have to try to save them from themselves, but balance this with not resorting to bombarding them with prompts. Human behaviour dictates that we’re susceptible to social engineering and will get over any hurdle presented to us if the prize is worth enough (something which is called the ‘dancing pigs’ problem). This is a real problem for both the OS and application developers. One thing that hasn’t really been deployed yet in the mobile world is trusted third-party management of policy. Users could choose a policy provider they trust to take the security management problem away from them. Obviously it can’t solve everything – the user has to take responsibility for their own actions at some point – but it would go a long way towards resolving current issues with permissions and policy on mobile platforms. The key to it all is that the user themselves has to be ultimately in charge of who they choose as a policy provider, not the operator, OS vendor or manufacturer.

There’ll always be attackers – the arbiters of trust in the mobile world have great responsibility to the millions of users out there and they themselves will become targets. I like the way that Google Bouncer (the automated security testing tool of Android apps being submitted by developers) has now become the target of attacks. To me, Google have forced attackers back away from the ‘Keep’ to the castle walls which can only be a good thing.

[I’ve lumped all these questions together]

The report says user behavior is the major weakness. Hasn’t this been the case all along?

Is there any truly effective way to change user behavior?

Is it possible for security technology to trump user weaknesses? If so, how?

[DAVID] Yes, user behaviour is a weakness, but usability and security don’t usually sit well together. Developers should not just consider the technical security of an application but make security as friendly and seamless as possible from the user’s perspective. Resorting to prompting is usually the lazy way out, and it pushes the burden of responsibility onto a user who probably doesn’t have a clue what you just asked them. I think OS-level and web APIs could benefit from different design patterns – how about building more intelligence into the responses? For example, in a geolocation API a developer could ‘negotiate’ access by understanding what the user is comfortable with, all in the background. This avoids binary behaviour – apps that fall over if you don’t enable geolocation, and users who never install apps that use geolocation. Neither situation helps the apps world advance and grow! However, if the user had been able to say that they were happy to share their location to city level, then the API could negotiate a developer’s request for location down to 1 metre by offering up city level instead. It would make for a much smoother world and would apply very easily across many different APIs.

If a user makes a critically bad decision, for example going to an infected website, I think Google have taken a strong lead in this respect by clearly showing to the user that really bad things are happening. Perhaps this could extend to other things on mobile, but we still need to get the basics of security right first from a technology and manufacturer’s perspective. I think some manufacturers have a long way to go to improve their security in this respect.

It says users will go outside VPNs if the “user experience” is not good within it. Is it realistic to expect enterprises to make their user experience better?

[DAVID] I think there are some interesting things coming along in terms of more ‘usable’ VPN technology, but usually the reason a VPN doesn’t work is a technical one that an ordinary user isn’t going to understand. They just want to get their job done and may take risky decisions because there are generally no visible security consequences. Most people in big companies have to deal with inflexible IT departments with inflexible policies, and the intrusion into people’s own lives with the introduction of BYOD has muddled things further. I can certainly see more societal issues than security ones for the overall user experience – for example, it might be very tempting for companies to start intruding on their users if there is a big industrial dispute involving unions. I don’t think these questions have properly hit companies yet, but mobile companies like RIM are looking at proper separation of work and personal life from a technical point of view; after that, it is really down to the paperwork – the rules of use and the enforcement of those.

The report said Android is more vulnerable to attacks because of unregulated apps and the diversity of Android-based devices. What, if anything, can/should be done about that?

[DAVID] Well, to a certain extent, yes, but this has been vastly overplayed by anti-virus vendors desperate to get into mobile. The vast majority of maliciousness has been caused outside of the trusted app store world that we see in the US and the UK. I wouldn’t have designed the app signing process in the same way as the Android guys did, but then identification of individuals can be difficult anyway – I know lots of registration systems that can be broken just with photocopies of ‘official’ documents. Google wanted a more open ecosystem, and you have to take the good with the bad. In terms of the diversity or fragmentation in Android, this could become an issue as device lifecycles get longer. The mobile industry is looking at the software update problem, and rightly so. For the network operators it is going to be a question of how to identify and manage out those threats on the network side if it comes to it. I don’t think software upgrade issues are confined to Android, but we don’t want any part of the industry to lag behind, because in the future there is nothing to say that huge distributed cross-platform (automotive, mobile, home) threats won’t exist; we should pay attention to resilience and good cyber housekeeping now, before it is too late.

Sorry to be on a deadline crunch – 5:30 p.m. EST today.

And my final comment to the journalist after I’d seen the report:

So just had a quick look through, only one final comment:

One thing that we all should remember is that the bad guys are not the mobile industry – it is the people who perpetrate malware, spam and scams. At the moment, cyber criminals run rings around law enforcement by operating across lots of countries in the world, relying on fragmented judicial systems and the lack of international agreements to take action. We should build the systems and laws through which we can arrest and prosecute criminals at a global level. 

I hope readers find it useful to see what I really wanted to say – I don’t claim to be right, but these are my opinions on the subjects in question. Readers should also understand how much effort sometimes gets put into helping journalists, with varying results 😦. If you want to read the original article and compare my responses with the benefit of context, you can find it at CSO online.