The Future of Cyber Security and Cyber Crime

David Wood kindly invited me to speak at the London Futurists cyber security and cyber crime event along with Craig Heath and Chris Monteiro. I decided to cover some more future-looking topics than I normally do, which made a nice change. The talks were videoed and are linked below (my talk starts at about 39:29). I should add that the Treaty of Westphalia was 1648, not 1642!:

Here are my slides:

An interview with a tech journalist

I was slightly misquoted in an article yesterday on mobile malware, so I thought I’d re-post my exact responses to the journalist, as I spent a fair amount of my evening responding to the request instead of relaxing! With Mobile World Congress coming up, some of the topics covered are relevant to things that will be discussed in Barcelona.

Good tech journalism?

My comments were in response to a BlueCoat Systems report on mobile malware that came out on the 11th of February. I didn’t get the chance to see the report until the very end, so my last comment is based on a skim read of it. The questions you see below are from the journalist to me.

Here was my response (me in blue):

Here are my responses, let me know if you need anything else. I didn’t read the report yet.
They are marked [DAVID]:

David –

I’m doing a story on a recent report from Blue Coat about mobile malware. No link yet.

My questions, if you have a few minutes:

It predicts that delivery of mobile malware with malnets will be a growing problem this year. Agree? Why or why not?

[DAVID] It’s possible, but the question is really ‘where’. Most mobile malware has taken root in places like China and Russia, where there has traditionally been a lack of official app stores (which has only recently changed). It’s like the wild west out there, with a complete lack of controls on the ingestion side to check that developers aren’t peddling malware, and on the consumer side because the devices are outside the ‘safe’ app store world we see in the West.

So we almost have two worlds at the moment: the first is the western world, mainly Europe and the US, where generally no-one gets infected (a tiny, tiny percentage of maliciousness gets through the official app store checks or gets intentionally side-loaded by the user, usually when they’re trying to get pirated software!). The second is the vast majority of the rest of the world, usually poorer countries where the controls and regulations on piracy and malware are lax. It is like putting a street market next to a high-end city shopping mall. The mobile industry isn’t static and will continue to evolve in terms of security and threat management, both on the network and device side, when it comes to the potential for botnets (at least in the more controlled environment of the West).

It says mobile devices are still relatively secure at the OS level, but that users are “set up to fail” because it is more difficult to avoid phishing – URLs and links are shortened, passwords are visible to an onlooker when you enter them – apps are not well vetted and mobile versions of websites are often hosted by third parties, making it difficult to tell which are legit. Do you agree? Why or why not? And if you do agree, is there anything developers ought to change?

[DAVID] Mobile OSs and their underlying hardware are getting very advanced in terms of security, which is great news. The problem is that there hasn’t been enough invested into educating developers about how to develop secure software, and in most cases the tools and libraries they use are not designed to help them make the right security decisions, resulting in very basic flaws which have serious security consequences (for example poor implementation of SSL). For some, it is just too difficult or too much effort to bother putting security in from the start. We need to break down that kind of mentality, and I think we really need to improve considerably the ‘cyber’ security skills of mobile developers. In terms of usability and the lack of screen real-estate, then yes, developers have a role to play in helping the user make the decision they want to – some QR readers now present the ‘real’ URI behind a shortened one so that the user can decide whether it was what they were expecting.
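The QR-reader behaviour mentioned above – showing the real destination behind a shortened link before the user visits it – is simple to implement. Here is a minimal sketch in Python; the function names are my own illustration, not any real product’s API. The trick is to make one request and refuse to follow the redirect, then show the user the Location header:

```python
import urllib.request
import urllib.error

REDIRECT_CODES = (301, 302, 303, 307, 308)

def destination_for(status, headers, url):
    """Decide what URL to show the user, based only on the first response."""
    if status in REDIRECT_CODES and headers.get("Location"):
        return headers["Location"]  # the 'real' URI behind the shortener
    return url                      # no redirect: the URL is what it claims to be

class _NoRedirect(urllib.request.HTTPRedirectHandler):
    # Refuse to follow redirects so we can inspect the first hop ourselves.
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None

def reveal(short_url):
    """Issue one HEAD request and report where the link really points."""
    opener = urllib.request.build_opener(_NoRedirect())
    try:
        resp = opener.open(urllib.request.Request(short_url, method="HEAD"))
        return destination_for(resp.status, resp.headers, short_url)
    except urllib.error.HTTPError as err:
        # Raised because we blocked the redirect; the headers still tell us where.
        return destination_for(err.code, err.headers, short_url)
```

A scanner would then display the returned URI and ask the user to confirm before opening it, rather than navigating blindly.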

Users can be very impulsive when it comes to mobile, so you have to try and save them from themselves, but balance this with not resorting to bombarding them with prompts. Human behaviour dictates that we’ll be susceptible to social engineering and will get over any hurdle presented to us if the prize is worth enough (something which is called the ‘dancing pigs’ problem). This is a real problem for both the OS and application developers. One thing that hasn’t really been deployed yet in the mobile world is trusted 3rd party management of policy. Users could choose a policy provider they trust to take the security management problem away from them. Obviously it can’t solve everything – the user has to take responsibility for their own actions at some point, but it will go a long way to resolving the current issues with permissions and policy on mobile platforms. The key to it all is that the user themselves has to be ultimately in charge of who they choose as a policy provider, not the operator, OS vendor or manufacturer.

There’ll always be attackers – the arbiters of trust in the mobile world have a great responsibility to the millions of users out there, and they themselves will become targets. I like the way that Google Bouncer (the automated security testing tool for Android apps being submitted by developers) has now become the target of attacks. To me, Google have forced attackers back from the ‘keep’ to the castle walls, which can only be a good thing.

[I’ve lumped all these questions together]

The report says user behavior is the major weakness. Hasn’t this been the case all along?

Is there any truly effective way to change user behavior?

Is it possible for security technology to trump user weaknesses? If so, how?

[DAVID] Yes, user behaviour is a weakness, but usability and security don’t usually sit well together. Developers should not just consider the technical security of an application but make security as friendly and seamless as possible from the user’s perspective. Resorting to prompting is usually the lazy way out, and it pushes the burden of responsibility onto a user who probably doesn’t have a clue what you just asked them. I think OS-level and web APIs could benefit from different design patterns – how about building in more intelligence to the responses? For example, in a geolocation API a developer could ‘negotiate’ access by understanding what the user is comfortable with, all in the background. This avoids binary behaviour – for example, apps that fall over if you don’t enable geolocation, and users who never install apps that use geolocation. Neither situation is good for helping the apps world advance and grow! However, if the user had been able to say that they were happy to share their location to city level, then the API could negotiate the request from a developer for location down to 1 metre by offering up city level instead. It would make for a much smoother world and would apply very easily across many different APIs.
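To make the negotiation idea concrete, here is a toy sketch of how such an API might degrade a location request to the coarsest level the user is comfortable with. This is purely illustrative – no mobile OS exposes exactly this API today, and the granularity names are my own invention:

```python
# Granularity levels a location API might offer, ordered fine -> coarse,
# with a rough radius in metres attached for illustration.
ORDER = ["exact", "street", "city", "none"]
RADIUS_M = {"exact": 1, "street": 100, "city": 10_000, "none": None}

def negotiate_location(requested, user_policy):
    """Return the granularity to serve, negotiated silently in the background.

    requested   -- what the app asked for, e.g. "exact" (down to 1 metre)
    user_policy -- the finest detail the user is happy to share, e.g. "city"
    Returns the coarser of the two, or None if the user shares nothing at all.
    """
    chosen = ORDER[max(ORDER.index(requested), ORDER.index(user_policy))]
    return None if chosen == "none" else chosen
```

So an app asking for "exact" against a user policy of "city" simply receives city-level data instead of an error or a prompt – the app still works, and the user never sees a dialog.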

If a user makes a critically bad decision, for example going to an infected website, I think Google have taken a strong lead in this respect by clearly showing to the user that really bad things are happening. Perhaps this could extend to other things on mobile, but we still need to get the basics of security right first from a technology and manufacturer’s perspective. I think some manufacturers have a long way to go to improve their security in this respect.

It says users will go outside VPNs if the “user experience” is not good within it. Is it realistic to expect enterprises to make their user experience better?

[DAVID] I think there are some interesting things coming along in terms of more ‘usable’ VPN technology, but usually the reason a VPN doesn’t work is a technical one that an ordinary user isn’t going to understand. They just want to get their job done and may take risky decisions because there are generally no visible security consequences. Most people in big companies have to deal with inflexible IT departments with inflexible policies. The intrusion into people’s own lives with the introduction of BYOD has muddled things further. I can certainly see more societal issues than security ones for the overall user experience – for example, it might be very tempting for companies to start intruding on their users if there is a big industrial dispute involving unions. I don’t think these questions have properly hit companies yet, but mobile companies like RIM are looking at proper separation of work and personal life from a technical point of view; after that it is really down to the paperwork – the rules of use and the enforcement of those.

The report said Android is more vulnerable to attacks because of unregulated apps and the diversity of Android-based devices. What, if anything, can/should be done about that?

[DAVID] Well, to a certain extent yes, but this has been vastly overplayed by anti-virus vendors desperate to get into mobile. The vast majority of maliciousness has been caused outside of the trusted app store world that we see in the US and the UK. I wouldn’t have designed the app signing process in the same way as the Android guys did, but then identification of individuals can be difficult anyway – I know lots of registration systems that can be broken just by photocopies of ‘official’ documents. Google wanted a more open ecosystem, and you have to take the good with the bad. In terms of the diversity or fragmentation in Android, this could become an issue as device lifecycles get longer. The mobile industry is looking at the software update problem, and rightly so. For the network operators it is going to be a question of how to identify and manage out those threats on the network side if it comes to it. I don’t think software upgrade issues are confined to Android, but we don’t want any part of the industry to lag behind, because in the future there is nothing to say that huge distributed cross-platform (automotive, mobile, home) threats couldn’t emerge, so we should pay attention to resilience and good cyber house-keeping now, before it is too late.

Sorry to be on a deadline crunch – 5:30 p.m. EST today.

And my final comment to the journalist after I’d seen the report:

So just had a quick look through, only one final comment:

One thing that we all should remember is that the bad guys are not the mobile industry – it is the people who perpetrate malware, spam and scams. At the moment, cyber criminals run rings around law enforcement by operating across lots of countries in the world, relying on fragmented judicial systems and the lack of international agreements to take action. We should build the systems and laws through which we can arrest and prosecute criminals at a global level. 

I hope readers find it useful to see what I really wanted to say – I don’t claim to be right, but these are my opinions on the subjects in question. Readers should also understand how much effort sometimes gets put into helping journalists, with varying results 😦. If you want to read the original article and compare my responses with the benefit of context, you can find it at CSO online.