Updating the Future

Later today I’ll be speaking at B-Sides London about software updates and how they are probably the only effective mechanism that can defend users against the malicious use of discovered, exploitable vulnerabilities. Despite that, we still have a long way to go, and the rush towards everything being connected could leave users more exposed than they are now.

The recent “effective power” SMS bug in iOS showed that even a relatively minor user interface bug can cause widespread disruption, in that case mainly because people thought it would be funny to send the message to their friends.

The state of mobile phone updates

In vertical supply chains that are generally wholly owned by the vendor (as in the Apple case), it is relatively straightforward to deploy fixes to users. The device’s security architecture supports all the mechanisms needed to authenticate itself correctly, pick up an update securely, unpack it, verify it and deliver it to the user. The internal processes for software testing and approval are streamlined and consistent, so users get updates quickly. This is not the case for other operating systems. Android users have a very complicated supply chain to deal with unless they have a Google-supplied device. Mobile network interoperability issues can also cause problems, so network operators have to drive-test every device and approve the updates that come through. Security updates are often bundled with other system updates, meaning that critical security issues can stay open because users just don’t get fixes for months on end.

That’s if they get an update at all. Some manufacturers have a very chequered history when it comes to supporting devices after they’ve left the factory. If users’ devices are not updated and they’re continually exposed to serious internet security flaws such as those found in SSL, who is responsible? At the moment it seems nobody is. There is no regulation that says users must be updated. There does seem to be a shift in the mobile industry towards longer software support lifecycles: Microsoft has committed to 36 months of support and Google to at least 18 months. But there is still a long way to go in ensuring that patch teams at manufacturers remain available to fix security issues, and that an ‘adequate’ end-of-life for products is achieved and communicated properly to users.

The internet of abandoned devices

A lot of IoT devices have no ability to be updated at all, let alone securely. The foundations are simply not there. There is no secure boot ROM providing an anchor of trust from which to start, there is no secure booting mechanism to carefully build up trust as the device starts, and web update mechanisms are often not even secured using SSL. Software builds are as often as not unencrypted, and certainly not digitally signed.
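
The missing foundation can be made concrete with a small verify-before-apply sketch. This uses a symmetric MAC purely for illustration, and the key and function names are my own invention; a real device would check an asymmetric signature rooted in a boot ROM key rather than a shared secret.

```python
import hashlib
import hmac

# Hypothetical key provisioned at manufacture; illustrative only. A real
# design would verify an asymmetric signature anchored in a boot ROM key.
DEVICE_KEY = b"example-device-key"

def verify_update(image: bytes, tag: bytes) -> bool:
    """Check the update's integrity tag before touching any flash."""
    expected = hmac.new(DEVICE_KEY, image, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

def apply_update(image: bytes, tag: bytes) -> str:
    if not verify_update(image, tag):
        return "rejected"  # keep running the current firmware
    # ...write the image to storage and mark it bootable...
    return "applied"
```

Even this much, an integrity check plus a refusal path, is absent from many of the devices described above.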

So with this starting point for our future, it appears that many of the hard lessons of the mobile phone world have not carried over to the IoT world. Even then, we face plenty of future challenges. Many IoT devices, and elements of the automotive space, are ‘headless’ – they have no display or user interface, so the user has no inkling of what is going on, good or bad. Problems that are often termed “cyber-physical” can rapidly become very real for people: a bad update to a connected health device could genuinely harm a lot of people. Shortly before Google’s acquisition of Nest, a user tweeted complaining that his pipes had burst. Understanding that certain services cannot just be turned off to allow for an update is key to engineering in this space.
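
One established answer to “you can’t just turn the service off” is dual-slot (A/B) updating: stage the new image on the inactive slot while the old one keeps running, and roll back automatically if the new image fails its health check. A minimal sketch, with invented names rather than any particular vendor’s API:

```python
# Minimal sketch of an A/B ("seamless") update scheme, the kind of design
# that lets a device stay in service while an update is staged.

class Device:
    def __init__(self):
        self.slots = {"a": "v1", "b": None}  # two firmware slots
        self.active = "a"
        self.boot_ok = True                  # post-reboot health check result

    def stage(self, image):
        # Write the new image to the inactive slot while the active
        # one keeps running; no service interruption yet.
        inactive = "b" if self.active == "a" else "a"
        self.slots[inactive] = image
        return inactive

    def reboot_into(self, slot):
        # Switch slots only after staging succeeds, keeping the old
        # image around so a failed boot can roll back automatically.
        previous = self.active
        self.active = slot
        if not self.boot_ok:
            self.active = previous  # automatic rollback
        return self.slots[self.active]

d = Device()
print(d.reboot_into(d.stage("v2")))  # prints "v2"; "v1" stays in slot "a"
```

The design choice that matters is that the running service is never the thing being written to, and a bad update degrades to a reboot into the old image rather than a bricked device.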

Many of the devices that are planned to be deployed are severely constrained. Updating a device with memory and battery limitations is going to be possible only in limited circumstances. Many of these devices are going to be physically inaccessible too, but still need to be trusted. It’s not simply a question of replacing obsolete devices – digging a vibration sensor out of the concrete of a bridge is going to be pretty cumbersome. Some of this space will require a re-think of systems architecture, and mechanisms for living with the risk. It may be that it is simply impossible to have end-to-end security that can be trusted for any real length of time. As engineers, if we start from the point that we can’t trust anything that has been deployed in the field, and that some of it can’t be updated at all, we might avoid some serious future issues.

Phone Hacking: A lucrative, but largely hidden history

I’m giving a talk at Defcon London DC4420 tonight. I decided to talk about the history of some stuff that is not really well known about outside of the mobile industry and a few embedded systems hacking circles.

For years, the mobile industry and its suppliers have fought an ongoing battle with people hacking mobile devices. This mainly started out with greyhat crackers from the car radio scene supplying tools to ‘reset’ your car radio PIN code (I’m not sure whether this was really driven by thieves or by end users).

This matured into SIMlock and IMEI hacking on handsets at the end of the 1990s, driven by very cheap pre-pay handsets. By the way, I was never a big fan of SIMlock, as it just increased targeting of the devices, and it wasn’t that sensible anyway: at the time we didn’t have the hardware available in the industry to protect it properly. Mobile phone theft (and re-enablement) was another driver.

Ordinary users were sufficiently motivated to want to pay to remove their SIMlocks and a cottage industry built up to serve it, supplied by tools from some very clever hackers and groups. This made some people very, very rich.

As skills have grown on both sides, the war between industry and the hacking community has grown increasingly sophisticated and tactical. Today it is mostly being played out within the rooting and jailbreaking community, but it looks like so-called ‘kill switch’ and anti-theft mechanisms will be a new motivator.

Anyway, I hope you find this taster presentation to the subject interesting!

Samsung Galaxy SIII data wiping on Android – just by visiting a website

Yesterday, Ravi Borgaonkar (@raviborgaonkar) released to the world an issue that could be one of the most serious to hit the mobile industry in a very long time.

Ravi, who is based at the Technical University of Berlin’s SecT lab (and has previously been in the news for his research around hacking femtocells), discovered that there are proprietary codes for wiping devices entirely (this is not a USSD code as per the spec, as has incorrectly been reported). Ironically for the mobile industry, SecT is sponsored by Deutsche Telekom.

These commands can be entered via the user interface, but can also be triggered remotely, by visiting a rigged webpage which calls the dialler function. Normally, the user would have to physically confirm the number to dial by pressing the green receiver button, but not in this case.

Currently, reports are coming in saying that a number of Android devices may be affected, including not only Samsung devices (the Galaxy SIII being amongst them) but also the HTC One X. It seems that devices in the UK may not be affected as they’re not using Samsung’s TouchWiz user interface, but details are still emerging.

Dangerous disclosure?

Ravi apparently made a responsible disclosure to a number of affected manufacturers and operators, but after apparently getting frustrated with months of delays from certain operators decided to go public. My take is that there appears to have been a failing on both sides here. Without knowing all the details it is difficult to make a judgement; however, I feel that making this public when the vulnerability is so easy to reproduce and has such massive destructive implications for users is bordering on criminal. Equally, if an operator has sat on this fix for months for no good reason (and I don’t know if that is the case), then that is just as bad.

Just imagine how you would feel if you lost all of your pictures on your phone just because you visited a website.

How to test if you’re vulnerable and how to fix it temporarily

German mobile security researcher Collin Mulliner has released a temporary fix to Google Play called ‘TelStop’, which people can download if they’re concerned.

A test page set up by Ravi is available which sends the user interface command to display the IMEI number (*#06#). Just navigate with your phone to this link: http://www.isk.kth.se/~rbbo/testussd.html – if your IMEI number is displayed without you doing anything, then you are vulnerable.

17:00 26/09/12 Update: Ravi’s test page was using Google Analytics to track who is testing. I have setup a separate test page that does not use analytics. Just point your mobile browser at: http://mobilephonesecurity.org/tel

More detail can be found in this article and a video of Ravi’s presentation is below:


Blackhat & DEFCON19 – mobile presentations

With the main sessions of Blackhat starting tomorrow morning (Las Vegas time), I’ve posted the mobile-related talks here for those who are interested.

The mobile hacking training course which took place today (I think) was sold out. What has interested me the most is the increase in interest from the security and hacking community in all types of mobile platforms. As you’ll see below, there are really quite a few presentations focussed on mobile. Also, as smartphones become more advanced, a lot of the other presentations not listed here become relevant (for example web application security). I just want to highlight two of the presentations: ‘Aerial Cyber Apocalypse’ which will demonstrate a UAV equipped with WiFi and GSM hacking capabilities (see the picture below) and ‘War Texting: Identifying and Interacting with Devices on the Telephone Network’ which shows attacks on car systems which use SMS to remote control the car. Fun in the sun.

From: http://www.geek.com/articles/geek-pick/wasp-the-linux-powered-flying-spy-drone-that-cracks-wi-fi-gsm-netwokrs-20110729/

Blackhat USA 2011 (Briefings 3-4th August)
Schedule: https://www.blackhat.com/html/bh-us-11/bh-us-11-schedule.html

Don A. Bailey:
War Texting: Identifying and Interacting with Devices on the Telephone Network

Karsten Nohl + Chris Tarnovsky:
Reviving smart card analysis

Andrey Belenko
Overcoming IOS Data Protection to Re-enable iPhone Forensics

Ravi Borgaonkar + Nico Golde + Kevin Redon:
Femtocells: A poisonous needle in the operator’s hay stack

Dino Dai Zovi:
Apple iOS Security Evaluation: Vulnerability Analysis and Data Encryption

Richard Perkins + Mike Tassey:
Aerial Cyber Apocalypse: If we can do it… they can too.

Long Le + Thanh Nguyen:
ARM exploitation ROPmap

Jennifer Granick:
The Law of Mobile Privacy and Security

Riley Hassell + Shane Macaulay:
Hacking Androids for Profit

Tyler Shields + Anthony Lineberry + Charlie Miller + Chris Wysopal + Dino Dai Zovi + Ralf-Phillipp Weinmann + Nick Depetrillo + Don Bailey:
Owning Your Phone at Every Layer – A Mobile Security Panel

DEFCON19: (4th-7th August)
Schedule: https://www.defcon.org/html/defcon-19/dc-19-index.html

Abusing HTML5

Cellular Privacy: A Forensic Analysis of Android Network Traffic

Getting SSLizzard

This is REALLY not the droid you’re looking for…

Mobile App Moolah: Profit taking with Mobile Malware

Wireless Aerial Surveillance Platform

Seven Ways to Hang Yourself with Google Android

Staying Connected during a Revolution or Disaster

So, plenty to keep everyone going then! It’ll be interesting to see what the next few weeks bring.

Chrome app security model is broken

I’m worried. I’m worried for a lot of users who’ve installed Chrome Apps. I was idly browsing the Apps in the Chrome web store the other day and came across the popular Super Mario 2 app on the front page (over 14k users). I have to admit, I actually installed the app (extension) myself, so let me explain the user (and security) experience.

I saw the big splash screen for the flash game and thought I’d give it a try. There is a big install button (see picture). Installation is pretty instantaneous. As I looked at the screen, I saw the box to the bottom right. “This extension can access: Your data on all websites, Your bookmarks, Your browsing history”. I think I can legitimately give my mental response as “WTF!?! This is a game! What does it need access to all this for?”. I then immediately took steps to remove the app.

Removing the app

Disabling and removing the app was not as straightforward as you would think, which was also quite annoying. The Chrome web store also includes ‘extensions’ to Chrome (the extensions gallery), and it is not easily visible to a user where these are installed. In fact, you have to go to Settings -> Tools -> Extensions to do anything about it. Normal installed Chrome apps are listed when you open a new tab (ctrl-t), but this is not the case for extensions.

Permissions by default

Having removed the app, I set about investigating precisely what I had exposed this app to and the implications. Under the “Learn more” link, I found a full description of permissions that could be allowed by an application. I had to cross-reference these back to what the app / extension had asked for. The picture below shows the permissions (expanded) for the Super Mario 2 game.

I don’t want to go into great detail about the ins and outs of what some people would term “informed consent” or “notified consent”, but the bottom line is that a hell of a lot is being given away with very little responsibility on Google’s part. After all, to the average user, the Chrome ‘chrome’ is an implicit guarantor of trust. It’s a Google app store, so the apps must have been checked out by Google, right?

I also won’t go into the top line “All data on your computer…” which installs an NPAPI plug-in which is essentially gameover in terms of access to your computer. To be fair to Google, their developer guidelines (below) state that any applications using this permission will be manually checked by Google. However, there is an implication there that the other applications and extensions aren’t.

So let’s concentrate on the permissions that are requested by the game.

  1. The first one, ‘Your bookmarks’, allows not only reading, but modification of and additions to your bookmarks. Anyone fancy being set up? A legitimate link to your bank quietly swapped for a phishing site?
  2. The second item, ‘Your browsing history’, for most people is going to reveal a lot. Very quickly, a motivated attacker is going to know where you live from your searches on Google Maps, what illnesses you’re suffering from, and so on. There is a note here that this permission request is ‘often a by-product of an item needing to open new tabs or windows’. Most engineers would call this, frankly, a half-arsed effort.
  3. The third item, ‘Your data on all websites’, seems to give the application permission to access anything that I’m accessing. Then, the big yellow caution triangle: ‘Besides seeing all your pages, this item could use your credentials (cookies) to request your data from websites’. Woah. Run that one by me again? That’s a pretty big one. Basically, your attacker is home and dry. Lots of different types of attack exist to intercept cookies which will automatically authenticate a user to a website; this has been demonstrated against high-profile sites such as Twitter and Facebook using tools such as Firesheep. Given that it is a major threat vector, surely Google would have properly considered this in their permissioning and application acceptance model?
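
As a thought experiment, the install-time gate argued for here could be as simple as flagging the high-risk permissions and requiring an explicit user decision rather than granting everything silently. The permission strings below mirror Chrome’s wording; the risk ranking and function names are my own invention.

```python
# Sketch of install-time permission gating. HIGH_RISK is an illustrative
# ranking, not anything Chrome actually ships.

HIGH_RISK = {
    "Your data on all websites",
    "Your browsing history",
    "Your bookmarks",
}

def review_install(app_name, requested, confirm):
    """confirm is a callback that asks the user about risky permissions."""
    risky = sorted(p for p in requested if p in HIGH_RISK)
    if risky and not confirm(app_name, risky):
        return "blocked"
    return "installed"

# A flash game asking for the full set should at least trigger a prompt:
decision = review_install(
    "Super Mario 2",
    ["Your data on all websites", "Your browsing history", "Your bookmarks"],
    confirm=lambda app, perms: False,  # the user declines
)
print(decision)  # prints "blocked"
```

The point is not the prompt itself but that a one-click install with this permission set never reaches the user’s browser without a decision being made somewhere.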

It’s pretty obvious how potentially bad the Mario extension could be, particularly when this is supposed to be just a flash game. What really irks me though is the ‘permissions by default’ installation. You click one button and it’s there, almost immediately, with no prompt. Now, I’m not the greatest fan of prompts, but there are times when prompts are appropriate, and install time is actually one of them. It gives me the chance to review what I’ve selected and make a decision, especially if I hadn’t spotted that information on a busy and cluttered webpage. I hear you all telling me that no-one reviews permissions statements in Android apps, so why would they do it here? Yes, I partially agree. Human behaviour is such that if there is a hurdle in front of us and the motivation to go after the fantastic ‘dancing pigs’ application is sufficiently high, we’ll jump over the hurdle at any cost. There is also a danger that developers will go down the route they have with facebook applications – users accept all the permissions or they don’t get dancing pigs. Users will more than likely choose dancing pigs (see here for more info on dancing pigs).

The beauty of a well designed policy framework

So we’re not in an ideal world and everyone knows that. I firmly believe that there is a role for arbitration. Users are not security experts and are unlikely to make sensible decisions when faced with a list of technical functionality. However, the user must be firmly in control of the ultimate decision of what goes on their machine. If users could have a little security angel on their shoulder to advise them what to do next, that would give them much more peace of mind. This is where configurable policy frameworks come in. A fair bit of work has gone on in this area in the mobile industry through OMTP’s BONDI (now merged with JIL to become WAC) and also in the W3C (and sadly just stopped in the Device APIs and Policy working group). The EU webinos project is also looking at a policy framework.

The policy framework acts in its basic sense as a sort of firewall. It can be configured to blacklist or whitelist URIs to protect the user from maliciousness, or it can go to a greater level of detail and block access to specific functionality. In combination with well-designed APIs it can act in a better way than a firewall – rather than just blocking access, it gives a response to the developer that the policy framework prevented access to the function (allowing the application to fail gracefully rather than just hang).

Third party providers that the user trusts (such as child protection charities, anti-virus vendors and so on) could provide policy to the user which is tailored to their needs. ‘Never allow my location to be released’, ‘only allow googlemaps to see my location’, ‘only allow a list of companies selected by ‘Which?’ to use tracking cookies’ – these are automated policy rules which are more realistic and easy for users to understand, and which actually assist and advance user security.
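
A toy version of such a framework, in the spirit of the BONDI/WAC work described above, might look like this. The rule format and function names are invented for illustration; the key property is the structured denial, which lets an application degrade gracefully instead of hanging.

```python
# Toy policy framework: per-API rules with URI allow-lists. A denial
# returns a reason rather than silently failing, so the calling
# application can fail gracefully.

POLICY = {
    "geolocation": {"allow": ["https://maps.google.com"]},
    "contacts":    {"allow": []},  # 'never allow my contacts to be released'
}

def request_api(api, origin):
    rule = POLICY.get(api)
    if rule is None:
        return {"granted": False, "reason": "no policy defined for " + api}
    if origin in rule["allow"]:
        return {"granted": True}
    # The denial carries a reason, so the caller can degrade gracefully.
    return {"granted": False, "reason": api + " denied for " + origin}

print(request_api("geolocation", "https://maps.google.com"))  # granted
print(request_api("geolocation", "https://evil.example"))     # denied
```

A trusted third party could then ship the `POLICY` table itself, which is exactly the ‘security angel on the shoulder’ role described above.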

Lessons for Google

Takedown – Looking at some of the comments from users on the Super Mario game, it is pretty clear people aren’t happy, with users mentioning words like ‘virus’ and ‘scam’. The game has been up there since April; now, at the end of May, why haven’t Google done anything about it? The game doesn’t seem to be official, so it is highly likely to be in breach of Nintendo’s copyright. Again, why is this allowed in the Chrome web store? Is there any policing of the web store at all? Do Google respond to user reports of potentially malicious applications in a timely manner?

Permissions and Access – You should not have to open up permissions to your entire browsing history for an application to open a new tab! This is really, really bad security and privacy design.

Given what is happening with the evident permissiveness of Android and the Chrome web store, Google would do well to sit up and start looking at some better solutions, otherwise they could be staring regulation in the face.

Bootnote

I mentioned this to F-Secure’s Mikko Hypponen (@mikkohypponen) on Twitter and there were some good responses from his followers. @ArdaXi quite fairly pointed out that just to open a new window, a developer needed to ask for permission to access ‘Your browsing history’ (as discussed above). @JakeLSlater made the point that “google seem to be suggesting content not their responsibility, surely if hosted in CWS it has to be?” – I’m inclined to agree; they have at least some degree of responsibility if they are promoting it to users.

I notice that Google seem to have removed the offending application from the web store too. I think this followed MSNBC’s great article ‘Super Mario’ runs amok in Chrome Web app store after they picked up on my link through Mikko. I think it may be fair to say that the extension has been judged malicious.

Android@Home – Now I’ll hack your house (part 1)

Very exciting news from Google I/O in San Francisco. Android@Home has been announced, a logical move and one which I would wager will be highly successful. With Google TV set to emerge in homes this year and a plan by Google to merge their phone, tablet and Google TV code into one build codenamed “Ice Cream Sandwich” at the end of the year, the company seem well positioned to take on home control. Google TV offers users the ability to control their TV from their Android phone amongst plenty of other features. This basic feature, to use your phone as a remote control for the TV has been something that users have been crying out for for years, with nobody paying any real attention to it. I do remember a great program called Nevo on the iPAQ on which you could control masses of IR equipment. I gained much amusement from changing the TV in the pub and works canteen to the confusion of the staff there.

Cost, Complexity and Fragmentation

Yet home control has never really caught on. I put this down to a number of factors (which the mobile industry is well used to hearing): fragmentation, cost and complexity. These three factors have combined so far to prevent the market maturing in any sensible way. Yes, there are home control systems out there, but they are all pretty much proprietary. I’ve been considering doing some home control for years, but the components are over-priced and I can’t interface with them using my own software. Take the example of a remote-controlled socket kit from the UK’s B&Q, or a remote lighting control. Everything needs its own remote control. We want to use our mobiles! No doubt this is true of the designers and manufacturers of these products too, which is why I think Android@Home is going to be a roaring success. Others such as Bose may continue to sell the whole integrated system, continuing to target the niche high-end market, but ultimately market forces will probably force them to ditch their proprietary systems.

Setting up IP cameras in your home now also involves putting some software on your PC. A lot of users have switched to much better open source solutions such as iSpy just because of the poor quality and complexity of the setup of the proprietary (or badged) PC software.

So, in summary, as a normal person I don’t want to pay loads of money, I don’t want it to be difficult to setup and I want to run everything from the same software on my mobile phone.

In part 2, I will discuss some of the uses and why security is critical.


Confused Users and Insecure Platforms – the Perfect Storm Approaches

(picture from: http://commons.wikimedia.org/wiki/File:Storm_Approaching_Anna_Bay.JPG)

The evolution of hacking against mobile devices has been as rapid as the evolution of the device technology itself. Traditionally, mobile phone hacking centred on the ‘embedded’ part of the phone, that is, the electronic hardware. The software and firmware within a device was proprietary to that particular manufacturer, so hackers and hacking groups specialised in a particular area. The knowledge and expertise needed to crack devices was very high and the work technically complex. As a result, it was difficult for outsiders to understand, and even though there was a large grey and black market centred on SIMlock removal and IMEI number changing, the media never reported it. Large amounts of money were made, with much of it going directly up the chain to the top. As the hacking technology developed, protection techniques were established to ensure that the revenue chain always led back to the originator of the tool. Ordinary users just knew that they could take their handset to a market and get it unlocked. The perception was that hacking a phone was easy.

The first real mainstream attention that embedded hacking got was around the iPhone. Existing hardware hacking groups were involved in assisting George Hotz, a 17-year-old at the time, to create a hardware crack which would enable the removal of the SIMlock and ‘jailbreak’ the device, allowing non-Apple-approved applications to be installed.

The public perception of hacking is extremely confused. The recent “phone hacking” scandal in the UK was really unauthorised access to voicemails on the servers of the mobile operators. Users don’t really understand where they are with regard to their own phone security or what they need to do. The anti-virus vendors in particular are responsible for sabre-rattling with respect to the threat to mobile devices. They have repeatedly declared “20xx” (choose a year) as “the year of the mobile virus”. This is simply false and shows a complete lack of understanding of the technologies involved. Indeed in 2004 one anti-virus solution completely filled the application memory of a phone such that no other application could be installed. Perfect protection then! There has been no mass malware outbreak to-date. The only ‘major’ incident was various variants of ‘commwarrior’ which was an MMS virus which propagated via users’ phonebooks. The anti-virus vendors have now been so discredited in the mobile space that they have used up their opportunities for funding and convincing users that they need to purchase protection. Ironically, the year is upon us where anti-virus would provide real value-add to users.

The perfect storm is approaching. The unification of devices under common platforms such as Google’s Android, easy application and widget development on an insecure platform (the web) and weak application policy mechanisms (such as deferring key decisions on permissions to the user) are all leading users down a dangerous path. There are mitigating factors though. The knowledge inherited from the days of PC viruses has allowed the development of some good security defence technologies and processes. Apple, at one end of the scale, has a very rigorous application inspection process, both automatic and manual, whereas Android’s is much more open and therefore more exposed to malware authors. Sideloading of applications that are not digitally signed is also generally restricted. In early March 2011, DroidDream was identified in around 50 applications supplied by three developers to the Android Market. These applications were originally legitimate but had been cracked and dressed up as Trojan versions of the originals, and were only spotted because someone noticed that the author was different from the original. Google took immediate action to remove the apps and ban the developers, but the malware is still out in the field at the time of writing – an estimated 50,000 to 200,000 downloads for one of the applications makes this quite a severe incident. Other incidents over the past couple of years include suspected phishing applications on Android, attempts at creating mobile botnets in China, malicious multi-part SMS messages which crash phones, and rogue ‘Hello Kitty’ wallpaper applications which suck out user data and upload it to IP addresses in China.

It is clear that hacking against mobile devices is a developing discipline. The fight seems to be being won in the hardware space, but much more work needs to be done to protect users in the application space – and now. And the bottom line for consumers? They just want to be secure, without any hassle.