The teeth of the UK’s IoT security legislation – understanding the draft regulation text

It’s a nice spring Saturday in 2023 and what am I doing? I am sat inside listening to classical music and writing this instead of being outside enjoying the sunshine! But it is important…

The government announcement today, ‘Starting gun fired on preparations for new product security regime‘, published the draft secondary legislation details (pdf) to accompany the Product Security and Telecommunications Infrastructure Act. More specifically: ‘The Product Security and Telecommunications Infrastructure (Security Requirements for Relevant Connectable Products) Regulations’, subject to parliamentary approval. The regime details are further explained here. In terms of IoT security, we have been waiting for this for a long time. I can tell you that in January 2020, just before the pandemic – which now seems like a lifetime ago – I was discussing with government colleagues what we really mean by ‘password’ and what should be in and out of scope for this part of the work.

For some background to this, please read the following blog, which also has a bunch of links to the previous history. You can also follow the Twitter thread I’ve been running on the legislative progress. The relevant government department is now DSIT (the Department for Science, Innovation and Technology) and the work is based on the Code of Practice for Consumer IoT Security, which evolved into the international standard ETSI EN 303 645.

So what is this new stuff? Well, the primary legislation – the Product Security and Telecommunications Infrastructure Act (2022) – was passed in December 2022 and provides the overall framework. We’re looking specifically at Part 1 of the Act, which deals with Product Security. The secondary legislation contains the specific text of the technical requirements that the government is seeking to regulate. The reason for this approach is pretty straightforward: secondary legislation allows for adaptation to accommodate the way that technology rapidly develops, without having to return to parliament for a completely new legislative process. This makes obvious sense and means that the legislation is both robust from a longevity perspective – there’ll always be a need to govern product security – and flexible enough to adapt to changing circumstances, e.g. the market entirely gets rid of passwords for some new technology, or the bar of security should be raised by adding new requirements.

What do the draft regulations say?

At a high level, here’s what’s in the draft (and I’m paraphrasing here):

  • There are three requirements – no default passwords, having and acting on a vulnerability disclosure policy, and being transparent about the minimum time consumers will get software updates.
  • The requirements come into force on the 29th of April 2024 – one year from the announcement – and that applies to all three of them; they are not staggered in any way.
  • If there are multiple manufacturers involved in a connected product, they all have to meet the requirements.

What are the technical requirements?

Schedule 1 outlines the security requirements for manufacturers. I’ve covered off the details of the requirements before, but to drill down a little bit into what the draft regulations say on the technical side:

On passwords:

The scope is pre-installed software on a device and hardware – but not when it’s in a factory default state. To be clear on this – it would be permissible to have a default password for pre-initialising a device, but only to put it into a state where a unique password could be chosen. It’s not my ideal scenario if I’m honest, but it is practical – if you’re in a position where you deploy a lot of devices, especially to consumers, there are a lot of situations where devices need to be reset, including when the device is first turned on. I dream of the day that we don’t have to use passwords at all, but it is still a long way off and we can only play the cards we’re dealt. If you’re a manufacturer reading this, think very carefully about what you can do to avoid using passwords and what you’re doing in that factory initialisation state.

On the password detail:

  • They’ve got to be unique or defined by the user of the product.
  • They cannot be easily guessable or based on incremental counters etc. (i.e. all the tricks that vendors use that they think are clever – XOR anyone? See the sketch after this list.)
  • Passwords don’t include cryptographic keys, pairing data, API keys etc. (think about Bluetooth, Zigbee and so on – all important, but not the target of this particular requirement).
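
To make those anti-patterns concrete, here’s a minimal illustrative sketch – my own, not text from the regulations or ETSI EN 303 645 – contrasting guessable per-device passwords with one generated from a cryptographically secure random source:

    # Illustrative sketch only - the regulations don't mandate any mechanism.
    import secrets
    import string

    def bad_password(serial_number: int) -> str:
        # Anti-pattern: incremental counter - trivially guessable across a fleet.
        return f"admin{serial_number:04d}"

    def also_bad_password(mac: bytes) -> str:
        # Anti-pattern: 'clever' derivation from public data (the XOR trick).
        # The MAC address is broadcast, so anyone can reproduce the password.
        return bytes(b ^ 0x5A for b in mac).hex()

    def acceptable_password(length: int = 12) -> str:
        # Unique per device: generated from a CSPRNG at manufacture time and
        # printed on the device label, not derivable from public data.
        alphabet = string.ascii_letters + string.digits
        return "".join(secrets.choice(alphabet) for _ in range(length))

    print(bad_password(1234))                                 # admin1234 - guessable
    print(also_bad_password(bytes.fromhex("a4c1380001ff")))   # derivable from the MAC
    print(acceptable_password())                              # unpredictable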

On how to report security issues (vulnerability disclosure):

The scope is the hardware and pre-installed software of the product, as well as installed software (e.g. apps) that is needed for the IoT product to work properly.

  • There needs to be a point of contact for people to report security issues in a clear and accessible way (the draft outlines what is expected in that sense – see the example after this list).
  • The manufacturer has to acknowledge the report and then give status reports until the issue is resolved.
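
The regulations don’t prescribe any particular mechanism for that point of contact, but one existing convention that fits the ‘clear and accessible’ requirement is a security.txt file (RFC 9116) served from the manufacturer’s website. A sketch, with placeholder values only:

    # Example /.well-known/security.txt (RFC 9116) - all values are placeholders.
    Contact: mailto:security@example-iot-vendor.com
    Contact: https://example-iot-vendor.com/report-a-vulnerability
    Policy: https://example-iot-vendor.com/vulnerability-disclosure-policy
    Expires: 2024-12-31T23:59:59.000Z
    Preferred-Languages: en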

I hope this is a further wake-up call – my company has been producing research and data on this topic for the IoT Security Foundation for the past 5 years. This year’s report showed that only 27% of IoT manufacturers had a vulnerability disclosure policy in place. Let’s see how it progresses this year.

On information on minimum security update periods:

The scope is as before – along with software developed in connection with the product’s purpose (so think of, say, a smartphone app to control the IoT device).

  • The information on the support period must be published.
  • It must be accessible, clear and transparent (and they explain what that means).
  • When a product is advertised for sale – the manufacturer needs to list this information in the specs (I’m heavily paraphrasing here).
  • The manufacturer can’t shorten the update lifespan after they’ve published it (naughty!!)
  • If the update support period is extended, then the manufacturer needs to update that information and publish it (an illustrative example follows this list).
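
None of this mandates a particular publication format. Purely as an illustration – the field names here are my own invention, not from the draft regulations – a manufacturer might publish the defined support period in a machine-readable form alongside the human-readable product specs:

    {
      "product": "Example Smart Doorbell v2",
      "support_period_published_on": "2024-04-29",
      "security_updates_until": "2028-04-29",
      "note": "The period may be extended and re-published, but never shortened."
    }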

After this, there is another Schedule (Schedule 2) which outlines the ‘Conditions for Deemed Compliance with Security Requirements’. This is very detailed but there are some important points within it which essentially boil down to the following:

  • A manufacturer complies with the requirements by implementing the relevant ETSI EN 303 645 (IoT security) provisions or ISO/IEC 29147 (vulnerability disclosure) paragraphs which are listed.

Schedule 3 then lists things that are out of scope – ‘Excepted connectable products’. There are some elements related to Brexit / free movement of goods, but I won’t go into those here; instead I’ll concentrate on the product parts:

  • Medical devices – these are covered by the Medical Devices Regulations 2002. However! If the bit that is regulated is only the software, then the hardware requirements of this IoT regulation apply.
  • Smart meters – if they’ve been installed by a licensed gas / electricity installer (which is defined) and have been through an assurance scheme (e.g. from NCSC), they’re out of scope.
  • Computers – desktops, laptops and tablets that aren’t connectable to the mobile network are out of scope. However, if these types of products are designed for children under 14, then they are in scope.

There’s been a lot of debate about this over the years and I think I can summarise it by saying – we’re looking at IoT products. That is predominantly where the problem is and we’ve got to draw the lines somewhere. It’s sensible to keep already regulated domains out of scope, but obviously there are some grey areas that you can easily think of (think wellness products vs medical products, or whether you consider cars to be large IoT devices or not). I guess the key message is – some of this will evolve over time too. The beauty of secondary legislation is that it can shift to react to how this fantastic technology embeds itself in our lives in the future.

The final Schedule explains what is needed for Statements of Compliance – i.e. to confirm that the product does indeed meet the requirements.

The draft regulations have been shared with the World Trade Organisation (WTO), under the country’s obligations under the Technical Barriers to Trade (TBT) Agreement, as well as with the European Commission. This is all really important because in no way does this legislation put blockers on trade around the world – it is designed to help protect consumers from poorly secured products. With international standards referenced within the regulations, it ensures adoption of internationally agreed technical elements, reducing the risk of fragmentation and divergence between markets around the world.

To answer a couple of obvious questions that I’ve seen mentioned before in relation to PSTI:

Why are there not more technical requirements in the draft regulations?

On adding new requirements, here’s my view – the regulations refer to ETSI EN 303 645 and the Code of Practice. If you’re a manufacturer of IoT solutions, you should already be implementing those requirements anyway. The top three items that have been adopted into the draft regulations are the minimum and the most impactful in terms of the issues we face. It doesn’t really matter if you’ve got great hardware security in place if you’ve got admin/admin default password access across the device or service, no way for security researchers to contact you if they’ve discovered a vulnerability in your product, or you never bother updating the device.

The technical requirements were written ages ago, this is old stuff, what about [insert buzzword]?

This is not true – if you go through the Code of Practice and also the ETSI spec, they were specifically designed primarily to deliver outcomes – e.g. things we wanted to stop happening or things we wanted to see happen. A lot of it doesn’t specifically say how you have to get there. I’ve talked about this before, but in summary, all of the requirements hold true whether this was 2016 or 2030, e.g.

  • I do not want default passwords in my device, someone will get in because they can be discovered easily.
  • Hardware security provides a foundation that enables a device to boot and operate more securely than without it.
  • Being transparent about software updates gives me, the consumer, more information about whether I want to buy a product.
  • Having a public security contact at a company allows security researchers to report vulnerabilities, allowing problems to be solved in an efficient and timely manner, ultimately protecting the users of the products that company makes.

And so on… So don’t let anyone say that the requirements are out of date – whatever the technology (and I’ll go further than consumer to pretty much all IoT) – these requirements will continue to be directly applicable as long as we have electronics, software and malicious actors.

Next Steps

So what are the next steps? Well, the formal text on the government website states ‘Following their approval by Parliament, and the conclusion of the UK’s notification commitments under international treaties, the consumer connectable product security regime will enter into effect on 29 April 2024.’ The countdown has now well and truly started.

The Long Road to a Law on Product Security in the UK

As the UK’s Product Security and Telecommunications Infrastructure Bill entered Parliament today, I had some time to reflect on how far we’ve come.

I was reminded that this day was a long time coming. The person who triggered this was someone I worked with when I was at Panasonic and he was at Nokia. Twenty years ago, we were sat in one of the smallest meeting rooms at Panasonic Mobile, next to the smoking room, as it was the only one available – the Head of Security Research from Vodafone, the Head of Security of the GSMA, the GSMA’s Security Group Chair and me.

The topic was hardware (IMEI) security and more broadly mobile phone security and how to deal with embedded systems hacking at an industry level. What kind of new measures could be brought in that would genuinely help to reduce the problem of mobile phone theft and make phones more secure? As they say, from small acorns, mighty oaks grow. I’d also argue it is probably quite a bit about persistence over a very long time.

It takes a very long time to make meaningful changes and while it’s easy to point out flaws, it’s harder to build new technology that addresses those in a game-changing way with complete industry buy-in. That’s pretty much what recommendations and standards bodies do, with the aim of seeking consensus – not complete agreement, but at least broad agreement on the means to effect large scale changes. Gradually and over a long period of time.

So we did that. Both in the Trusted Computing Group (TCG) and through the work of OMTP’s TR1: Advanced Trusted Execution Environment, which led to chip-level changes across the industry and ushered in a new era of hardware security in the mobile phone industry, providing the foundation of future trust. All of this work was nearly complete before an iPhone was on the market, I might add, and well before Android! From our published work, we expected it to be in phones from around 2012 onwards, and even then it took a little while before those OS providers hardened their systems sufficiently to be classed as really good security – but I should add that they have done a really good job of security leadership themselves since then.

With saturation in the smartphone space, around 2013/2014 the industry’s focus moved increasingly to the M2M (machine-to-machine) or IoT (Internet of Things) space, which had existed for a while but on a much smaller scale. A lot of things were coming together then – hardware was getting cheaper and more capable, and it became increasingly viable to create more connected objects or things. But what we also saw were increasing numbers of companies ‘digitising’ – a washing machine vendor worried that it would be put out of business if it didn’t revolutionise its product by connecting it to the internet. That’s all well and good, and I’m all for innovation, but the reality was that products were being put on the market that were really poor. With no experience of creating connected products, companies bought in ready-made solutions and platforms which came with little-to-no security measures. All the ports were exposed to the internet, default passwords were rife and never got changed, and software updates – what are they? It was, and still is in many parts of the market, a mess.

Remember that these were new products being put into a market that was already a mess – for example, most webcams that had been sold for years were easy to access remotely, and lots of tools had been created to make it even easier to discover and get into these devices, allowing intrusion into people’s private lives, their homes and their children’s lives.

Work began in organisations like the GSMA on creating security requirements for IoT that would force change. At the same time, hardware companies started to transfer their knowledge from the smartphone space into the hardware they were creating for the growing IoT sector. The IoT Security Foundation was established in late 2015 and the UK’s National Cyber Security Strategy for 2016-2021 stated that “the UK is more secure as a result of technology, products and services having cyber security designed into them by default”, setting us down the path that led to the legislation introduced today. All of that work was an evolution and reinforcement of the growing body of product security recommendations that had already been created over a long period of time. Another thing I’ve observed is that in any particular time period, independent groups of people are exposed to the same set of issues, with the same set of tools and technologies at their disposal to rectify those issues. They can therefore all logically come to the same conclusions on things like how best to tackle the problem of IoT security.

In 2016, the Mirai attack happened (more info in the links below) and that helped to galvanise the support of organisations and politicians in understanding that large-scale insecurity in connected devices was a big and growing problem – a problem that was (mostly) easily solvable too. Other news stories and issues around IoT just added to this corpus of evidence that things weren’t right. You can also read more about the Code of Practice we created in the UK in the links below, but the key takeaway is this – there are small but fundamental changes that can raise the bar of cybersecurity substantially, reducing harm in a big way. This ranges from taking a firm stance on out-of-date and dangerous business practices – companies and individuals being lazy and taking the easy route on things like default passwords and the hardware and software used in product development – to modernising the way that companies deal with security researchers, i.e. not threatening them, and actually dealing with security issues that are reported by the good guys. So creating meaningful change is also about taking a stand against baked-in poor practice which has become endemic, so deeply entrenched throughout the world and its supply chains that it seems impossible to deal with.

I’ll never forget one meeting where I presented a draft of the Code of Practice and a guy from a technology company said “what we need is user education, not this”. I felt like I was on really solid ground when I was able to say “no, that’s rubbish. We need products that are built properly. For over 20 years, people have been saying we only need user education – it is not the answer”. I was empowered mainly because I could demonstrably show that user education hadn’t worked, and perhaps that’s depressingly one of the reasons why we’re finally seeing change. Only in the face of obvious failure will things start to get better. But maybe I’m being too cynical. A head of steam had been building for years. For example, I was only able to win arguments about vulnerability disclosure, and successfully counter “never talk to the hackers”, because of the work of lots of people in the security research community who have fought for years to normalise vulnerability reporting to companies in the face of threats from lawyers and, in some cases, even arrest. And now we’re about to make it law that companies have to allow vulnerability reporting – and that they must act on it. Wow, just let that sink in for a second.

The hacking and security research community contains some of the brightest minds and freest thinkers, and the work of this community has done more than anything else to effect change. It may not be, in the words of someone I spoke to last week, ‘professional’ – when what I think they mean is ‘convenient’. The big splash news stories about hacks on insecure products actually force change in quite a big and public way, and sadly the truth is that change wouldn’t have happened if it wasn’t for these people making it public, because it would mostly have been swept under the carpet by the companies. It is that inconvenient truth that often makes large companies uncomfortable – fundamental change is scary, change equals cost and change makes my job harder. I’m not sure this culture will ever really change, but uniquely in the tech world we have this counter-balance when it comes to security – we have people who actively break things and are not part of an established corporate ecosystem that inherently discourages change.

Over the past 10 years, we’ve seen a massive change in attitudes towards the hacking community as cyber security becomes a real human safety concern and our reliance on the internet becomes almost existential for governments and citizens. They’re now seen as part of the solution and governments have turned to the policy-minded people in that community to help them secure their future economies and to protect their vital services. The security research community also needs the lawyers and civil servants – because they know how to write legislation, they know how to talk to politicians and they can fit everything into the jigsaw puzzle of existing regulation, making sure that everything works! So what I’ve also had reinforced in me is a huge respect for the broad range of skills that are needed to actually get stuff done and most of those are not actually the engineering or security bit.

A lot of the current drive towards supporting product security is now unfortunately driven by fear. There is a big ticking clock when it comes to insecure connected devices in the market. The alarm attached to that ticking clock is catastrophe – it could be ransomware that, as an onward impact, causes large-scale deaths in short order, or it could be major economic damage, whether deliberate or unintended. A ‘black swan of black swan events’, as my friend calls it. Whatever it is, it isn’t pretty. The initial warnings have been there for a while now from various cyber attacks. Across a range of fronts, positive work has been taking place to secure supply chains, encourage ‘secure by design / default’ in the product development lifecycle and increase resilience in networks – which is the right thing to do. The security should be commensurate with usage, and in reality the whole world really, really relies on the internet for literally everything in daily life.

This is another factor in the success of current cyber security work around the world. I work with people from all corners of the earth, particularly in the GSMA’s Fraud and Security Group. Everyone has the same set of issues – there are fraudsters in every country, everyone is worried about their family’s privacy, everyone wants to be safe. This makes the topic less political in the IoT space than people might imagine, and every country’s government wants its citizens to be safe. This is something that everyone can agree on and it makes standards setting and policy making a whole lot easier. With leadership from a number of countries (not just the UK, but I have to say I’m incredibly proud to be British when it comes to the great work on cyber security), we’re seeing massive defragmentation in standards, to the point where there is broad global consensus on what good looks like and what we expect secure products and services to look like. If you step back and think about it – thousands and thousands of individuals working to make the world a safer place, for everyone. So the acorn twenty years ago was actually lots of acorns, and the oak tree is actually a forest.

So to everyone working on IoT security around the world I raise a glass – Cheers! and keep up the fantastic work.

My RSA talk on the UK’s Code of Practice for Consumer IoT Security in 2019.

Further reading:

The Real World is Not Real: Manipulation of Data and People in the Future Internet

On the 24th of January 2018 I gave my professorial lecture at York St John University in the UK. I decided to choose the subject of fakeness: the manipulation of data by attackers and its potential impact on people and machines if trusted. The transcript is below. I’ve added in additional links so you can see sources and informative links for further reading. I hope you enjoy the read, feel free to leave comments or tweet me.

The Real World is Not Real: Manipulation of Data and People in the Future Internet
David Rogers

Introduction

Thank you all for coming today to my inaugural lecture. I am honoured and humbled both to be in this position and that you are here today to listen. I’m also particularly grateful to Dr. Justin McKeown for engaging me in the first place with such a great university and for our Vice Chancellor Karen Stanton’s support.

We have been on a steady path towards more advanced artificial intelligence for a number of years now. Bots in the software sense have been around for a long time. They’ve been used in everything from online gaming to share dealing. The AI in computer chess games has been easily able to beat most users for many years. We still however have a long way to go towards full sentience and we don’t even fully understand what that is yet.

In the past couple of years we have seen the widespread use of both automation and rudimentary AIs in order to manipulate people, particularly it seems in elections.

Manipulation of people

I hate to use the term fake news, but it has taken its place in world parlance. The United States Senate is currently investigating the use of trolls by Russia in order to manipulate the course of the 2016 presidential election. Through the use of Twitter accounts and Facebook advertising, a concerted attempt was made to influence opinion.

Source: https://www.huffingtonpost.co.uk/entry/russian-trolls-fake-news_us_58dde6bae4b08194e3b8d5c4

Investigative journalist Carole Cadwalladr published a report in May 2017 entitled “The great British Brexit robbery: how our democracy was hijacked”, which is an interesting read. While I should note that the report is the subject of legal complaints by Cambridge Analytica, there are certainly interesting questions to be asked as to why so much of the Leave campaigns’ money was ploughed into online targeting. The ICO is currently investigating how voters’ personal data is being captured and used in political campaigns.

Whatever the case, it seems that humans can easily be influenced and there are many academic papers on this subject.

It also seems that some technology companies have unwittingly been duped into undermining free and fair elections – and have actually profited from doing so – through the manipulation of their targeted advertising aimed at certain segments of populations. This represents a significant threat to democracy and is still ongoing right now where elections are taking place.

Fake news is not a new thing. In one example from the Second World War, the Belgian resistance created “Le Faux Soir” to replace the newspaper “Le Soir”, distributing it slightly before the normal deliveries arrived. Le Faux Soir was full of stories about how badly the war was going for the Germans, and nearly all of the copies were taken before anyone realised. Le Soir’s modern version has been attacked since then – by IS supporters in 2015 – although, like many hacks on media sites, it was more a form of defacement.

Source: https://jonmilitaria44.skyrock.com/2815465118-Le-FAUX-Soir.html

What I’m really interested in at the moment is the next stage of all of this; the manipulation of data and therefore machines that use or rely on it.

Disconnecting real world objects and modification of real world things

The Internet of Things or IoT is something that I take a particular interest in and I have spent a lot of time looking at different aspects of security. We’ve had lots of interesting attacks on IoT and the so-called cyber-physical world on everything from children’s toys to power plants.

Whilst some attacks are entirely logical and we generally threat model for them, the increasing connectedness of the world has meant that “lone wolf” attackers have the potential to amplify their attacks in such a way that they could not before. Some people used to ask me in talks about why people would want to do things like attack remote insulin pumps or pacemakers. I would respond with the question, “why do people put glass in baby food?”

The tampering and contamination of foods has happened on a regular basis. The psychology behind such attacks is surely one of the powerless gaining power; in those cases often causing a product recall, financial harm and embarrassment to a disgruntled employee’s company or, in the biggest case in the UK in 1989, to extort money from the manufacturer.

In the Internet of Things, you often hear about these types of threats in terms of catastrophic impact, for example “stopping all the cars on a motorway”, “all the diabetics in the US”, “anyone with a pacemaker” or “poisoning water supplies”. Threats are made up of a combination of Intent, Capability and Opportunity. Without any one of the three, an attack is unlikely to be successful. One side effect of the IoT is that it is giving a new breed of attackers opportunity.

Connecting my house to the internet gives someone in another country the theoretical ability to watch webcams in my house – if you don’t believe me, have a look at a tool called Shodan (the capability) and search for webcams (your intent).

In January 2018, Munir Mohammed and his pharmacist partner were convicted of planning a terrorist attack. Of some concern, it was also reported that he had been researching ricin, and he had a job making sauces for ready meals for Tesco and Morrisons in a food factory. The media made a strong case that there was a link with his work there, although there was no evidence to that effect. It does raise a frightening prospect however: the potential for an intent of a different sort – not the traditional type of disgruntled employee wishing to taint food and scare parents, but a motivation caused by radicalisation. An attacker of this nature is eminently more dangerous than the disgruntled employee.

The employee intends to harm the employer rather than a child (in most cases), whereas the terrorist actually intends to kill the people who consume the product, through pure hatred (in the case of ISIS). It is therefore entirely conceivable that this could be a high-impact but low-likelihood risk that needs to be considered in the Internet of Things.

There have been incidents pointing towards potentially catastrophic future events. In 2016, the US mobile network operator Verizon reported that a hacktivist group linked to Syria managed to change valves which controlled the levels of chemicals in tap water, releasing dangerous levels of those chemicals into the water supply. The scary part of this is that anyone with an IP connection, in any part of the world, with the intent, capability and opportunity could theoretically execute this on a number of connected water treatment systems. So it becomes a different numbers game – the number of potential attackers rises dramatically (many with different motivations), as does the amount of system exposure.

Many of these industrial systems run on very old equipment which is not segregated and can easily be reached via the internet. There are other examples too – in 2014, the German Federal Office for Information Security published details of a hack which caused serious damage to a blast furnace in a steel mill, after a worker was compromised via a phishing email.

What I’m trying to say here is that people with ill intentions, whether they be nation states or terrorists, could attack – and in some cases are attempting to attack – real world infrastructure, using our connectedness and technology to achieve their aims.

Inventing the real world

Most of the attacks we have talked about are targeting and modification of real world things via the internet – from the inside out. But what about from the outside in? What if the real world wasn’t real, as presented to humans and other systems?

A couple of weeks ago, Oobah Butler posted his article on how he managed to make his fake restaurant “The Shed At Dulwich” the most popular restaurant in London. He had the idea after he had previously been paid £10 for each positive comment he gave real restaurants, transforming their fortunes. Eventually Oobah decided to open the shed for real for one night, hiring in actors and serving up microwave meals to the guests – some of whom tried to book again! The restaurant has been deleted from Tripadvisor now, but I managed to get a cached version of it, as you can see.

The Shed at Dulwich (Google Cache)

Fake reviews are of course extremely common on the internet. A whole industry of “click farms” in low income countries has grown up, both to generate clicks for advertising revenue and to provide 5-star reviews and comments on products, applications and services over the internet. These fake clicks have been provided by software “bots” and through virtual machine emulations of smartphones. As detection gets better, humans are engaged in this activity, including providing text comments on blogs and reviews.

Source: https://www.mirror.co.uk/news/world-news/bizarre-click-farm-10000-phones-10419403

Major retail companies are employing AI “chatbots” to respond to Twitter or Facebook complaints, with customers engaging and becoming enraged by the responses when the bots get it wrong, not realising that they’re not talking to a human being. In the case of Microsoft’s machine learning chatbot Tay, it made racist and Nazi comments in less than a day, based on things that were said to it and the dataset it was using for training.

You can see that the impression of a real world is already being significantly harmed by automation controlled by people with less than fair intentions. But what about truly trying to fake real world data in order to make humans take action, or not take action when they need to?

Faking real world data

I will give you a simple example which I show to some of my IoT security students. There is a popular TV and movie trope about spoofing the information from CCTV cameras.

Tvtropes.org talks about two basic examples – “the polaroid punk” – where a picture is taken and put over the camera while the hero goes about his business (you may remember this from the Vatican scene in “Mission Impossible 3”) and the second being “the splice and dice” – looping camera footage (which you may remember from the film “Speed”).

In my version of this trope, instead of doing this with CCTV imagery, we use the same trick against IoT data going back to a console. The specific example I chose was connected agriculture. There are some very expensive crops, such as almonds or pistachios, that will lose their yield if starved of water – and the starvation could potentially kill the trees. It can take between 5 and 12 years for new trees to produce, so irrigation has to be managed carefully.

As farms increasingly rely on the Internet of Things to provide accurate sensing and measurement, as well as control, it is entirely conceivable that this could be attacked for a variety of reasons. An attack could involve quietly monitoring the data being sent by sensors back to the farm, then stealthily taking over that reporting with fake data. Once the hijack has taken place, the attacker could reduce the irrigation or stop it entirely, as long as the farmer is not aware. With data being sent back that looks real, it could be a long time before the attack is discovered and it may never be detected. It may not even need to take place at the sensor end, it may be enough to hijack the reporting console or IoT hub where the data is aggregated.
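
One standard mitigation – my illustration, not something proposed in the lecture – is for each sensor to authenticate its readings with a per-device secret key and a monotonic counter, so that the console can detect both forged and replayed telemetry. A minimal sketch:

    # Minimal sketch: HMAC-authenticated sensor readings with a monotonic
    # counter, so a console can reject forged or replayed telemetry.
    # Assumes each sensor shares a unique secret key with the console.
    import hmac, hashlib, json

    SENSOR_KEY = b"per-device-secret-provisioned-at-install"

    def sign_reading(counter: int, moisture_pct: float) -> bytes:
        payload = json.dumps({"ctr": counter, "moisture": moisture_pct}).encode()
        tag = hmac.new(SENSOR_KEY, payload, hashlib.sha256).hexdigest().encode()
        return payload + b"." + tag   # hex tag contains no '.', so rpartition works

    def verify_reading(message: bytes, last_counter: int) -> dict:
        payload, _, tag = message.rpartition(b".")
        expected = hmac.new(SENSOR_KEY, payload, hashlib.sha256).hexdigest().encode()
        if not hmac.compare_digest(expected, tag):
            raise ValueError("forged reading")    # tampered data or wrong key
        reading = json.loads(payload)
        if reading["ctr"] <= last_counter:
            raise ValueError("replayed reading")  # old data being re-sent
        return reading

    msg = sign_reading(counter=42, moisture_pct=31.5)
    print(verify_reading(msg, last_counter=41))   # accepted: counter advanced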

With thousands of acres of land to manage, there is an increasing reliance on the screen of reported data in the office, rather than direct inspection on the ground to corroborate that the data is indeed real. Food security is a real issue, and it is well within the means of nation states to execute such attacks against other countries – or for the attack to be targeted in order to manipulate the market for almonds.

Of course this is just one scenario, but the integrity of eAgriculture is an example of something which can be directly linked to a nation’s security.

In the military and intelligence domain, countries around the world are taking a more active and forthright approach to cyber security, rather than just defending against attacks. This is also known as “offensive cyber”. In December 2017, AFP reported that Colonel Robert Ryan of a Hawaii-based US Cybercom combat team had said the cyber troops’ mission is slightly different to the army’s traditional mission. He said: “Not everything is destroy. How can I influence by non-kinetic means? How can I reach up and create confusion and gain control?”. The New York Times had previously reported that Cybercom had been able to imitate or alter Islamic State commanders’ messages so that they directed fighters to places where they could be hit by air strikes.

Just yesterday, the UK’s Chief of the General Staff, Nick Carter, said that the UK needs to keep up with unorthodox, hybrid warfare encompassing cyber attacks.

From the attacks that have been attributed to nation states, many of these have attacked civilian infrastructure, some of them sophisticated, others not so.

A colleague at Oxford University, Ivan Martinovic, has written about the issues with air traffic control data being modified in different types of systems. Many of these systems were created many years ago, and the authors of a joint paper in 2016 on “Perception and Reality in Wireless Air Traffic Communications Security” describe the threat model as “comparatively naïve”. In the same paper, they asked both pilots and air traffic controllers about their positions in hypothetical scenarios. 10.75% of the pilots surveyed said they didn’t know what the effect would be if wrong label indications showed up on their in-flight Traffic Collision Avoidance System (TCAS) screens.

These systems are designed to prevent mid-air collisions. 83.3% of air traffic controllers surveyed said there would be a major loss of situational awareness if information or whole targets were selectively missing from an air traffic control radar screen. It is not difficult to imagine the impact of removing planes from either pilot or air traffic controller screens – or the alternative, which is to flood them with data that doesn’t exist – particularly in a busy area like Heathrow and during bad weather. The panic and loss of awareness in a situation like that could cause very severe events. Other such attacks have been theorised against similar systems for ships at sea.

The future danger that we face from this kind of underlying vulnerability is that the person at the computer or controls in the future will not be a person, it will be another computer.

Disconnection from the matrix

Maybe all it takes is an attack to disconnect us from the matrix entirely.

In December 2017, a number of warnings were aired in the media and at events that the Russians may try to digitally starve the UK by cutting the fibre optic cables that connect us to the rest of the world. This is not a unique threat to us; all the countries of the world rely on the interconnectedness of the internet. Some landlocked countries are really at the mercy of some of their neighbours when it comes to internet connections and it can be used as a political weapon in the same ways that access to water and the building of dams is and has been for a number of years.

The Battle of the Atlantic in 1941 was all about cutting off supplies to Britain through U-boat warfare.

Telecommunications cable-cutting has long been an issue in warfare but for traditionally different reasons. Barbara Tuchman’s book, The Zimmermann Telegram explains that in 1914, on the eve of the Great War, sealed orders were opened at Cable & Wireless, directing them to cut and to remove a portion of submarine cable in the English Channel. This forced the Germans to use insecure wireless for their messaging.

The situation for submarine cables today is different. Our whole economies are dependent on the resilience of cables a few inches wide. The original UK National Cyber Security Strategy in 2011 stated that around 6% of UK GDP was generated by the internet, but this probably does not capture the true nature of how human beings live their lives today, and the downstream reliance on everything from purchasing goods and services to supply chain management, communication and transportation, as well as increasing government services for welfare and taxation; the list is almost endless. Each and every one of these touch points with the internet has an onward real world effect.

The back-ends of many of these systems are cloud computing services, many of which are hosted in other countries, with very little domestic UK infrastructure to support going it alone. Our reliance is based on a globalised world and it is increasing, just as the world powers shift to a more parochial, protectionist approach.

The concept of “digital sovereignty” is something that governments around the world are happy to promote because it makes them feel that they’re more in control. Indeed the Russians themselves had stress-tested their own networks to check their preparedness and resilience. They failed their own tests, setting themselves on a path to hosting everything in their own country and further balkanising the internet. Russia’s foreign policy in this respect is clear. Foreign Minister Sergey Lavrov has repeatedly stated that Russia wishes to see “control” of the internet through the UN. The country has a long-held paranoia about perceived western control of the internet and wishes to redress the balance of power.

Economic data, the 24 hour news cycle and market shocks

 

Source: https://www.telegraph.co.uk/finance/markets/10013768/Bogus-AP-tweet-about-explosion-at-the-White-House-wipes-billions-off-US-markets.html

In April 2013, the Associated Press’s Twitter account caused a brief, massive stock plunge apparently amounting to 90 billion pounds when it tweeted: “Breaking: Two Explosions in the White House and Barack Obama is injured.”.

The situation was quickly dealt with and the market recovered quickly as the White House immediately clarified the situation, but this demonstrates the impact and influence that a trusted news source can have and the damage that can happen when it is hacked. After the event, AP staff admitted that there had been a targeted phishing campaign against them but it is unclear who the attacker was. Other news organisations have also been regularly targeted in similar ways.

Many automated systems rely on Twitter as a source of “sentiment”, which can be fed into machine learning algorithms for stock market forecasting. As such, the sell-off may in many cases have been automated, particularly as the “news polarity” of the tweet would have been negative.
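
As a toy illustration of the mechanism – nothing like a production trading system, and the word lists and threshold here are invented – a lexicon-based polarity score over a headline could gate an automated sell signal:

    # Toy sketch: lexicon-based headline polarity feeding a trading signal.
    NEGATIVE = {"explosion", "explosions", "injured", "attack", "crash"}
    POSITIVE = {"record", "growth", "recovery", "surge"}

    def polarity(headline: str) -> int:
        words = headline.lower().replace(":", " ").split()
        return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

    headline = "Breaking: Two Explosions in the White House and Barack Obama is injured"
    score = polarity(headline)
    print(score)                        # -2: strongly negative polarity
    if score <= -2:
        print("SELL signal emitted")    # why one forged tweet can move markets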

An interesting paper on the subject of news headlines and their relative sentiment was published in March 2015 by the University of Minas Gerais in Brazil and the Qatar Computing Research Institute. The paper serves as a good description of how scores can be attributed to the headlines of breaking news article tweets according to their news polarity.

Source: https://arxiv.org/pdf/1503.07921.pdf

As an aside, they showed that around 65% of Daily Mail articles were negative, but I guess you knew that anyway!


Amplification of events by stock market algorithms has been seen in other cases too. The 2010 “Flash Crash” caused 500 million dollars of losses in around half an hour. Part of the reason for this was the trading of derivatives – ETFs, or Exchange Traded Funds – which form baskets of other assets. The value of the ETFs became disconnected from the underlying value of the assets.

Navinder Singh Sarao, also known as the “Hound of Hounslow”, modified automated software to enable him to make and cancel trades at a high frequency, which would drive prices in a particular direction. The US Justice Department claimed he made 26 million pounds over 5 years. There was another flash crash in August 2015 caused by such high frequency trades. By this time, “circuit breakers” had been installed – these caused the stock market to halt over 1,200 times in one day.

The recent “This is not a drill” missile alert in Hawaii, which was accidentally sent by an employee, was quickly reversed, but not before many people had panicked. That event happened on a Saturday, and it is likely that it would have had some impact on the financial markets had it been during the week. It is not difficult to imagine the havoc that could have been caused if such a system had been compromised through a cyber attack. Indeed, it is likely that this potential has been noted by North Korea.

Source: Twitter

Machine-led attacks

In 2016, at DEFCON in Las Vegas, which is the world’s biggest hacking conference, I was a witness to history. The final of the DARPA Cyber Grand Challenge, the world’s first all-machine hacking tournament, took place using competing teams of supercomputers. Normally this kind of tournament takes place with competing humans attacking and defending systems. The key element is that this represented a point in time at which everything changed. For years we have seen the evolution of attacks with some level of machine assistance, but this was a whole new ball-game. The speed at which situations changed was beyond the rate at which humans could keep up, bearing in mind that these systems had not seen any of these attacks before.

In one case, a vulnerability known as “Heartbleed” was exploited to attack one of the competing supercomputers. It was found, patched and used in an attack against the other machines within a few short minutes. All of this was possible because of the automation, machine learning and artificial intelligence built into each computer.

There is a defence aspect to this too. In the future, we need to consider how we best utilise “the machines” for defensive purposes. The technology available to us now and in the future will enable us to use machine learning at the edge of a network to provide intelligent intrusion prevention and security monitoring systems. Our attacks will largely be machine-led, which in a crude way happens now. One can imagine agents at the edge self-healing in a different way – healing from security attacks and then getting that fix back out to other elements of the network.

Team Shellphish, who finished in third place, even open-sourced their code so that other people could develop it further.

The problem for all of us is the constraints on such AIs. If we delegate the ability to respond offensively to agents that can respond more quickly than we can, what bounds do we put in place around that? Does the adversary play by the same ruleset? What if they break the rules? Research using AI to learn different types of games has often put rules in place to make the AI follow the rules of the game, rather than cheat (for example with DeepMind), but it could be programmed to break as many rules as possible in order to succeed, for example to get information from the underlying operating system to win the game.

Then we have AI versus humans. In some respects humans have the advantage – for example, we often have the ability to disconnect: to remove power or to disengage from a computer system. The AI has physical bounds, but it may also be protected from attack by its designer.

As has been seen in games that have been learned by existing, adaptable artificial intelligence, once the rules are fully understood, AIs can easily beat human players.

In September 2017, Russian President Vladimir Putin stated in a speech that “Artificial intelligence is the future, not only for Russia, but for all humankind,”. “It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”

Elon Musk, the founder of Tesla and among a growing number of people concerned about the rise of AI, and particularly about its military uses, tweeted in response: “Competition for AI superiority at national level most likely cause of WW3 imo”.

A letter sent to the UN in August 2017, signed by the founders of 116 AI and robotics companies across the world, called for a ban on autonomous weapons, which would “permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend”. It is a sad fact that an arms race is already underway, with some autonomous weapons already deployed.

Artificial intelligence will be employed in many different domains, not just military and cyber.

AI in economics

In economics, artificial intelligence offers humanity some positives in that the complexities of particular problems can be addressed by systems, but all of this relies on good and valid input data. It also relies on the political viewpoint that is programmed into such systems. Transparency in such AIs is going to be crucial when it comes to implementation of policy.

The use of computers for economic modelling has a long history. If you’re interested in this topic, the documentary maker Adam Curtis has made some excellent programmes about it, including the series Pandora’s Box. Most of these attempts have failed – the Soviet, paper-based planned system ended up with ludicrous situations such as trains full of goods going all the way to Vladivostok and being turned around and sent back to Moscow. In the United States, science- and data-based solutions were applied to national problems such as the risk of nuclear war, via the RAND Corporation, using Von Neumann’s Game Theory and Wohlstetter’s “Fail-Safe” strategies, but they didn’t always work. Defence Secretary Robert McNamara’s reliance on data coming back from the ground in the Vietnam War undoubtedly led to bad decisions due to the lack of quality in the data. The same is true today. Datasets are often unreliable and inaccurate, but much more so if they have been deliberately altered.

Revolutions will need to take place in the way in which data is gathered before government systems will ever be accurate and reliable; and to do so may also amount to a massive breach of citizen privacy. However, it could be the case that some corporations are already sat on much of the data. In 2008, Google announced in a paper in the journal Nature that they were able to “nowcast” or predict flu outbreaks based on searches for symptoms and medical products. They were able to accurately estimate this two weeks earlier than the US Centers for Disease Control and Prevention. Google Flu Trends ran for a few years, but was shown to fail in a big way in 2013. Corporate systems that operate on such sets of “big data” are often opaque, which can lead to deliberate bias or inaccurate outcomes. In Google’s case, the AI they use has also recently been found to be racist in picture identification.

The UK’s Office for National Statistics operates under a Code of Practice for Official Statistics, with which official statistics must comply (and which is currently under revision). This is also consistent with the UN’s Fundamental Principles of Official Statistics and the European Code of Practice on the subject. It is designed such that the statistics that are produced are honest and can’t be manipulated to suit a particular government’s views or objectives.

In the future, governments will need to ensure that artificial intelligence based systems operating on the statistics to steer policy are compliant with such codes of practice, or that new codes are adopted.

What effect would the manipulation of national statistics have? If governments were making decisions based on falsified or modified data, it could have a profound financial, economic and human impact. At the moment, the safeguards encoded in the Code of Practice rely on the integrity of individuals and their impartiality and objectivity.

As we increasingly rely on computers to make those recommendations, what assurance do we have that they operate in line with expectations and that they have not been compromised? In theory, a long-term adversary could create an entirely different national picture by slightly skewing the input data to national statistics in particular areas, meaning that the statistics bear no relation to the real world.
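
A toy illustration of how little skew is needed (all numbers invented): add a bias too small to notice in any single month to a growth series, and the compounded national picture diverges materially:

    # Toy sketch: a small persistent skew in input data compounds into a
    # materially different national picture. All numbers are invented.
    true_monthly_growth = [0.2, 0.1, 0.3, 0.2, 0.1, 0.2] * 4   # 24 months, in %
    skewed_growth = [g + 0.15 for g in true_monthly_growth]    # +0.15pp bias

    def compound(series, start=100.0):
        level = start
        for g in series:
            level *= 1 + g / 100
        return level

    print(f"true index after 2 years:   {compound(true_monthly_growth):.1f}")
    print(f"skewed index after 2 years: {compound(skewed_growth):.1f}")
    # The bias is invisible month-to-month but shifts the two-year picture
    # by several index points - enough to steer policy decisions.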

When algorithms go wrong

AIs will co-exist with other AIs and this could generate some disturbing results.

During Hurricane Irma in September 2017, automatic pricing algorithms for airline tickets caused prices to rise in line with demand, resulting in exorbitant prices for evacuating residents. Such “yield management systems” are not designed to handle such circumstances and have no concept of ethics or situational awareness. Airlines were forced to backtrack and cap prices manually after a Twitter backlash.

Source: Twitter @LeighDow

In 2011, two third-party Amazon merchants ended up in an automated pricing war over a biology book because of competing algorithms. The price reached over 23.5 million dollars for a book that was worth about 100 dollars. The end results of situations where multiple algorithms interact, each potentially relying on the others’ output data, may be non-deterministic – and may be catastrophic where physical, real world effects take place based on these failures. Humanity could be put in a position where we cannot control the pace of these combined events, and widespread disruption and destruction could take place in the real world.

Source: https://www.michaeleisen.org/blog/?p=358
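
The feedback loop is easy to reproduce. Using the two multipliers reported in the linked post (one seller pricing at 0.9983 times the other’s price, the second at roughly 1.2706 times the first’s), here’s a sketch with invented starting prices:

    # Sketch of the two-bot pricing war described above, using the multipliers
    # reported in the linked Michael Eisen post. Starting prices are invented.
    price_a, price_b = 100.00, 120.00

    for day in range(1, 26):
        price_a = 0.998300 * price_b    # bot A: undercut bot B very slightly
        price_b = 1.270589 * price_a    # bot B: price above A (it resells A's copy)
        print(f"day {day}: A=${price_a:,.2f}  B=${price_b:,.2f}")

    # Each round multiplies both prices by 0.9983 * 1.270589 ~ 1.268, so they
    # grow exponentially - passing $23.5m within a couple of months of daily
    # repricing, until a human finally notices.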

The human rebellion

As machines start to take over complex functions such as driving, we have already started to see humans rebel.

Artist James Bridle demonstrated this by drawing an unbroken circle within a broken one – a car may cross a broken line but not an unbroken one. This created a situation which a self-driving car could theoretically enter but couldn’t get out of – a trap. This, of course, was art, but researchers at a group of US universities worked out how to trick sign-recognition algorithms simply by putting stickers on stop signs.

Source: https://www.autoblog.com/2017/08/04/self-driving-car-sign-hack-stickers/

It is certainly likely that this real-world gaming of artificial intelligence is going to happen and at least in the early days of AI, it will probably be easy to do.

Nudge and Dark Patterns

Governments around the world are particular fans of “nudge techniques”; imagine if they were able to design AI to be adaptable with nudge – or social engineering, as we call it in the security world. What I see with technology today is a lot of bad nudge – manipulation of smartphone or website users into behaviours that are not beneficial to them, but are beneficial to the system provider. Such techniques are widespread and are known as “dark patterns”, a concept originally conceived by user experience expert Harry Brignull. So what is to stop these techniques being inserted into, or used against, artificial intelligence in order to game the user or to nudge the system in a different direction? Some of this used to be known as “opinion control”, and in countries like China the government is very keen to avoid situations like the Arab Spring, which would bring chaos to the country. Maintaining stability and preventing free-thinking and freedom of expression are generally the order of the day, and AI will assist them in their aims.

The danger of the Digital Object Architecture

One solution that has been proposed to the problem of falsified data on the internet is to use something called the Digital Object Architecture. In theory this technology would allow for the identification of every single object and allow for full traceability and accountability. It is difficult to imagine, but a future object-based internet could make the existing internet look minuscule.

It is a technology being promoted by a specialised UN agency called the ITU, which is meeting at this very moment to talk about it. The main proponents are authoritarian regimes such as China, Russia and Saudi Arabia. This is because, as I mentioned earlier, it brings “control” to the internet. This is clearly the polar opposite of freedom. It is an extremely pervasive technology in theory, which is also largely untested.

Shortly before Robert Mugabe’s detention in Zimbabwe, there had been issues with food shortages and concerns about the state of the currency which had triggered panic buying and a currency crash. These issues were genuine, they were not false, but it certainly wasn’t convenient for Zimbabwe or Mugabe. He reportedly announced at a bi-national meeting with South Africa that cyber technology had been abused to “undermine our economies”. The South African communications minister clarified that Mugabe intended to use the Digital Object Architecture. The Minister said “It helps us identify the bad guys. The Zimbabweans are interested in using this.”.

So if we look at the countries that are fans of this technology, it seems that it is about controlling messages and about controlling truth. The Digital Object Architecture is being heavily promoted by some countries that appear to fear their own people, as a way to maintain order and control and to repress free-thinking and freedom of expression. This is quite a depressing situation – the technology itself is not necessarily the problem; it is the way that it is used, who controls it and the fact that it affords no ability for privacy or anonymity for citizens.

So we must seek other solutions that maintain the confidentiality, integrity and authenticity of data in order to support widespread artificial intelligence use in economies – technology that cannot be abused to repress human individuality and freedom. I have to mention blockchain here. I have read a number of articles that attempt to paint blockchains as the perfect solution to some of the problems I’ve outlined here, in relation to creating some kind of federated trust across sensors and IoT systems. The problems are very real, but I feel that using blockchains to solve them creates a number of very different problems, including scalability, so they are by no means a panacea.

Conclusion

If we can’t trust governments around the world not to abuse their own citizens, and if those same governments are pushing ahead with artificial intelligence to retain and gain greater power, how can we possibly keep AI in check?

Are we inevitably on the path to the destruction of humanity? Even if governments choose to regulate and make international agreements over the use of weaponry, there will always be states that choose to ignore that and make their artificial intelligence break the rules – it would be a battle between a completely ruthless AI and one with a metaphorical hand tied behind its back.

In an age where attribution of cyber attacks is extremely difficult, how can we develop systems that prevent the gaming of automated systems through “false flag” techniques – attackers masquerading as trusted friends while meddling in and influencing decision making? Attribution may become even more difficult as the origins of systemic failure in economies become harder to detect.

How do we manage the “balance of terror” (to use a nuclear age term) between nation states? What will happen when we cannot rely on anything in the world being real?

These are the problems that we need to address now as a collective global society, to retain confidence and trust in data and also the sensors and mechanisms that gather such data. There could be many possible algorithmic straws to break the camel’s back.

Artificial Intelligence is here to stay, but ethical and humane cyber security and robust engineering discipline are part of keeping it sane. Maybe, just maybe, artificial intelligence will be used for the good of humanity and for solving the world’s problems, rather than for the last war we’ll ever have.

Thank you.

The Future of Cyber Security and Cyber Crime

David Wood kindly invited me to speak at the London Futurists cyber security and cyber crime event along with Craig Heath and Chris Monteiro. I decided to talk about some more future-looking topics than I normally do, which made a nice change. The talks were videoed and are linked below (my talk starts at about 39:29). I should add that the Treaty of Westphalia was 1648, not 1642!

Here are my slides:

Cyber Security in the Mobile World: MWC Lunchtime Seminar Series

I’ve been running a cyber session on behalf of UKTI and BIS for the past few years. The event has been an increasing draw as a hub for security and privacy discussion at Mobile World Congress. We have an absolutely stellar line-up this year, across three days of lunchtime sessions and I’m really looking forward to MCing! If you’re around at MWC, come along to the UKTI stand in Hall 7 (7C40) at the times below.

#MWC15

Cyber Security in the Mobile World: MWC Lunchtime Seminar Series

In the fourth year of our MWC Cyber Security in the Mobile World event, the topic remains at the top of the headlines. 2014 saw a large number of attacks which were both news-grabbing and serious. Are things getting better or are they going to get worse?

Securing the Internet of Things
Mon 2nd March
12:00 to 12:40
Location: Hall 7, UKTI stand 7C40

The Internet of Things (IoT) has exploded in the last year. With many machine-to-machine (M2M) and IoT devices being purchased by consumers and implemented within technology from cars to chemical plants, are we adequately prepared to handle the increased cyber risk?

Introduction:

• Richard Parris, Intercede: Introduction to the Cyber Growth Partnership

Keynote speakers:

• Richard Parris, Intercede: The Role of SMEs in Securing IoT
• Marc Canel, Vice President of Security, ARM: Hardware security in IoT
• Svetlana Grant, GSMA: End to End IoT Security

Mobile Cyber Security for Businesses
Tues 3rd March
12:45 to 13:25
Location: Hall 7, UKTI stand 7C40

The Prime Minister recently said that 8 of 10 large businesses in Britain have had some sort of cyber attack against them. With a big increase in the number of mobile devices, how can businesses defend themselves, their data and their employees? What cyber standards are being developed and what enterprise security mechanisms are being put into the devices themselves?

4 person keynote panel, moderated by David Rogers:

• ETSI, Adrian Scrase, CTO
• Samsung, KNOX, Rick Segal, VP KNOX Group
• Good Technologies, Phil Barnett, Head of EMEA
• Adaptive Mobile, Ciaran Bradley

Innovation in Cyber Security: Secure by Default
Wed 4th March
11:40 to 12:20
Location: Hall 7, UKTI stand 7C40

Our speakers will get straight to the point by giving 3 minute lightning talks on a variety of innovations in cyber security.

1. Symantec, IoT Security, Brian Witten
2. W3C, Web Cryptography, Dominique Hazaël-Massieux
3. NCC Group, Innovative Security Assessment Techniques, Andy Davis
4. Plextek, Automotive Security, Paul Martin, CTO
5. SQR Systems, End-to-End Security for Mobile Networks, Nithin Thomas, CEO
6. CSIT, Queens University, Belfast, Philip Mills & David Crozier
7. Trustonic, Your Place or Mine? Trust in Mobile Devices, Jon Geater, CTO
8. NquiringMinds, Picosec: Secure Internet of Things, Nick Allott, CEO
9. Blackphone, Blackphone update, Phil Zimmermann
10. GSMA, The Future of Mobile Privacy, Pat Walshe

Security and Privacy Events at Mobile World Congress 2015

We’ve listed out some interesting Security and Privacy events from 2015’s Mobile World Congress in Barcelona. This year sees a general shift in topic focus to Software Defined Networking (SDN), Network Function Virtualisation (NFV) and the Internet of Things (IoT). Security still isn’t a ‘core’ part of MWC – it doesn’t have a dedicated zone on-site, for example – but as it pervades most topics, it gets mentioned at least once in every session!

Sunday 1st March 
1) Copper Horse Mobile Security Dinner
21:00 – Secret Location in Barcelona

Monday 2nd March
1) UKTI Cyber Security in the Mobile World lunchtime series: Securing the Internet of Things
12:00 – 12:40, Hall 7, Stand 7C40

14:00 – 15:30 Hall 4, Auditorium 3

3) Security and IdM on WebRTC
15:00 – 14:00 Spanish Pavilion (Congress Square)

4) Ensuring User-Centred Privacy in a Connected World
16:00 – 17:30 Hall 4, Auditorium 3

Tuesday 3rd March 
1) GSMA Seminar Series at Mobile World Congress: Mobile Connect – Restoring trust in online services by implementing identity solutions that offer convenience and privacy for consumers and enterprises 
09:00 – 12:00 Theatre 1 CC1.1

2) Mobile Security Forum presented by AVG 
11:45 – 14:00 – Hall 8.0 – Theatre District – Theatre D

3) UKTI Cyber Security in the Mobile World lunchtime series: Mobile Cyber Security for Businesses
12:45 – 13:25, Hall 7, Stand 7C40

4) Mobile, Mobility and Cyber Security
17:00 – 21:00 Happy Rock Bar and Grill, 373-385 Gran Via de les Corts Catalanes 08015

5) Wireless and Internet Security B2B Matchmaking Event 
18:30 – 22:00 CTTI Carrer Salvador Espriu, 45-51 08908 L’Hospitalet de Llobregat

Wednesday 4th March 
1) UKTI Cyber Security in the Mobile World lunchtime series: Innovation in Cyber Security: Secure by Default 
11:40 to 12:20 Hall 7, Stand 7C40

2) The Explosion of Imaging 
14:00 – 15:00 Hall 4, Auditorium 5

3) The New Security Challenges: Perspectives from Service Providers
16:30 – 17:30 Hall 4, Auditorium 4

Thursday 5th March 
1) Everything is Connected: Enabling IoT
11:30 – 13:00 Hall 4, Auditorium 2

If you’d like a meet up with the Copper Horse team to talk mobile security, IoT or drones, please drop us an email or tweet us @copperhorseuk. We’ll also be demonstrating our progress on securing IoT in the Picosec project on the NQuiringMinds stand in Hall 7: 7C70.


Feel free to leave a comment with information on any presentations or events we may have missed and we’ll look to add them.

Note: update 13/02/15 to correct Monday time order and add Quobis event.

Security and Privacy Events at Mobile World Congress 2014

Here’s a list of the main security and privacy related events in Barcelona (some of which I’ll be speaking at). You’ll need a specific pass to get into some of them; where that applies, it is shown next to the event.

Sunday 23rd February

1) Copper Horse Mobile Security Dinner
21:00 – Secret Location in Barcelona

Monday 24th February

1) Mobile Security Forum presented by AVG
12:15-14:30 – Hall 8.0 – Theatre District – Theatre F
2) Mobile Security Forum presented by FingerQ
14:30-16:45 – Hall 8.0 – Theatre District – Theatre F

Tuesday 25th February

1) Secure all the things! – the changing future of mobile identity, web, policy and governance
10:00-12:00 (09:15 for networking) UKTI / ICT KTN seminar – in the main conference area, CC1 Room 1.2
2) GSMA Personal Data Seminar (with the FIDO Alliance)
11:00-14:30 Room CC 1.1
3) Global Mobile Awards 2014 – Category 6d – Best Mobile Identity, Safeguard & Security Products/Solutions [Gold passes only]
14:30-16:30 – Hall 4, Auditorium 1

Wednesday 26th February

1) Cyber Security Workshop: The Role of the Mobile Network Operator in Cyber Security [Ministerial Programme Access only]
15:30-16:30 – Ministerial Programme, Hall 4, Auditorium B

Thursday 27th February

1) Privacy – Mobile and Privacy – Transparency, choice and control: building trust in mobile
11:00-13:00 – GSMA Seminar Theatre 2 – CC1.1

Of course plenty of the other presentations have security aspects – all the Connected Home, mHealth and Internet of Things talks, to mention but a few! Also, if you’d like to meet me, you’ll see me at a few of these events, or you can email to make an appointment out there.

Please feel free to let me know in the comments if I’ve missed any.

Shiny Expensive Things: The Global Problem of Mobile Phone Theft

I was kindly invited down to Bournemouth University the other day by Shamal Faily, to give a talk as part of their Cyber Seminar series. I decided to talk about a hot topic I’m very familiar with: mobile phone theft. The slides are updated from an earlier talk, but cover some of the political involvement in 2012/13 and some information on recent industry action and what should happen next.

9th ETSI Security Workshop

January 2014 brings the 9th ETSI Security Workshop, in Sophia Antipolis in the south of France. I’ve always found the event really interesting and have spoken there a couple of times myself.

There’s a call for presentations that’s still open until the 11th of October, so if you’re interested in security and mobile, why not put in an abstract? The topics are really broad-ranging (which is part of the appeal). This year’s include:

1. Machine-to-Machine Security
2. Critical infrastructure protection
3. Cybersecurity
4. Analysis of real world security weaknesses
5. Next Generation Networks security
6. Mobile Telecommunications systems
7. RFID and NFC Security issues
8. Privacy and Identity Management
9. Cryptography and Security algorithms
10. Security in the Cloud
11. Smart city security (energy, transport, privacy, …)
12. Trusted Security (services and platforms)
13. Security Indicators/Metrics
14. Academic research and Innovation
15. Device and smart phones security
16. Malware detection and forensics

More details here: http://www.etsi.org/news-events/events/681-2014-securityws


An interview with a tech journalist

I was slightly misquoted in an article yesterday on mobile malware, so I thought I’d re-post my exact responses to the journalist, as I spent a fair amount of my evening responding to the request instead of relaxing! With Mobile World Congress coming up, some of the topics covered are relevant to things that will be discussed in Barcelona.

Good tech journalism?

My comments were in response to a BlueCoat Systems report on mobile malware that came out on the 11th of February. I didn’t get the chance to see the report until the very end, so my last comment is based on a skim read of it. The questions you see below are from the journalist to me.

Here was my response (me in blue):

Here are my responses – let me know if you need anything else. I didn’t read the report yet.
They are marked [DAVID]:

David –

I’m doing a story on a recent report from Blue Coat about mobile malware. No link yet.

My questions, if you have a few minutes:

It predicts that delivery of mobile malware with malnets will be a growing problem this year. Agree? Why or why not?

[DAVID] It’s possible, but the question is really ‘where’. Most mobile malware has taken root in places like China and Russia, where there has traditionally been a lack of official app stores (which has only recently changed). It’s like the wild west out there, with a complete lack of controls on the ingestion side to check that developers aren’t peddling malware, and on the consumer side because the devices are outside the ‘safe’ app store world we see in the West.

So we almost have two worlds at the moment: the first is the western world, mainly Europe and the US, where generally no-one gets infected (a tiny, tiny percentage of maliciousness gets through the official app store checks or gets intentionally side-loaded by the user, usually when they’re trying to get pirated software!). The second is the vast majority of the rest of the world, usually poorer countries where the controls and regulations on piracy and malware are lax. It is like putting a street market next to a high-end city shopping mall. The mobile industry isn’t static and will continue to evolve in terms of security and threat management, both on the network and device side, when it comes to the potential for botnets (at least in the more controlled environment of the West).

It says mobile devices are still relatively secure at the OS level, but that users are “set up to fail” because it is more difficult to avoid phishing –  URL and links are shortened, passwords are visible to an onlooker when you enter them – apps are not well vetted and mobile versions of websites are often hosted by third parties, making it difficult to tell which are legit. Do you agree? Why or why not? And if you do agree, is there anything developers ought to change?

[DAVID] Mobile OSs and their underlying hardware are getting very advanced in terms of security, which is great news. The problem is that there hasn’t been enough invested into educating developers about how to develop secure software, and in most cases the tools and libraries they use are not designed to help them make the right security decisions, resulting in very basic flaws which have serious security consequences (for example, poor implementation of SSL). For some, it is just too difficult or too much effort to bother putting security in from the start. We need to break down that kind of mentality, and I think we really need to improve considerably in terms of ‘cyber’ security skills for mobile developers. In terms of usability and the lack of screen real-estate, then yes, developers have a role to play in helping the user make the decision they want to – some QR readers now present the ‘real’ URI behind a shortened one so that the user can decide whether that was what they were expecting.
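To illustrate that shortened-URL point, here is a minimal sketch of how a reader application could peek at the real destination before the user follows a link. This is illustrative code making its own assumptions, not the implementation of any particular QR reader:

```python
# Reveal the first redirect target of a shortened URL without following it,
# so the user can see where a link really goes before opening it.
import urllib.request
import urllib.error

class NoRedirect(urllib.request.HTTPRedirectHandler):
    # Refuse to follow redirects; urllib will raise HTTPError instead.
    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None

def peek_destination(short_url: str) -> str | None:
    """Return the redirect target of a shortened URL, or None."""
    opener = urllib.request.build_opener(NoRedirect)
    try:
        opener.open(short_url, timeout=5)
    except urllib.error.HTTPError as e:
        # 3xx responses surface here because we refused to follow them.
        if 300 <= e.code < 400:
            return e.headers.get("Location")
    return None

# Usage: show this to the user and let them decide whether to proceed.
# print(peek_destination("https://bit.ly/example"))
```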

Users can be very impulsive when it comes to mobile, so you have to try and save them from themselves, but balance this with not resorting to bombarding them with prompts. Human behaviour dictates that we’ll be susceptible to social engineering and will get over any hurdle presented to us if the prize is worth enough (something which is called the ‘dancing pigs’ problem). This is a real problem for both the OS and application developers. One thing that hasn’t really been deployed yet in the mobile world is trusted 3rd party management of policy. Users could choose a policy provider they trust to take the security management problem away from them. Obviously it can’t solve everything – the user has to take responsibility for their own actions at some point, but it will go a long way towards resolving current issues with permissions and policy on mobile platforms. The key to it all is that the user themselves has to be ultimately in charge of who they choose as a policy provider, not the operator, OS vendor or manufacturer.
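As a rough sketch of that policy-provider idea (every name here is hypothetical – no such scheme had been deployed at the time):

```python
# A toy model of trusted third-party policy management: the user picks one
# provider they trust, and the platform defers permission decisions to it
# instead of prompting the user every time. All names are hypothetical.
from dataclasses import dataclass

@dataclass
class PermissionRequest:
    app_id: str
    permission: str  # e.g. "location", "contacts", "sms"

class PolicyProvider:
    """A user-chosen third party that answers permission questions."""
    def __init__(self, name: str, rules: dict[str, bool]):
        self.name = name
        self.rules = rules  # permission -> allow (True) / deny (False)

    def decide(self, request: PermissionRequest) -> bool:
        # The provider takes the decision burden off the user, but
        # unknown permissions still fail safe (deny).
        return self.rules.get(request.permission, False)

# Crucially, the user chooses the provider, not the operator or OS vendor.
cautious = PolicyProvider("CautiousCo", {"location": False, "contacts": True})
print(cautious.decide(PermissionRequest("com.example.app", "location")))  # False
```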

There’ll always be attackers – the arbiters of trust in the mobile world have great responsibility to the millions of users out there and they themselves will become targets. I like the way that Google Bouncer (the automated security testing tool of Android apps being submitted by developers) has now become the target of attacks. To me, Google have forced attackers back away from the ‘Keep’ to the castle walls which can only be a good thing.

[I’ve lumped all these questions together]

The report says user behavior is the major weakness. Hasn’t this been the case all along?

Is there any truly effective way to change user behavior?

Is it possible for security technology to trump user weaknesses? If so, how?

[DAVID] Yes, user behaviour is a weakness, but usability and security don’t usually sit well together. Developers should not just consider the technical security of an application, but should make security as friendly and seamless as possible from the user’s perspective. Resorting to prompting is usually the lazy way out, and it pushes the burden of responsibility onto a user who probably doesn’t have a clue what you just asked them. I think OS-level and web APIs could benefit from different design patterns – how about building in more intelligence to the responses? For example, in a geolocation API a developer could ‘negotiate’ access by understanding what the user is comfortable with, all in the background. This avoids binary behaviour – for example, apps that fall over if you don’t enable geolocation, and users that never install apps that have geolocation. Neither situation is very good for helping the apps world advance and grow! However, if the user had been able to say that they were happy to share their location to city level, then the API could negotiate a developer’s request for location down to 1 metre by offering up city level instead. It would make for a much smoother world and would apply very easily across many different APIs.
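Here is a minimal sketch of that negotiation idea, with hypothetical precision tiers (a real API would define its own):

```python
# Negotiate location precision instead of a binary allow/deny: the app
# always receives the finest precision the user's policy permits, so it
# need not fall over when it cannot get exact coordinates.
PRECISION_TIERS = ["country", "city", "street", "exact"]  # coarse -> fine

def negotiate_location(app_wants: str, user_allows: str) -> str:
    """Return the finest precision permitted by the user's policy."""
    wanted = PRECISION_TIERS.index(app_wants)
    allowed = PRECISION_TIERS.index(user_allows)
    # Never hand over finer data than the user is comfortable with.
    return PRECISION_TIERS[min(wanted, allowed)]

# An app asking for exact (1 metre) location against a 'city' policy
# transparently receives city-level data instead of an error.
print(negotiate_location("exact", "city"))  # -> "city"
```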

If a user makes a critically bad decision, for example going to an infected website, I think Google have taken a strong lead in this respect by clearly showing the user that really bad things are happening. Perhaps this could extend to other things on mobile, but we still need to get the basics of security right first, from a technology and manufacturer’s perspective. I think some manufacturers have a long way to go to improve their security in this respect.

It says users will go outside VPNs if the “user experience” is not good within it. Is it realistic to expect enterprises to make their user experience better?

[DAVID] I think there are some interesting things coming along in terms of more ‘usable’ VPN technology, but usually the reason a VPN doesn’t work is a technical one that an ordinary user isn’t going to understand. They just want to get their job done and may take risky decisions because there are generally no visible security consequences. Most people in big companies have to deal with inflexible IT departments with inflexible policies. The intrusion into people’s own lives with the introduction of BYOD has muddled things further. I can certainly see more societal issues than security ones for the overall user experience – for example, it might be very tempting for companies to start intruding on their users if there is a big industrial dispute involving unions. I don’t think these questions have properly hit companies yet, but mobile companies like RIM are looking at proper separation of work and personal life from a technical point of view; after that it is really down to the paperwork – the rules of use and the enforcement of those.

The report said Android is more vulnerable to attacks because of unregulated apps and the diversity of Android-based devices. What, if anything, can/should be done about that?

[DAVID] Well, to a certain extent yes, but this has been vastly overplayed by anti-virus vendors desperate to get into mobile. The vast majority of maliciousness has been caused outside of the trusted app store world that we see in the US and the UK. I wouldn’t have designed the app signing process in the same way as the Android guys did, but then identification of individuals can be difficult anyway – I know lots of registration systems that can be broken just by photocopies of ‘official’ documents. Google wanted a more open ecosystem and you have to take the good with the bad. In terms of the diversity or fragmentation in Android, this could become an issue as device lifecycles get longer. The mobile industry is looking at the software update problem, and rightly so. For the network operators it is going to be a question of how to identify and manage out those threats on the network side if it comes to it. I don’t think software upgrade issues are confined to Android, but we don’t want any part of the industry to lag behind, because in the future there is nothing to say that huge distributed cross-platform (automotive, mobile, home) threats couldn’t exist, so we should pay attention to resilience and good cyber house-keeping now, before it is too late.

Sorry to be on a deadline crunch – 5:30 p.m. EST today.

And my final comment to the journalist after I’d seen the report:

So just had a quick look through, only one final comment:

One thing that we all should remember is that the bad guys are not the mobile industry – it is the people who perpetrate malware, spam and scams. At the moment, cyber criminals run rings around law enforcement by operating across lots of countries in the world, relying on fragmented judicial systems and the lack of international agreements to take action. We should build the systems and laws through which we can arrest and prosecute criminals at a global level. 

I hope readers find it useful to see what I really wanted to say – I don’t claim to be right, but these are my opinions on the subjects in question. Readers should also understand how much effort sometimes gets put into helping journalists, with varying results 😦. If you want to read the original article and compare my responses with the benefit of context, you can find it at CSO online.