A Code of Practice for Security in Consumer IoT Products and Services


Today is a good day. The UK government has launched its Secure by Design report, and it marks a major step forward for Internet of Things (IoT) security in the UK.

Embedded within the report is a draft “Code of Practice for Security in Consumer IoT Products and Associated Services”, which I authored in collaboration with DCMS and with input and feedback from various parties including the ICO and the NCSC.

I have been a passionate advocate of strong product security since I worked at Panasonic and established the product security function in their mobile phone division, through to the mobile recommendations body OMTP where, as an industry, we established the basis of hardware security and trust for future devices. We’re certainly winning in the mobile space – devices are significantly harder to breach, despite being under constant attack. This isn’t because of any single thing; it is multiple aspects of security built on the experiences of previous platforms and products. As technologies have matured, we’ve been able to implement things like software updates more easily and to establish what good looks like. Other aspects, such as learning how to interact with security researchers or the best architectures for separating computing processes, have also been learned over time.

Carrying over product security fundamentals into IoT

This isn’t the case however for IoT products and services. It feels in some cases like we’re stepping back 20 years. Frustratingly for those of us who’ve been through the painful years, the solutions already exist in the mobile device world for many of the problems seen in modern, hacked IoT devices. They just haven’t been implemented in IoT. This also applies to the surrounding ecosystem of applications and services for IoT. Time and again, we’re seeing developer mistakes such as a lack of certificate validation in mobile applications for IoT, which are entirely avoidable.

There is nothing truly ground-breaking within the Code of Practice. Many of the measures are not difficult to implement, but what we’re saying is that enough is enough. It is time to put houses in order, because we simply can’t tolerate bad practice any more. For too long, vendors have been shipping products which are fundamentally insecure because no attention has been paid to security design. We have a choice. We can either have a lowest-common-denominator approach to security, or we can say “this is the bar and you must at least have these basics in place”. In 2018 it simply isn’t acceptable to have things like default passwords and open ports. This is how stuff like Mirai happens. The guidance addresses those issues, and had it been in place, the huge impact of Mirai would simply not have occurred. Now is the time to act, before the situation gets worse and people get physically hurt. The prioritisation of the guidance was something we discussed at length. The top three measures – eliminating the practice of default passwords, providing security researchers with a way to disclose vulnerabilities, and keeping software updated – were chosen because addressing these elements as a priority will have a huge beneficial impact on overall cyber security, creating a much more secure environment for consumers.

We’re not alone in saying this. Multiple governments and organisations around the world are concerned about IoT security and are publishing security recommendations to help. This includes the US’s NIST, Europe’s ENISA and organisations such as the GSMA and the IoT Security Foundation. I maintain a living list of IoT security guidance from around the world on this blog.

So, in order to make things more secure and ultimately safer (because a lot of IoT is already potentially life-impacting), it’s time to step things up. Many parts of the IoT supply chain are already doing a huge amount on security; those organisations are likely already meeting the guidance in the Code of Practice. But it is evident that a large number of products are failing even on the basics.

Insecurity Canaries

Measuring security is always difficult, which is why we decided to create an outcomes-based approach. What we want is for retailers and other parts of the supply chain to be able to easily identify what bad looks like. Some of the basics – default passwords that haven’t been eliminated, or no way for security researchers to report vulnerabilities – can be seen as insecurity canaries: if the basics aren’t in place, what about the more complex elements that are more difficult to see or to inspect?

Another reason to focus on outcomes was that we were very keen to avoid stifling creativity when it came to security solutions, so we’ve avoided being prescriptive other than to describe best practice approaches or where bad practices need to be eliminated.

The Future

I am looking forward to developing the work further based on the feedback from the informal consultation on the Code of Practice. I support the various standards and recommendations mapping exercises going on which will fundamentally make compliance a lot easier for companies around the world. I am proud to have worked with such a forward-thinking team on this project and look forward to contributing further in the future.

Additional Resources

I’ve also written about how the Code of Practice would have prevented major attacks on IoT:

Need to know where to go to find out about IoT security recommendations and standards?

Here’s a couple more things I’ve written on the subject of IoT security:

Mobile World Congress 2018

View of the new Fira when flying in to Barcelona

This week it is Mobile World Congress, the biggest event in the mobile industry calendar. If you’re interested in meeting for a chat or just hearing about mobile and IoT security & privacy, I’ll be at the following places!

Sunday 25th February
6th GSMA IoT Summit
13:00-17:30
NH Collection Barcelona Tower Hotel

Copper Horse annual security dinner at a secret location in Barcelona
21:00-late (tweet me or message if you want to come along)

Monday 26th February
4YFN – “Hidden Threats and Opportunities to my Business”
Panelist: “Spotlight – How Data and Cyber Security can make or break a new business?”
16:15-17:15
4YFN (at the old Fira), Fira Barcelona Montjuïc, Av. Reina Maria Cristina

Tuesday 27th February
IoT Tuesday, hosted by Cellusys, supported by JT Group and the IoT Security Foundation
17:00-late Cellusys event – I’ll be giving an opening talk on behalf of the IoT Security Foundation, which will be: “The Ticking Clock”: why security in IoT is critical to how you run your business.
Tweet me if you want to attend.

Wednesday 28th February
16:30-17:30
Why Should we Trust your Digital Security?
I’ll be having a fireside chat in this session with Jean Gonie, VEON: Data, Consumer Protection and the GDPR
Auditorium 3, Hall 4 (on-site at MWC)

I’ll be at a few other events and will generally be around and about the MWC main site all week so please feel free to get in contact. Speaking of Barcelona, we’re holding our next training, “Foundations of IoT Security” in May in the city. More details and sign-up can be found on the IoTSF website.


The Real World is Not Real: Manipulation of Data and People in the Future Internet

On the 24th of January 2018 I gave my professorial lecture at York St John University in the UK. I chose the subject of fakeness: the manipulation of data by attackers, and its potential impact on people and machines if that data is trusted. The transcript is below. I’ve added in additional links so you can see sources and informative links for further reading. I hope you enjoy the read; feel free to leave comments or tweet me.

The Real World is Not Real: Manipulation of Data and People in the Future Internet
David Rogers

Introduction

Thank you all for coming today to my inaugural lecture. I am honoured and humbled both to be in this position and that you are here today to listen. I’m also particularly grateful to Dr. Justin McKeown for engaging me in the first place with such a great university and for our Vice Chancellor Karen Stanton’s support.

We have been on a steady path towards more advanced artificial intelligence for a number of years now. Bots in the software sense have been around for a long time. They’ve been used in everything from online gaming to share dealing. The AI in computer chess games has been easily able to beat most users for many years. We still however have a long way to go towards full sentience and we don’t even fully understand what that is yet.

In the past couple of years we have seen the widespread use of both automation and rudimentary AIs in order to manipulate people, particularly it seems in elections.

Manipulation of people

I hate to use the term fake news, but it has taken its place in world parlance. The United States Senate is currently investigating the use of trolls by Russia in order to manipulate the course of the 2016 presidential election. Through the use of Twitter accounts and Facebook advertising, a concerted attempt was made to influence opinion.

Source: https://www.huffingtonpost.co.uk/entry/russian-trolls-fake-news_us_58dde6bae4b08194e3b8d5c4

Investigative journalist Carole Cadwalladr published a report in May 2017 entitled “The great British Brexit robbery: how our democracy was hijacked”, which is an interesting read. While I should note that the report is the subject of legal complaints by Cambridge Analytica, there are certainly interesting questions to be asked as to why so much of the Leave campaigns’ money was ploughed into online targeting. The ICO is currently investigating how voters’ personal data is captured and used in political campaigns.

Whatever the case, it seems that humans can easily be influenced and there are many academic papers on this subject.

It also seems that some technology companies have been unwittingly duped into undermining free and fair elections – and have actually profited from it – through the manipulation of their targeted advertising at certain segments of populations. This represents a significant threat to democracy and is still ongoing right now where elections are taking place.

Fake news is not a new thing. In one example from the Second World War, the Belgian resistance created “Le Faux Soir” to replace the newspaper “Le Soir”, distributing it slightly before the normal deliveries arrived. The faux Soir carried lots of stories about how badly the war was going for the Germans, and nearly all of the copies were taken before anyone realised. Le Soir’s modern version has been attacked since then, by IS supporters in 2015, although like many hacks on media sites it was more a form of defacement.

Source: https://jonmilitaria44.skyrock.com/2815465118-Le-FAUX-Soir.html

What I’m really interested in at the moment is the next stage of all of this; the manipulation of data and therefore machines that use or rely on it.

Disconnecting real world objects and modification of real world things

The Internet of Things or IoT is something that I take a particular interest in and I have spent a lot of time looking at different aspects of security. We’ve had lots of interesting attacks on IoT and the so-called cyber-physical world on everything from children’s toys to power plants.

Whilst some attacks are entirely logical and we generally threat model for them, the increasing connectedness of the world has meant that “lone wolf” attackers have the potential to amplify their attacks in such a way that they could not before. Some people used to ask me in talks about why people would want to do things like attack remote insulin pumps or pacemakers. I would respond with the question, “why do people put glass in baby food?”

The tampering and contamination of food has happened on a regular basis. The psychology behind such attacks is surely one of the powerless gaining power; in those cases often causing a product recall, financial harm and embarrassment to a disgruntled employee’s company or, in the biggest case in the UK in 1989, extorting money from the manufacturer.

In the Internet of Things, you often hear about these types of threats in terms of catastrophic impact, for example “stopping all the cars on a motorway”, “all the diabetics in the US”, “anyone with a pacemaker” or “poisoning water supplies”. Threats are made up of a combination of Intent, Capability and Opportunity. Without any one of the three, an attack is unlikely to be successful. One side effect of the IoT is that it is giving a new breed of attackers opportunity.

Connecting my house to the internet gives someone in another country the theoretical ability to watch webcams in my house – if you don’t believe me, have a look at a tool called Shodan (the capability) and search for webcams (your intent).

In January 2018, Munir Mohammed and his pharmacist partner were convicted of planning a terrorist attack. Of some concern, it was also reported that he had been researching ricin, and that he had a job in a food factory making sauces for ready meals for Tesco and Morrisons. The media made a strong case that there was a link to his work there, although there was no evidence to that effect. It does raise a frightening prospect, however: the potential for an intent of a different sort – not the traditional type of disgruntled employee wishing to taint food and scare parents, but a motivation caused by radicalisation. An attacker of this nature is eminently more dangerous than the disgruntled employee.

The disgruntled employee intends to harm the employer rather than (in most cases) a child, whereas the terrorist actually intends to kill the people who consume the product, through pure hatred (in the case of ISIS). It is therefore entirely conceivable that this is a high-impact but low-likelihood risk that needs to be considered in the Internet of Things.

There have been incidents pointing towards potentially catastrophic future events. In 2016, the US mobile network operator Verizon reported that a hacktivist group linked to Syria had managed to change the valves which controlled the levels of chemicals in tap water, releasing dangerous levels of those chemicals into the water supply. The scary part of this is that anyone with an IP connection, in any part of the world, with the intent, capability and opportunity could theoretically execute this on a number of connected water treatment systems. So it becomes a different numbers game – the number of potential attackers rises dramatically (many with different motivations), as does the amount of system exposure.

Many of these industrial systems run on very old equipment which is not segregated and can easily be reached via the internet. There are other examples too – in 2014, the German Federal Office for Information Security published details of a hack which caused serious damage to a blast furnace in a steel mill, after a worker was compromised via a phishing email.

What I’m trying to say here is that people with ill intentions – whether nation states or terrorists – could attack, and in some cases are attempting to attack, real-world infrastructure, using our connectedness and technology to achieve their aims.

Inventing the real world

Most of the attacks we have talked about involve targeting and modifying real-world things via the internet – from the inside out. But what about from the outside in? What if the real world, as presented to humans and other systems, wasn’t real?

A couple of weeks ago, Oobah Butler posted his article on how he managed to make his fake restaurant “The Shed At Dulwich” the most popular restaurant in London. He had the idea after he had previously been paid £10 for each positive comment he gave real restaurants, transforming their fortunes. Eventually Oobah decided to open the shed for real for one night, hiring in actors and serving up microwave meals to the guests – some of whom tried to book again! The restaurant has been deleted from TripAdvisor now, but I managed to get a cached version of it, as you can see.

The Shed at Dulwich (Google Cache)

Fake reviews are of course extremely common on the internet. A whole industry of “click farms” has grown up in low-income countries, both to generate clicks for advertising revenue and to provide five-star reviews and comments on products, applications and services over the internet. These fake clicks have been provided by software “bots” and by virtual machine emulations of smartphones. As detection gets better, humans are engaged in this activity too, including providing text comments on blogs and reviews.

Source: https://www.mirror.co.uk/news/world-news/bizarre-click-farm-10000-phones-10419403

Major retail companies are employing AI “chatbots” to respond to Twitter or Facebook complaints, with customers engaging and becoming enraged by their responses when they get it wrong, not realising that they’re not talking to a human being. In the case of Microsoft’s machine learning chatbot Tay, it made racist and Nazi comments in less than a day, based on things that were said to it and the dataset it was using for training.

You can see that the impression of a real world is already being significantly harmed by automation controlled by people with less than fair intentions. But what about truly trying to fake real world data in order to make humans take action, or not take action when they need to?

Faking real world data

I will give you a simple example which I show to some of my IoT security students. There is a popular TV and movie trope about spoofing the information from CCTV cameras.

TVTropes.org talks about two basic examples: “the Polaroid punk”, where a picture is taken and put over the camera while the hero goes about his business (you may remember this from the Vatican scene in “Mission: Impossible III”), and “the splice and dice”, looping camera footage (which you may remember from the film “Speed”).

In my version of this trope, instead of doing this with CCTV imagery, we use the same idea against IoT data going back to a console. The specific example I chose was connected agriculture. Some very expensive crops, such as almonds and pistachios, will fail if starved of water, and drought can potentially kill the trees. It can take between 5 and 12 years for new trees to produce, so irrigation has to be managed carefully.

As farms increasingly rely on the Internet of Things to provide accurate sensing and measurement, as well as control, it is entirely conceivable that this could be attacked for a variety of reasons. An attack could involve quietly monitoring the data being sent by sensors back to the farm, then stealthily taking over that reporting with fake data. Once the hijack has taken place, the attacker could reduce the irrigation or stop it entirely, as long as the farmer is not aware. With data being sent back that looks real, it could be a long time before the attack is discovered, and it may never be detected. It may not even need to take place at the sensor end; it may be enough to hijack the reporting console or IoT hub where the data is aggregated.
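Part of why this attack works is that most of this telemetry is unauthenticated. As a counterpoint, here is a minimal sketch – my own illustration, not a feature of any real agricultural product – of a sensor authenticating its readings with a keyed hash (HMAC) so that the console can detect spoofed or replayed data. The shared key handling and field names are hypothetical.

```python
import hashlib
import hmac
import json
import time

# Hypothetical per-device secret, shared between the soil sensor and the
# farm's IoT hub at installation time (illustrative only).
SENSOR_KEY = b"per-device-secret-provisioned-at-install"

def sign_reading(sensor_id: str, moisture_pct: float, key: bytes) -> dict:
    """Package a soil-moisture reading with a timestamp and an HMAC tag."""
    msg = {"sensor": sensor_id, "moisture": moisture_pct, "ts": int(time.time())}
    payload = json.dumps(msg, sort_keys=True).encode()
    msg["tag"] = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return msg

def verify_reading(msg: dict, key: bytes, max_age_s: int = 300) -> bool:
    """Reject readings with a bad tag (spoofed) or a stale timestamp (replayed)."""
    msg = dict(msg)                       # don't mutate the caller's copy
    tag = msg.pop("tag", "")
    payload = json.dumps(msg, sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    fresh = (time.time() - msg.get("ts", 0)) <= max_age_s
    return hmac.compare_digest(tag, expected) and fresh

reading = sign_reading("orchard-7", 41.2, SENSOR_KEY)
print(verify_reading(reading, SENSOR_KEY))   # True: genuine, fresh reading
reading["moisture"] = 85.0                   # attacker substitutes a healthy value...
print(verify_reading(reading, SENSOR_KEY))   # False: tag no longer matches
```

This doesn’t make the sensor tamper-proof, of course – an attacker with the device in hand may extract the key – but it removes the trivial “replay plausible-looking packets” attack described above.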

With thousands of acres of land to manage, there is an increasing reliance on the screen of reported data in the office, rather than direct inspection on the ground to corroborate that the data is indeed real. Food security is a real issue, and it is well within the means of nation states to execute such attacks against other countries – or for the attack to be targeted in order to manipulate the market for almonds.

Of course this is just one scenario, but the integrity of eAgriculture is an example of something which can be directly linked to a nation’s security.

In the military and intelligence domain, countries around the world are taking a more active and forthright approach to cyber security rather than just defending against attacks. This is also known as “offensive cyber”. In December 2017, AFP reported that Colonel Robert Ryan of a Hawaii-based US Cybercom combat team had said the cyber troops’ mission is slightly different to the army’s traditional mission. He said: “Not everything is destroy. How can I influence by non-kinetic means? How can I reach up and create confusion and gain control?”. The New York Times had previously reported that Cybercom had been able to imitate or alter Islamic State commanders’ messages so that they directed fighters to places where they could be hit by air strikes.

Just yesterday, the UK’s Chief of the General Staff, Nick Carter, said that the UK needs to keep up with unorthodox, hybrid warfare encompassing cyber attacks.

Of the attacks that have been attributed to nation states, many have targeted civilian infrastructure – some sophisticated, others not so.

A colleague at Oxford University, Ivan Martinovic, has written about the issues with air traffic control data being modified in different types of systems. Many of these systems were created many years ago, and the authors of a joint 2016 paper, “Perception and Reality in Wireless Air Traffic Communications Security”, describe the threat model as “comparatively naïve”. In the same paper, they asked both pilots and air traffic controllers about their positions in hypothetical scenarios. 10.75% of the pilots surveyed said they didn’t know what the effect would be if wrong label indications showed up on their in-flight Traffic Collision Avoidance System (TCAS) screens.

These systems are designed to prevent mid-air collisions. 83.3% of air traffic controllers surveyed said there would be a major loss of situational awareness if information or whole targets were selectively missing from an air traffic control radar screen. It is not difficult to imagine the impact of removing planes from either pilot or air traffic controller screens, or the alternative, which is to flood them with data that doesn’t exist, particularly in a busy area like Heathrow and during bad weather. The panic and loss of awareness in a situation like that could cause very severe events. Other such attacks have been theorised against similar systems for ships at sea.

The future danger from this kind of underlying vulnerability is that the person at the computer or controls will not be a person at all; it will be another computer.

Disconnection from the matrix

Maybe all it takes is an attack to disconnect us from the matrix entirely.

In December 2017, a number of warnings were aired in the media and at events that the Russians may try to digitally starve the UK by cutting the fibre optic cables that connect us to the rest of the world. This threat is not unique to us; all the countries of the world rely on the interconnectedness of the internet. Some landlocked countries are really at the mercy of their neighbours when it comes to internet connections, and this can be used as a political weapon in the same way that access to water and the building of dams has been for a number of years.

The Battle of the Atlantic in 1941 was all about cutting off supplies to Britain through U-boat warfare.

Telecommunications cable-cutting has long been an issue in warfare but for traditionally different reasons. Barbara Tuchman’s book, The Zimmermann Telegram explains that in 1914, on the eve of the Great War, sealed orders were opened at Cable & Wireless, directing them to cut and to remove a portion of submarine cable in the English Channel. This forced the Germans to use insecure wireless for their messaging.

The situation for submarine cables today is different. Our whole economies are dependent on the resilience of cables a few inches wide. The original UK National Cyber Security Strategy in 2011 stated that around 6% of UK GDP was generated by the internet, but this probably does not capture the true nature of how human beings live their lives today and the downstream reliance on everything from purchasing goods and services to supply chain management, communication and transportation, as well as increasing government services for welfare and taxation; the list is almost endless. Each and every one of these touch points with the internet has an onward real-world effect.

The back-ends of many of these systems are cloud computing services, many of which are hosted in other countries, with very little domestic UK infrastructure to support going it alone. Our reliance is based on a globalised world, and it is increasing just as the world powers shift to a more parochial, protectionist approach.

The concept of “digital sovereignty” is something that governments around the world are happy to promote because it makes them feel that they’re more in control. Indeed the Russians themselves had stress-tested their own networks to check their preparedness and resilience. They failed their own tests, setting themselves on a path to hosting everything in their own country and further balkanising the internet. Russia’s foreign policy in this respect is clear. Foreign Minister Sergey Lavrov has repeatedly stated that Russia wishes to see “control” of the internet through the UN. The country has a long-held paranoia about perceived western control of the internet and wishes to redress the balance of power.

Economic data, the 24 hour news cycle and market shocks


Source: https://www.telegraph.co.uk/finance/markets/10013768/Bogus-AP-tweet-about-explosion-at-the-White-House-wipes-billions-off-US-markets.html

In April 2013, the Associated Press’s Twitter account caused a brief, massive stock plunge, apparently amounting to 90 billion pounds, when it tweeted: “Breaking: Two Explosions in the White House and Barack Obama is injured”.

The situation was dealt with swiftly and the market recovered quickly as the White House immediately clarified matters, but this demonstrates the impact and influence that a trusted news source can have, and the damage that can happen when it is hacked. After the event, AP staff admitted that there had been a targeted phishing campaign against them, but it is unclear who the attacker was. Other news organisations have been regularly targeted in similar ways.

Many automated systems rely on Twitter as a source of “sentiment”, which can be fed into machine learning algorithms for stock market forecasting. As such, the sell-off in many cases may have been automated, particularly as the “news polarity” of the tweet would have been strongly negative.
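To illustrate how mechanically such a pipeline can react, here is a toy polarity scorer. The word lists are hypothetical and far cruder than production systems (the paper cited just below describes much more rigorous scoring), but it shows how a naive bot could turn one hacked headline into a sell order.

```python
# Toy lexicon - purely illustrative, nothing like real sentiment tools.
NEGATIVE = {"explosion", "explosions", "injured", "crash", "attack", "dead"}
POSITIVE = {"recovery", "growth", "record", "surge", "wins"}

def news_polarity(headline: str) -> int:
    """Score a headline: +1 per positive word, -1 per negative word."""
    words = headline.lower().replace(":", " ").split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

headline = "Breaking: Two Explosions in the White House and Barack Obama is injured"
score = news_polarity(headline)   # -2: strongly negative polarity
if score <= -2:
    print("SELL")                 # a naive trading bot acts on a single tweet
```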

An interesting paper on the subject of news headlines and their relative sentiment was published in March 2015 by the Federal University of Minas Gerais in Brazil and the Qatar Computing Research Institute. The paper serves as a good description of how the headlines of breaking-news tweets can be scored according to their news polarity.

Source: https://arxiv.org/pdf/1503.07921.pdf

As an aside, they showed that around 65% of Daily Mail articles were negative, but I guess you knew that anyway!

Source: https://arxiv.org/pdf/1503.07921.pdf

Amplification of events by stock market algorithms has been seen elsewhere too. The 2010 “Flash Crash” caused 500 million dollars of losses in around half an hour. Part of the reason for this was the trading of derivatives – ETFs, or Exchange Traded Funds, which form baskets of other assets. The value of the ETFs became disconnected from the underlying value of the assets.

Navinder Singh Sarao, also known as the “Hound of Hounslow”, modified automated software to enable him to make and cancel trades at high frequency, driving prices in a particular direction. The US Justice Department claimed he made 26 million pounds over 5 years. There was another flash crash in August 2015, caused by such high-frequency trades. By this time, “circuit breakers” had been installed – and these caused the stock market to halt over 1,200 times in one day.

The recent “This is not a drill” missile alert in Hawaii, accidentally sent by an employee, was quickly reversed, but not before many people had panicked. That event happened on a Saturday; it is likely that it would have had some impact on the financial markets had it been during the week. It is not difficult to imagine the havoc that could have been caused if such a system had been compromised through a cyber attack. Indeed, it is likely that this potential has been noted by North Korea.

Source: Twitter

Machine-led attacks

In 2016, at DEFCON in Las Vegas – the world’s biggest hacking conference – I was a witness to history. The DARPA Cyber Grand Challenge final, the world’s first all-machine hacking tournament, took place using competing teams of supercomputers. Normally this kind of tournament takes place with competing humans attacking and defending systems. The key element is that this represented a point in time at which everything changed. For years we have seen the evolution of attacks with some level of machine assistance, but this was a whole new ball game. The speed at which situations changed was beyond the rate at which humans could keep up, bearing in mind that these systems had not seen any of these attacks before.

In one case, a vulnerability known as “Heartbleed” was exploited to attack one of the Cyber Grand Challenge supercomputers. It was found, patched and used in an attack against the other machines within a few short minutes. All of this was possible because of the automation, machine learning and artificial intelligence built into each computer.

There is a defence aspect to this too. In the future, we need to consider how we best utilise “the machines” for defensive purposes. The technology available to us now and in the future will enable us to use machine learning at the edge of a network to provide intelligent intrusion prevention and security monitoring systems. Our attacks will largely be machine-led, which in a crude way happens now. One can imagine agents at the edge self-healing in a different way – healing from security attacks and then getting that fix back out to other elements of the network.

Team Shellphish, who finished in third place, even open-sourced their code so that other people could develop it further.

The problem for all of us is the constraints on such AIs. If we delegate the ability to respond offensively to agents that can respond more quickly than we can, what bounds do we put in place around that? Does the adversary play by the same ruleset? What if they break the rules? Research using AI to learn different types of games has often put rules in place to make the AI follow the rules of the game, rather than cheat (for example with DeepMind), but it could be programmed to break as many rules as possible in order to succeed, for example to get information from the underlying operating system to win the game.

Then we have AI versus humans. In some respects humans have the advantage – for example, we often have the ability to disconnect: to remove power or to disengage from a computer system. An AI has physical bounds, but those bounds may also be protected from attack by its designer.

As has been seen in games that have been learned by existing, adaptable artificial intelligence, once the rules are fully understood, AIs can easily beat human players.

In September 2017, Russian President Vladimir Putin stated in a speech: “Artificial intelligence is the future, not only for Russia, but for all humankind. It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”

Elon Musk, the founder of Tesla and among a growing number of people concerned about the rise of AI, and particularly about its military uses, tweeted in response: “Competition for AI superiority at national level most likely cause of WW3 imo”.

A letter sent to the UN in August 2017, signed by the founders of 116 AI and robotics companies across the world, called for a ban on autonomous weapons, which would “permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend”. It is a sad fact that an arms race is already underway, with some autonomous weapons already deployed.

Artificial intelligence will be employed in many different domains, not just the military and cyber.

AI in economics

In economics, artificial intelligence offers humanity some positives in that the complexities of particular problems can be addressed by systems, but all of this relies on good and valid input data. It also relies on the political viewpoint that is programmed into such systems. Transparency in such AIs is going to be crucial when it comes to implementation of policy.

The use of computers for economic modelling has a long history. If you’re interested in this topic, the documentary maker Adam Curtis has made some excellent programmes about it, including the series Pandora’s Box. Most of these attempts have failed – the Soviet, paper-based planned system ended up with ludicrous situations such as trains full of goods going all the way to Vladivostok only to be turned around and sent back to Moscow. In the United States, science- and data-based solutions were applied to national problems such as the risk of nuclear war, via the RAND Corporation, using von Neumann’s game theory and Wohlstetter’s “fail-safe” strategies, but they didn’t always work. Defence Secretary Robert McNamara’s reliance on data coming back from the ground in the Vietnam War undoubtedly led to bad decisions, due to the lack of quality in the data. The same is true today. Datasets are often unreliable and inaccurate, but much more so if they have been deliberately altered.

Revolutions will need to take place in the way data is gathered before government systems will ever be accurate and reliable; and to do so would also amount to a massive breach of citizen privacy. However, it could be that some corporations are already sitting on much of the data. In 2008, Google announced in a paper in the journal Nature that they were able to “nowcast”, or predict, flu outbreaks based on searches for symptoms and medical products. They were able to make these estimates two weeks earlier than the US Centers for Disease Control. Google Flu Trends ran for a few years, but was shown to fail in a big way in 2013. Corporate systems that operate on such sets of “big data” are often opaque, which can lead to deliberate bias or inaccurate outcomes. In Google’s case, the AI they use has also recently been found to be racist in picture identification.

The UK’s Office for National Statistics operates under a Code of Practice for Official Statistics (currently under revision), with which official statistics must comply. This is consistent with the UN’s Fundamental Principles of Official Statistics and the European Statistics Code of Practice. It is designed so that the statistics produced are honest and can’t be manipulated to suit a particular government’s views or objectives.

In the future, governments will need to ensure that artificial intelligence systems operating on statistics to steer policy are compliant with such codes of practice, or that new codes are adopted.

What effect would the manipulation of national statistics have? If governments were making decisions based on falsified or modified data, it could have a profound financial, economic and human impact. At the moment, the safeguards encoded in the Code of Practice rely on the integrity of individuals and their impartiality and objectivity.

As we increasingly rely on computers to make those recommendations, what assurance do we have that they operate in line with expectations and that they have not been compromised? In theory, a long-term adversary could paint an entirely different national picture by slightly skewing the input data to national statistics in particular areas, meaning they bear no relation to the real world.

When algorithms go wrong

AIs will co-exist with other AIs and this could generate some disturbing results.

During Hurricane Irma in September 2017, automatic pricing algorithms for airline tickets caused prices to rise in line with demand, resulting in exorbitant prices for evacuating residents. Such “yield management systems” are not designed to handle such circumstances and have no concept of ethics or situational awareness. Airlines were forced to backtrack and manually cap prices after a Twitter backlash.

Source: Twitter @LeighDow

In 2011, two third-party Amazon merchants ended up in an automated pricing war over a biology book because of competing algorithms. The price reached over 23.5 million dollars for a book that was worth about 100 dollars. Where multiple interacting algorithms each rely on the others’ output data, the end results may be non-deterministic – and catastrophic where physical, real-world effects take place based on these failures. Humanity could be put in a position where it cannot control the pace of these combined events, and widespread disruption and destruction could take place in the real world.

Source: https://www.michaeleisen.org/blog/?p=358
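The feedback loop behind that number is simple enough to simulate. The sketch below uses the two repricing multipliers reported in the post linked above – profnath undercutting its rival at 0.9983×, bordeebook pricing above it at 1.270589× – with illustrative starting prices.

```python
# Repricing multipliers as reported in Michael Eisen's post (linked above).
PROFNATH_FACTOR = 0.9983      # undercut the rival slightly
BORDEEBOOK_FACTOR = 1.270589  # price above the rival

profnath, bordeebook = 100.00, 120.00   # illustrative starting prices (USD)
rounds = 0
while bordeebook < 23_500_000:          # the observed ~$23.5m peak
    profnath = round(bordeebook * PROFNATH_FACTOR, 2)
    bordeebook = round(profnath * BORDEEBOOK_FACTOR, 2)
    rounds += 1

# Each full cycle multiplies prices by ~1.2684, so they grow exponentially.
print(f"After {rounds} daily repricings: bordeebook = ${bordeebook:,.2f}")
```

Neither algorithm is broken in isolation; it is only their interaction, with no sanity check on the absolute price, that produces the absurd result.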

The human rebellion

As machines start to take over complex functions such as driving, we have already started to see humans rebel.

Artist James Bridle demonstrated this by drawing an unbroken circle within a broken one, representing a line a car could cross. This created a situation which a self-driving car could theoretically enter but couldn’t get out of – a trap. This of course was art, but researchers at a group of US universities worked out how to trick sign-recognition algorithms simply by putting stickers on stop signs.

Source: https://www.autoblog.com/2017/08/04/self-driving-car-sign-hack-stickers/

It is certainly likely that this real-world gaming of artificial intelligence is going to happen and at least in the early days of AI, it will probably be easy to do.

Nudge and Dark Patterns

Governments around the world are particular fans of “nudge techniques”; imagine if they were able to design AI to be adaptable with nudge – or social engineering, as we call it in the security world. What I see with technology today is a lot of bad nudge: manipulation of smartphone or website users into behaviours that are not beneficial to them, but are beneficial to the system provider. Such techniques are widespread and are known as “dark patterns”, a concept originally conceived by user experience expert Harry Brignull. So what is to stop these techniques being inserted into, or used against, artificial intelligence in order to game the user or to nudge the system in a different direction? Some of this used to be known as “opinion control”, and in countries like China the government is very keen to avoid situations like the Arab Spring, which would bring chaos to the country. Maintaining stability and preventing free-thinking and freedom of expression are generally the order of the day, and AI will assist in those aims.

The danger of the Digital Object Architecture

One solution that has been proposed to the problem of falsified data on the internet is to use something called the Digital Object Architecture. In theory this technology would allow for the identification of every single object and allow for full traceability and accountability. It is difficult to imagine, but a future object-based internet could make the existing internet look minuscule.

It is a technology being promoted by a specialised UN agency, the ITU, which is meeting at this very moment to discuss it. The main proponents are authoritarian regimes such as China, Russia and Saudi Arabia. This is because, as I mentioned earlier, it brings “control” to the internet – clearly the polar opposite of freedom. It is an extremely pervasive technology in theory, and it is also largely untested.

Shortly before Robert Mugabe’s detention in Zimbabwe, there had been issues with food shortages and concerns about the state of the currency, which had triggered panic buying and a currency crash. These issues were genuine, not false, but they certainly weren’t convenient for Zimbabwe or Mugabe. He reportedly announced at a bi-national meeting with South Africa that cyber technology had been abused to “undermine our economies”. The South African communications minister clarified that Mugabe intended to use the Digital Object Architecture, saying: “It helps us identify the bad guys. The Zimbabweans are interested in using this.”

So if we look at the countries that are fans of this technology, it seems to be about controlling messages and controlling truth. The Digital Object Architecture is being heavily promoted by countries that appear to fear their own people, as a way to maintain order and control and to repress free-thinking and freedom of expression. This is quite a depressing situation – the technology itself is not necessarily the problem; it is the way it is used, who controls it, and the fact that it affords citizens no ability for privacy or anonymity.

So we must seek other solutions that maintain the confidentiality, integrity and authenticity of data in order to support widespread artificial intelligence use in economies – technology that cannot be abused to repress human individuality and freedom. I have to mention blockchain here. I have read a number of articles that attempt to paint blockchains as the perfect solution to some of the problems I’ve outlined, in relation to creating some kind of federated trust across sensors and IoT systems. The problems are very real, but I feel that using blockchains to solve them creates a number of very different problems, including scalability, so they are by no means a panacea.

Conclusion

If we can’t trust governments around the world not to abuse their own citizens, and if those same governments are pushing ahead with artificial intelligence to retain and gain greater power, how can we possibly keep AI in check?

Are we inevitably on the path to the destruction of humanity? Even if governments choose to regulate and make international agreements over the use of weaponry, there will always be states that choose to ignore that and make their artificial intelligence break the rules – it would be a battle between a completely ruthless AI and one with a metaphorical hand tied behind its back.

In an age where attribution of cyber attacks is extremely difficult, how can we develop systems that prevent the gaming of automated systems through “false flag” techniques – attackers masquerading as trusted friends while meddling in and influencing decision making? It is entirely possible that attribution will become even more difficult, as the origins of systemic failure in economies become harder to detect.

How do we manage the “balance of terror” (to use a nuclear age term) between nation states? What will happen when we cannot rely on anything in the world being real?

These are the problems that we need to address now as a collective global society, to retain confidence and trust in data and also the sensors and mechanisms that gather such data. There could be many possible algorithmic straws to break the camel’s back.

Artificial intelligence is here to stay, but ethical and humane cyber security and robust engineering discipline are part of keeping it sane. Maybe, just maybe, artificial intelligence will be used for the good of humanity and for solving the world’s problems, rather than for the last war we’ll ever have.

Thank you.

The future of humanity depends on us getting security right in the Internet of Things



There isn’t a day that goes by now without another Internet of Things (IoT) security story. The details are lurid, the attacks look new and the tech is, well, woeful. You would be forgiven for thinking that nobody is doing anything about security and that nothing can be done – it’s all broken.

What doesn’t usually reach the press is what has been happening in the background from a defensive security perspective. Some industries have been doing security increasingly well for a long time. The mobile industry has been under constant attack since the late 1990s. As mobile technology and its uses have advanced, so has the necessity of security invention and innovation. Some really useful techniques and methods have been developed which could and should be transferred into the IoT world to help defend against known and future attacks. My own company is running an Introduction to IoT Security training course for those of you who are interested. There is of course a lot of crossover between mobile and the rest of IoT. Much of the world’s IoT communications will transit mobile networks and many mobile applications and devices will interact with IoT networks, end-point devices and hubs. The devices themselves often have chips designed by the same companies and software which is often very similar.

The Internet of Things is developing at an incredible rate and there are many competing proprietary standards in different elements of systems and in different industries. It is extremely unlikely there is going to be one winner or one unified standard – and why should there be? It is perfectly possible for connected devices to communicate using the network and equipment that is right for that solution. It is true that as the market settles down some solutions will fall by the wayside and others will consolidate, but we’re really not at that stage yet and won’t be for some time. Quite honestly, many industries are still trying to work out what is actually meant by the Internet of Things and whether it is going to be beneficial to them or not. 

What does good look like?

What we do know is what we don’t want. There are many lessons from recent computing history; we ignore and neglect security at our peril. The combined efforts and experiences of technology companies that spend time defending their product security, as well as those of the security research community – so often painted as the bad guys, “the hackers” – have significantly informed what good looks like. It is down to implementers to actually listen to this advice and make sure they follow it.

We know that opening the door to reports about vulnerabilities in technology products leads to fixes which bring about overall industry improvements in security. Respect on both sides has been gained through the use of Coordinated Vulnerability Disclosure (CVD) schemes by companies and now even across whole industries.

We know that regular software updates, whilst a pain to establish and maintain, are one of the best preventative and protective measures we can take against attackers: shutting the door on potential avenues for exploitation whilst closing down the window of exposure to the point where it is worthless for an attacker to even begin the research process of creating an attack.

Industry-driven recommendations and standards on IoT security have begun to emerge in the past five years. Not only that, the various bodies are interacting with one another and acting pragmatically; where a standard exists, there appears to be a willingness to endorse it and move on to areas that need fixing.

Spanning the verticals

There is a huge challenge which is unique to IoT: the diversity of uses for the various technologies and the huge number of disparate industries they span. The car industry has its own standards bodies and has to carefully consider safety aspects, as does the healthcare industry. These industries, and the government regulatory bodies related to them, all differ in their own ways. One unifying topic is security, and it is now so critically important that we get it right across all industries. With every person in the world connected, the alternative – sitting back and hoping for the best – is to risk the future of humanity.

Links to recommendations on IoT security

To pick some highlights (full disclosure: I’m involved in the first two), the following bodies have created some excellent recommendations around IoT security and continue to do so:

IoT Security Foundation Best Practice Guidelines
GSMA IoT Security Guidelines
Industrial Internet Consortium 

The whole space is absolutely huge, but I should also mention the incredible work of the IETF (Internet Engineering Task Force) and 3GPP (the mobile standards body for 5G) to bring detailed bit-level standards to reality and ensure they are secure. Organisations like the NTIA (the US National Telecommunications and Information Administration), the DHS (US Department of Homeland Security) and AIOTI (the EU Alliance for Internet of Things Innovation) have all been doing a great job helping to drive leadership on different elements of these topics.


I maintain a list of IoT security resources and recommendations on this post.

I could get that data quicker by carrier pigeon!

Regular readers of my tweets may have seen a couple of carrier pigeon ones. I don’t have any particular interest in carrier pigeons, but it is randomly interesting to see how data transfer can be done in other ways and how it compares with traditional online methods.

Image source: https://cuteoverload.com/2010/04/14/those-carrier-pigeons-just-get-smarter-and-smarter/

I was first inspired to think about this by South African Kevin Rolfe’s protest against slow download speeds from his company’s ISP in 2009: he flew a carrier pigeon carrying a 4GB memory stick, beating the equivalent download.

Memory keeps getting cheaper and smaller, while the volume of data that can be transmitted over the average home broadband connection is not getting much better, particularly in rural areas. In some places in the UK, it is probably better to get a wireless 4G contract, if coverage permits.

Anyway, back to pigeons. The Internet Engineering Task Force (IETF) has a couple of spoof RFCs which define a standard for IP over Avian Carriers (RFC 1149), with a revised version adding Quality of Service (RFC 2549).

Physical Payload

Calculating the data payload is relatively easy as you can see:

Payload of a carrier pigeon: this Reddit thread says 75g. Being unscientific as we are, we’ll go with that.

  • Weight of normal sized SD: 2 grams
  • Weight of microSD: 0.4g +/- 0.1g

So basically physically we’re talking:

  • 37 full sized SD cards (with 1/2 a full sized one to spare)
  • 187 microSDs (with 1/2 a microSD to spare)

I’m not sure exactly how these would be bundled up; I’ll leave that to a pigeon expert (which I am not).

Data Payload

This is changing on a regular basis as new SDs get released, but here are some examples:

Therefore, taking 2TB microSD cards: 187 microSDs x 2TB = 374TB data payload
Calculating Other Internet Speed Related Stats…

So let’s try and make some comparison to download speeds. I may not be right with these aspects, so please feel free to correct me in the comments and I will revise the blog.

We need to work out how fast a pigeon can go. A racing pigeon can fly up to 400 miles at an average of 92.5mph apparently. I don’t like to reference the Daily Mail but here we go. As an interesting factoid, the fastest homing pigeon is allegedly the very expensive Bolt, who sold for £300,000 at auction.

Using another reliable source (Stack Overflow), let’s work out packet time from latency and bandwidth. Using the data payload example above:

Bandwidth = Payload (374TB)

Latency = Total Time (see below)

Flight time over 400 miles (at 92.5mph)

= 4hrs, 19mins, 27.56 seconds

= 14400 + 1140 + 27.56 = 15567.56 seconds

Throughput = Data / Time

= 374TB / 15567.56 seconds

= 3.74e+14 / 15567.56 = 24,024,317,234 bytes per second

= 24.024 GB per second

Converted to Gbps = 24.024 / 0.125 (i.e. x8 bits per byte, conversion according to this site)

= 192.192 Gbps speed (woo, spooky homing pigeon IP joke in here somewhere!)
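If you’d like to check my arithmetic, here is the whole back-of-the-envelope calculation as a short script, using the same assumptions as above (75g payload, 0.4g microSD cards at 2TB each, 400 miles at 92.5mph):

```python
# Pigeon-net back-of-the-envelope: same assumptions as the sums above.
MILES = 400
MPH = 92.5
PAYLOAD_G = 75        # carrier pigeon payload, per the Reddit thread
MICROSD_G = 0.4       # weight of one microSD card
MICROSD_TB = 2        # capacity per card

cards = int(PAYLOAD_G / MICROSD_G)           # 187 cards (half a card to spare)
payload_bytes = cards * MICROSD_TB * 1e12    # 374TB = 3.74e+14 bytes

flight_s = MILES / MPH * 3600                # ~15567.57 seconds
gb_per_sec = payload_bytes / flight_s / 1e9  # ~24.024 GB/s
gbps = gb_per_sec * 8                        # bytes -> bits: ~192.19 Gbps

print(f"{cards} microSDs, {payload_bytes / 1e12:.0f}TB payload")
print(f"{flight_s:.2f}s flight: {gb_per_sec:.3f} GB/s = {gbps:.3f} Gbps")
```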

Packet Loss

Whilst we’re not likely to drop individual data packets, we may drop the whole lot. The risk of catastrophic failure is pretty high when it comes to the pigeon. Carrier pigeons were used in hostile environments quite a lot in both world wars: 32 pigeons have been awarded the “animal VC”, the Dickin Medal.

A hostile pigeon environment. Source: Wikimedia Commons: https://en.wikipedia.org/wiki/War_pigeon#/media/File:Shooting_Homing_Pigeons.png

There are some slightly safer examples. This book claims: “Out of 300 released between 53 and 73 got to Paris”. If we take the midpoint of that claim (63), we have a 21% success rate – and that is the optimistic reading. It also means we have a 79% chance of total data loss!

Summary

I didn’t consider the time that data might take to load onto a computer – I assumed instant access but obviously that wouldn’t be the case.

It isn’t likely that the world is going to start using carrier pigeons to transmit all its data, but what this does demonstrate is a viable offline mechanism for data transfer that doesn’t involve wires or antennae. The fact that I wrote this entirely on the train whilst connected (in and out, but mostly in) is quite a nice feature of the modern world, for me anyway. However, the internet and web aren’t architected for such large-latency scenarios, and the offline web appears to be neglected by most of the big information companies, who would seemingly rather you accessed the data when they can gather data about you. Perhaps it might be useful in an interplanetary or galactic internet/web as a catch-up mechanism – dump a large offline copy onto the next ship going up to a space station or planet.

Anyway, I hope you enjoyed the read and would welcome your comments!


IoT Security Resources

This is a list of useful documentation and links for anyone interested in IoT security, either for building products or as general reference material. The list is alphabetical and doesn’t denote any priority. I’ll maintain this and update it as new documentation gets published. Please feel free to add links in the comments and I will add them to the list.

Privacy-specific:

Additional papers and analysis of interest:

With special thanks to Mike Horton, Mohit Sethi, Ryan Ng and those others who have contributed or have been collecting these links on other sites, including Bruce Schneier and Marin Ivezic.

Updates:

16th July 2019: Added NIST, W3C, CSDE, IOTAC, OCF and PSA Certified

01st July 2019: Added multiple CCDS, NIST, NISC, ioXt, Internet Society, ENISA, Zachary Crockett, founder and CTO of Particle, Mozilla, IRTF, IoT Security Foundation, CTIA, Bipartisan Group, Trustonic, DIN and European Union

28th August 2018: Added [GDPR] Article 29 Data Protection Working Party, multiple AIOTI links, Atlantic Council, CableLabs, CSA, Dutch Cyber Security Council, ENISA links, European Commission and AIOTI  report, IEEE, IERC, Intel, IEC, multiple IETF links, IRTF, ISOC, IoTSF, ISO/IEC JTC 1 report, Microsoft links, MIT, NTIA, CSCC, OECD links, Ofcom, OWASP, SIAA, SAFECode links, TIA, U.S. Department of Homeland Security and US Senate

3rd July 2018: Updated broken OneM2M report, GSMA IoT security assessment, AIOTI policy doc and IETF guidance links.

6th March 2018: Added NIST draft report on cybersecurity standardisation in IoT.

14th February 2018: Added IoTSI, NIST and IRTF additional links.

1st February 2018: Updated with the following organisations: ENISA, IoT Alliance Australia, ISAC, New York City, NTIA, Online Trust Alliance, OneM2M, OWASP, Smart Card Alliance, US Food & Drug Administration. Added additional papers section.

24th April 2017: Added additional IoTSF links.

5th December 2016: Added GSMA, Nominet and OLSWANG IoT privacy links as well as AIOTI security link.

24th November 2016: Added GSMA self-assessment checklist, Cloud Security Alliance research paper, Symantec paper and AT&T CEO’s guide.

Dead on Arrival? What’s next for IoT security?

IoT security is in the news again and it is pretty grim reading. The DynDNS distributed denial of service (DDoS) attack caused many major websites to go offline. Let’s be clear – many security companies have suddenly dumped all the insecure webcams and routers that have been out there for years into the new world of the Internet of Things. It is semantics perhaps, but I think somewhat opportunistic, because much of the kit is older and generally not new-to-market IoT product. There is, however, a big issue with insecure IoT products being sold, and if not today, then tomorrow will bring further, much worse attacks using compromised IoT devices across the world.

We’re at the stage where we’re connecting more and more physical things, and those things are often quite weak from a security point of view. It appears to have only just occurred to some people that these devices can be harnessed to perform coordinated attacks on services that companies and people rely on (or on individuals, as in the case of Brian Krebs).

I fully agree with Bruce Schneier and others who have said that this is one area where government needs to step in and mandate that security is baked in rather than half-baked. The market isn’t going to sort itself out any time soon, but mitigating measures, both technical and non-technical, can be taken in the interim. This does not mean that I am expecting marks or stickers on products (they don’t work).

There are some quite straightforward measures that can be requested before a device is sold, and standards, recommendations and hardware technology already exist for creating secure products. Some of the vulnerabilities are simply unforgivable in 2016, and the competence of these companies to sell internet-connected products at all has to be questioned. Those of us in industry often see the same companies time and time again, and yet nothing ever really happens to them – they go on selling products with horribly poor levels of security. The Mirai botnet code released in September targets connected devices such as routers and surveillance cameras because they have default passwords that have not been changed by the user or owner of the device. We all know what they are: admin/admin, admin/password and so on. https://www.routerpasswords.com/ has a good list. With Mirai, the devices are simply telnetted into on port 23 and, hey presto, turned around for attack.
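To make this concrete, here is a minimal sketch in Python of the first step an attacker – or an auditor – performs: checking whether devices are listening on telnet port 23 at all. The addresses below are hypothetical placeholders; point it only at kit on your own network.

import socket

# Hypothetical addresses - audit only devices on your own network.
HOSTS = ["192.168.1.1", "192.168.1.20", "192.168.1.64"]
TELNET_PORT = 23

def telnet_open(host, timeout=2.0):
    """Return True if the host accepts TCP connections on port 23."""
    try:
        with socket.create_connection((host, TELNET_PORT), timeout=timeout):
            return True
    except OSError:
        return False

for host in HOSTS:
    if telnet_open(host):
        # An open telnet port is what Mirai's scanner looks for before
        # it tries its built-in list of factory usernames and passwords.
        print(host, "port 23 OPEN - check for unchanged default credentials")
    else:
        print(host, "port 23 closed or filtered")

Mirai automates exactly this check at internet scale and then works through its built-in credential list; any hit against your own devices is a prompt to change the password or disable telnet entirely.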

I did notice, however, that there is an outstanding bug in the Mirai code on GitHub: “Bug: Fails to destroy the Internet #8”.

Your company has to have a security mindset if you are creating a connected product. Every engineer in your organisation has to have security in mind. It is often easy to spot the companies that don’t if you know what you are looking for.

Is there another way?

At the grandly titled World Telecommunication Standardization Assembly (WTSA), starting next week in Tunisia, many countries are attempting to go further and introduce at the International Telecommunication Union (ITU) an alternative form of information management based around objects: the so-called Digital Object Architecture (DOA). Some want this to be mandated for IoT. It is worth having a look at what is being proposed, because we are told that the Digital Object Architecture is both secure and private. Great – surely this is exactly what we need? Yet when we dive a bit deeper, that doesn’t seem to be the case at all. I won’t give chapter and verse here, but I’ll point to a couple of indicators:

According to information on handle.net, the DOA relies on proprietary software for the Handle System, which resolves digital object identifiers. Version 8.1, released in 2016, has some information at https://www.handle.net/download_hnr.html, where we discover that:

• Version 8 will run on most platforms with Java 6 or higher.

A quick internet search shows that Java 6 was released in 2006 and turns up plenty of issues – for example, “Java 6 users vulnerable to zero day flaw, security experts warn” from 2013. An excerpt from that article states: “While Java 6 users remain vulnerable, the bug has been patched in Java 7. Java 6 has been retired, which means that updates are only available to paying clients.”

Another quick internet search turns up cordra.org. Cordra is described as “a core part of CNRI’s Digital Object Architecture”. In the technical manual from January 2016 on that site, we find information on default passwords (login: admin, password: changeit).

“Cordra – a core part of the Digital Object Architecture” – default passwords

If it looks bad, it usually is.

These things are like canaries: once you see them, you end up asking more questions about what kinds of architectural security issues and vulnerabilities the software contains. What security evaluation has any of this been through, and who are the developers? Who has tested it at all? I’ll come back to the privacy side at a future date.

The Digital Object Architecture is not secure.

Don’t kid yourself that the DOA is going to be any more resilient than our existing internet – the documentation shows it is built on the same technologies we already rely on: PKI-based security, with encryption algorithms that have to be deprecated and replaced when they get broken (there’s a short sketch of that dependency after the license quote below). I’m not sure how it would hold up against a DDoS attack of any sort. What this object-based internet does seem to give us, though, is a license. There are many interesting parts to it, including that it seems CNRI can kill the DOA at will just by terminating the license:

“Termination: This License Agreement may be terminated, at CNRI’s sole discretion, upon a material breach of its terms and conditions by Licensee.”
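As a rough illustration of that shared dependency, here is a small Python sketch (the hostname is just a placeholder) that asks a TLS server which protocol version and cipher suite it will negotiate – the same PKI machinery, with the same algorithm-ageing problem, that any DOA deployment would inherit.

import socket
import ssl

HOST = "example.com"  # placeholder - any TLS endpoint you are allowed to probe

def negotiated_parameters(host, port=443):
    """Connect and report the TLS version and cipher suite the server negotiates."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version(), tls.cipher()[0]

version, cipher = negotiated_parameters(HOST)
print(HOST, version, cipher)

When a negotiated algorithm is eventually broken or deprecated, every system built on it – object-based or not – has to migrate.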

So would I use this for the Internet of Things?
No! I’ve touched only the tip of the iceberg here. It seems fragile and flaky at best, probably non-functioning at worst. Let’s be honest – the technology has not been tested at scale. It currently deals with a few hundred thousand resolutions, rather than the billions the internet handles, and I can’t imagine it would have coped with “1.2 terabits per second of data”. Operating at internet scale is a whole different ball game, and this is what some people just don’t get – incidentally, IETF members pointed this out to CNRI researchers back in the early 2000s on the IETF mailing lists (I will try to dig out the link at some point to add here).

Summary

Yes, we need to get better, but let’s first work together and get on the case with device security. We also need to get better at sinkholing and dropping the traffic that floods networks, through various means including future measures such as protocol redesign. Some people have suggested simply blocking port 23 (telnet access) as an immediate measure. There will be many future attacks that really do use the Internet of Things, but that doesn’t mean we have to tear up our existing internet for an even less secure, untested alternative in the DOA. The grass is not always greener on the other side.

Some more links to recommendations on IoT security can be found below:

Other bodies are also doing work on security, but at an earlier stage, including the W3C’s Web of Things working group.

Edit: 30/10/16 – typos and added IETF list


Improving Anti-Theft Measures for Mobile Devices

I’m pleased to say that the latest version of the GSMA SG.24 Anti-Theft Device Feature Requirements has been published. Many members of the Device Security Group I chair at the GSMA have been personally committed to trying to reduce the problem of mobile theft over many years. This represents just one small part of these continued efforts.

There is no magic solution to the problem of mobile theft, as I’ve discussed many times (some posts are listed below). The pragmatic approach we’ve taken is to discuss this work openly with all the interested parties, including OS vendors such as Apple, Google and Microsoft, and to reach out to police and government, particularly in the US and the UK, where the subject has been of high interest. We’ve taken their feedback and incorporated it into the work. Everyone has a part to play in reducing theft of mobile devices, not least the owner of the device itself.

Some extra resources:

Some previous blogs on mobile theft:

Introducing the work of the IoT Security Foundation

At Mobile World Congress this year, I agreed to give an interview introducing the IoT Security Foundation to Latin American audiences. If you’re interested in IoT security and our work at the Foundation, this video is well worth a watch. Enjoy!

IoT Security from Rafael A. Junquera on Vimeo.

 

Improving IoT Security

I am involved in a few initiatives aimed at improving IoT security. My company wrote the original IoT security strategy for the GSMA and we have been involved ever since, culminating in the publication of a set of IoT Security Guidelines which can be used by everyone from device manufacturers through to solution providers and network operators. Here’s a short video featuring me and other industry security experts explaining what we’re doing.

There’s still a long way to go with IoT security, and we’ve still got to change the “do nothing” or “it’s not our problem” mindset around big topics like safety when it comes to the cyber-physical world. Each step we take along the road is one step closer to better security in IoT, and these documents represent a huge leap forward.