A Code of Practice for Security in Consumer IoT Products and Services

 

Today is a good day. The UK government has launched its Secure by Design report, and it marks a major step forward for the UK in Internet of Things (IoT) security.
Embedded within the report is a draft “Code of Practice for Security in Consumer IoT Products and Associated Services”, which I authored in collaboration with DCMS and with input and feedback from various parties including the ICO and the NCSC.
I have been a passionate advocate of strong product security since I worked at Panasonic, where I established the product security function in their mobile phone division, through to the mobile recommendations body OMTP, where, as the mobile industry, we established the basis of hardware security and trust for future devices. We’re certainly winning in the mobile space – devices are significantly harder to breach, despite being under constant attack. This isn’t because of one single thing; it is multiple aspects of security built on the experiences of previous platforms and products. As technologies have matured, we’ve been able to implement things like software updates more easily and to establish what good looks like. Other aspects, such as how to interact with security researchers or the best architectures for separating computing processes, have also been learned over time.
Carrying over product security fundamentals into IoT
 
This isn’t the case however for IoT products and services. It feels in some cases like we’re stepping back 20 years. Frustratingly for those of us who’ve been through the painful years, the solutions already exist in the mobile device world for many of the problems seen in modern, hacked IoT devices. They just haven’t been implemented in IoT. This also applies to the surrounding ecosystem of applications and services for IoT. Time and again, we’re seeing developer mistakes such as a lack of certificate validation in mobile applications for IoT, which are entirely avoidable.
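To make the certificate-validation point concrete, here is a minimal Python sketch of what a client talking to an IoT backend should be doing. The function name `secure_client_context` is illustrative, not from any particular SDK:

```python
import ssl

def secure_client_context() -> ssl.SSLContext:
    """Build a TLS context with full certificate validation.

    These settings are the defaults in modern Python; they are set
    explicitly here because the common developer mistake is switching
    them off to silence certificate errors during development.
    """
    context = ssl.create_default_context()
    context.check_hostname = True            # name on the cert must match the server
    context.verify_mode = ssl.CERT_REQUIRED  # cert must chain to a trusted CA
    return context
```

Wrapping a socket with this context (`context.wrap_socket(sock, server_hostname=host)`) will then refuse to talk to a server presenting a self-signed or mismatched certificate – exactly the check that is so often disabled.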
There is nothing truly ground-breaking within the Code of Practice. Many of the measures are not difficult to implement, but what we’re saying is that enough is enough. It is time to start putting houses in order, because we just can’t tolerate bad practice any more. For too long, vendors have shipped products which are fundamentally insecure because no attention has been paid to security design. We have a choice: we can either have a lowest-common-denominator approach to security, or we can say “this is the bar and you must at least have these basics in place”. In 2018 it simply isn’t acceptable to have things like default passwords and open ports. This is how attacks like Mirai happen. The guidance addresses those issues; had it been in place, the huge impact of Mirai simply would not have occurred. Now is the time to act, before the situation gets worse and people get physically hurt.

The prioritisation of the guidance was something we discussed at length. The top three – eliminating the practice of default passwords, giving security researchers a way to disclose vulnerabilities, and keeping software updated – were chosen because addressing these elements as a priority will have a huge beneficial impact on overall cyber security, creating a much more secure environment for consumers.
We’re not alone in saying this. Multiple governments and organisations around the world are concerned about IoT security and are publishing security recommendations to help. This includes the US’s NIST, Europe’s ENISA and organisations such as the GSMA and the IoT Security Foundation. I maintain a living list of IoT security guidance from around the world on this blog.
So, in order to make things more secure and ultimately safer (because a lot of IoT is already potentially life-impacting), it’s time to step things up. Many parts of the IoT supply chain are already doing a huge amount on security, and those organisations are likely already meeting the guidance in the Code of Practice, but it is evident that a large number of products are failing even on the basics.
Insecurity Canaries
Measuring security is always difficult, which is why we decided to create an outcomes-based approach. We want retailers and other parts of the supply chain to be able to easily identify what bad looks like. Some of the basics, like eliminating default passwords or providing security researchers with a way to report vulnerabilities, can be seen as insecurity canaries – if the basics aren’t in place, what does that say about the more complex elements that are harder to see or inspect?
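As an illustration only – the field names below are hypothetical, not taken from the Code of Practice – a retailer-side check for these canaries might look like:

```python
# Hypothetical product-record fields mapped to 'canary' failures,
# following the top three priorities in the Code of Practice.
CANARIES = {
    "has_default_password": "ships with a universal default password",
    "no_disclosure_route": "no way for researchers to report vulnerabilities",
    "no_update_mechanism": "software cannot be updated",
}

def insecurity_canaries(product: dict) -> list[str]:
    """Return the visible basic failures for a product record."""
    return [reason for field, reason in CANARIES.items() if product.get(field)]
```

An empty list doesn’t mean the product is secure – only that the easily observable basics appear to be in place.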
Another reason to focus on outcomes was that we were very keen to avoid stifling creativity when it came to security solutions, so we’ve avoided being prescriptive other than to describe best practice approaches or where bad practices need to be eliminated.
The Future

I am looking forward to developing the work further based on the feedback from the informal consultation on the Code of Practice. I support the various standards and recommendations mapping exercises going on which will fundamentally make compliance a lot easier for companies around the world. I am proud to have worked with such a forward-thinking team on this project and look forward to contributing further in the future.

Additional Resources

I’ve also written about how the Code of Practice would have prevented major attacks on IoT:

Need to know where to go to find out about IoT security recommendations and standards?

Here’s a couple more things I’ve written on the subject of IoT security:

Mobile World Congress 2018

View of the new Fira when flying in to Barcelona

This week it is Mobile World Congress, the biggest event in the mobile industry calendar. If you’re interested in meeting for a chat or just hearing about mobile and IoT security & privacy, I’ll be at the following places!

Sunday 25th February
6th GSMA IoT Summit
13:00-17:30
NH Collection Barcelona Tower Hotel

Copper Horse annual security dinner at a secret location in Barcelona
21:00-late (tweet me or message if you want to come along)

Monday 26th February
4YFN – “Hidden Threats and Opportunities to my Business”
Panelist: “Spotlight – How Data and Cyber Security can make or break a new business?”
16:15-17:15
4YFN (at the old Fira), Fira Barcelona Montjuïc, Av. Reina Maria Cristina

Tuesday 27th February
IoT Tuesday, hosted by Cellusys, supported by JT Group and the IoT Security Foundation
17:00-late Cellusys event – I’ll be giving an opening talk on behalf of the IoT Security Foundation: “The Ticking Clock” – why security in IoT is critical to how you run your business.
Tweet me if you want to attend.

Wednesday 28th February
16:30-17:30
Why Should we Trust your Digital Security?
A fireside chat with Jean Gonie (VEON): Data, Consumer Protection and the GDPR
Auditorium 3, Hall 4 (on-site at MWC)

I’ll be at a few other events and will generally be around and about the MWC main site all week so please feel free to get in contact. Speaking of Barcelona, we’re holding our next training, “Foundations of IoT Security” in May in the city. More details and sign-up can be found on the IoTSF website.

 

The Real World is Not Real: Manipulation of Data and People in the Future Internet

On the 24th of January 2018 I gave my professorial lecture at York St John University in the UK. I decided to choose the subject of fakeness: the manipulation of data by attackers and its potential impact on people and machines if trusted. The transcript is below. I’ve added in additional links so you can see sources and informative links for further reading. I hope you enjoy the read, feel free to leave comments or tweet me.

 

The Real World is Not Real: Manipulation of Data and People in the Future Internet
David Rogers

Introduction

Thank you all for coming today to my inaugural lecture. I am honoured and humbled both to be in this position and that you are here today to listen. I’m also particularly grateful to Dr. Justin McKeown for engaging me in the first place with such a great university and for our Vice Chancellor Karen Stanton’s support.

We have been on a steady path towards more advanced artificial intelligence for a number of years now. Bots in the software sense have been around for a long time. They’ve been used in everything from online gaming to share dealing. The AI in computer chess games has been easily able to beat most users for many years. We still however have a long way to go towards full sentience and we don’t even fully understand what that is yet.

In the past couple of years we have seen the widespread use of both automation and rudimentary AIs in order to manipulate people, particularly it seems in elections.

Manipulation of people

I hate to use the term fake news, but it has taken its place in world parlance. The United States Senate is currently investigating the use of trolls by Russia in order to manipulate the course of the 2016 presidential election. Through the use of Twitter accounts and Facebook advertising, a concerted attempt was made to influence opinion.

Source: https://www.huffingtonpost.co.uk/entry/russian-trolls-fake-news_us_58dde6bae4b08194e3b8d5c4

Investigative journalist Carole Cadwalladr published a report in May 2017 entitled “The great British Brexit robbery: how our democracy was hijacked”, which is an interesting read. While I should note that the report is the subject of legal complaints by Cambridge Analytica, there are certainly interesting questions to be asked as to why so much of the Leave campaigns’ money was ploughed into online targeting. The ICO is currently investigating how voters’ personal data is captured and used in political campaigns.

Whatever the case, it seems that humans can easily be influenced and there are many academic papers on this subject.

It also seems that some technology companies have been unwittingly duped into undermining free and fair elections – and have actually profited from it – through the manipulation of their targeted advertising at certain segments of populations. This represents a significant threat to democracy and is still ongoing right now where elections are taking place.

Fake news is not a new thing. In one example from the Second World War, the Belgian resistance created “Le Faux Soir”, a replacement for the newspaper “Le Soir”, and distributed it slightly before the normal deliveries arrived. The fake edition carried stories about how badly the war was going for the Germans, and nearly all of the copies were taken before anyone realised. Le Soir’s modern version has been attacked since then – by IS supporters in 2015 – although, like many hacks on media sites, it was more a form of defacement.

Source: https://jonmilitaria44.skyrock.com/2815465118-Le-FAUX-Soir.html

What I’m really interested in at the moment is the next stage of all of this; the manipulation of data and therefore machines that use or rely on it.

Disconnecting real world objects and modification of real world things

The Internet of Things or IoT is something that I take a particular interest in and I have spent a lot of time looking at different aspects of security. We’ve had lots of interesting attacks on IoT and the so-called cyber-physical world on everything from children’s toys to power plants.

Whilst some attacks are entirely logical and we generally threat model for them, the increasing connectedness of the world means that “lone wolf” attackers can amplify their attacks in ways that they could not before. Some people used to ask me in talks why anyone would want to do things like attack remote insulin pumps or pacemakers. I would respond with the question, “why do people put glass in baby food?”

The tampering and contamination of food has happened on a regular basis. The psychology behind such attacks is surely one of the powerless gaining power; in those cases, often causing a product recall, financial harm and embarrassment to a disgruntled employee’s company or, in the biggest case in the UK in 1989, extorting money from the manufacturer.

In the Internet of Things, you often hear about these types of threats in terms of catastrophic impact, for example “stopping all the cars on a motorway”, “all the diabetics in the US”, “anyone with a pacemaker” or “poisoning water supplies”. Threats are made up of a combination of Intent, Capability and Opportunity. Without any one of the three, an attack is unlikely to be successful. One side effect of the IoT is that it is giving a new breed of attackers opportunity.
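The intent, capability and opportunity model can be captured in a few lines. A toy sketch:

```python
from dataclasses import dataclass

@dataclass
class Threat:
    """The three elements a threat is made of."""
    intent: bool
    capability: bool
    opportunity: bool

    def viable(self) -> bool:
        # Remove any one of the three and the attack is unlikely to succeed.
        return self.intent and self.capability and self.opportunity
```

What IoT changes is the `opportunity` term: exposure that once required physical presence is now reachable over the network.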

Connecting my house to the internet gives someone in another country the theoretical ability to watch webcams in my house – if you don’t believe me, have a look at a tool called Shodan (the capability) and search for webcams (your intent).

In January 2018, Munir Mohammed and his pharmacist partner were convicted of planning a terrorist attack. Of some concern, it was also reported that he had been researching ricin and had a job in a food factory making sauces for ready meals for Tesco and Morrisons. The media made a strong case for a link with his work there, although there was no evidence to that effect. It does raise a frightening prospect, however: the potential for an intent of a different sort – not the traditional disgruntled employee wishing to taint food and scare parents, but a motivation born of radicalisation. An attacker of this nature is eminently more dangerous than the disgruntled employee.

The employee intends to harm the employer rather than a child (in most cases), whereas the terrorist actually intends to kill the people who consume the product through pure hatred (in the case of ISIS). It is therefore entirely conceivable that this could be a high impact, but low likelihood risk that needs to be considered in the Internet of Things.

There have been incidents pointing towards potentially catastrophic future events. In 2016, the US mobile network operator Verizon reported that a hacktivist group linked to Syria had managed to change the valves which controlled the levels of chemicals in tap water, releasing dangerous levels of those chemicals into the water supply. The scary part is that anyone with an IP connection, in any part of the world, with the intent, capability and opportunity could theoretically execute this on a number of connected water treatment systems. So it becomes a different numbers game – the number of potential attackers rises dramatically (many with different motivations), as does the amount of system exposure.

Many of these industrial systems run on very old equipment which is not segregated and can easily be reached via the internet. There are other examples too – in 2014, the German Federal Office of Information Security published details of a hack which caused serious damage to a blast furnace in a steel mill, after a worker was compromised via a phishing email.

What I’m trying to say here is that people with ill intentions whether it be nation states or terrorists could and are in some cases attempting to attack real world infrastructure using our connectedness and technology to achieve their aims.

Inventing the real world

Most of the attacks we have talked about are targeting and modification of real world things via the internet – from the inside out. But what about from the outside in? What if the real world wasn’t real, as presented to humans and other systems?

A couple of weeks ago, Oobah Butler posted his article on how he managed to make his fake restaurant “The Shed At Dulwich” the most popular restaurant in London. He had the idea after previously being paid £10 for each positive comment he gave real restaurants, transforming their fortunes. Eventually Oobah decided to open the shed for real for one night, hiring in actors and serving up microwave meals to the guests – some of whom tried to book again! The restaurant has now been deleted from Tripadvisor, but I managed to get a cached version of it, as you can see.

The Shed at Dulwich (Google Cache)

Fake reviews are of course extremely common on the internet. A whole industry of “click farms” in low income countries has grown up to both generate clicks to generate advertising revenue and to provide 5 star reviews and comments of products, applications and services over the internet. These fake clicks have been provided by software “bots” and through virtual machine emulations of smartphones. As detection gets better, humans are engaged in this activity including providing text comments on blogs and reviews.

Source: https://www.mirror.co.uk/news/world-news/bizarre-click-farm-10000-phones-10419403

Major retail companies are employing AI “chatbots” to respond to Twitter or Facebook complaints, with customers engaging with – and becoming enraged by – their responses when they get it wrong, not realising that they’re not talking to a human being. In the case of Microsoft’s machine-learning chatbot Tay, it made racist and Nazi comments in less than a day, based on the things that were said to it and the dataset it was using for training.

You can see that the impression of a real world is already being significantly harmed by automation controlled by people with less than fair intentions. But what about truly trying to fake real world data in order to make humans take action, or not take action when they need to?

Faking real world data

I will give you a simple example which I show to some of my IoT security students. There is a popular TV and movie trope about spoofing the information from CCTV cameras.

Tvtropes.org talks about two basic examples – “the polaroid punk” – where a picture is taken and put over the camera while the hero goes about his business (you may remember this from the Vatican scene in “Mission Impossible 3”) and the second being “the splice and dice” – looping camera footage (which you may remember from the film “Speed”).

In my version of this trope, instead of doing this with CCTV imagery, we use the same technique against IoT data going back to a console. The specific example I chose was connected agriculture. Some very expensive crops, such as almonds or pistachios, will lose their yield if starved of water, and a lack of water can kill the trees themselves. It can take between 5 and 12 years for new trees to become productive, so irrigation has to be managed carefully.

As farms increasingly rely on the Internet of Things to provide accurate sensing and measurement, as well as control, it is entirely conceivable that this could be attacked for a variety of reasons. An attack could involve quietly monitoring the data being sent by sensors back to the farm, then stealthily taking over that reporting with fake data. Once the hijack has taken place, the attacker could reduce the irrigation or stop it entirely, as long as the farmer is not aware. With data being sent back that looks real, it could be a long time before the attack is discovered and it may never be detected. It may not even need to take place at the sensor end, it may be enough to hijack the reporting console or IoT hub where the data is aggregated.
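One crude defence against this kind of hijack is to check that a sensor feed behaves like real, noisy physical measurement. The heuristic below is illustrative only – a real system would use statistical tests and authenticated sensors – and simply flags a feed whose latest window of readings exactly repeats the previous one:

```python
def looks_replayed(readings: list[float], window: int = 6) -> bool:
    """Flag a feed whose last `window` readings exactly repeat the
    window before them. Genuine soil-moisture readings drift and
    carry noise, so a perfect repeat is suspicious."""
    if len(readings) < 2 * window:
        return False  # not enough history to compare
    return readings[-window:] == readings[-2 * window:-window]
```

A replaying attacker who records and loops a clean stretch of data would trip this check; one who adds plausible noise would not, which is why integrity ultimately needs to come from authenticating the sensors themselves.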

With thousands of acres of land to manage, there is an increasing reliance on the screen of reported data in the office, rather than direct inspection on the ground to corroborate that the data is indeed real. Food security is a real issue, and it is well within the means of nation states to execute such attacks against other countries – or the attack could be targeted in order to manipulate the market for almonds.

Of course this is just an example, but the integrity of eAgriculture is an example of something which can be directly linked to a nation’s security.

In the military and intelligence domain, countries around the world are taking a more active and forthright approach to cyber security rather than just defending against attacks. This is also known as “offensive cyber”. In December 2017, AFP reported that Colonel Robert Ryan of a Hawaii based US Cybercom combat team had said the cyber troops’ mission is slightly different to the army’s traditional mission. He said: “Not everything is destroy. How can I influence by non-kinetic means? How can I reach up and create confusion and gain control?”. The New York Times had previously reported that Cybercom had been able to imitate or alter Islamic State commanders’ messages so that they directed fighters to places where they could be hit by air strikes.

Just yesterday, the UK’s Defence Chief of General Staff, Nick Carter said that the UK needs to keep up with unorthodox, hybrid warfare encompassing cyber attacks.

From the attacks that have been attributed to nation states, many of these have attacked civilian infrastructure, some of them sophisticated, others not so.

A colleague at Oxford University, Ivan Martinovic, has written about the issues with air traffic control data being modified in different types of systems. Many of these systems were created many years ago, and the authors of a joint 2016 paper, “Perception and Reality in Wireless Air Traffic Communications Security”, describe the threat model as “comparatively naïve”. In the same paper, they asked both pilots and air traffic controllers about their positions in hypothetical scenarios. 10.75% of the pilots surveyed said they didn’t know what the effect would be if wrong label indications were shown on their in-flight Traffic Collision Avoidance System screens.

These systems are designed to prevent mid-air collisions. 83.3% of air traffic controllers surveyed said there would be a major loss of situational awareness if information or whole targets were selectively missing from an air traffic control radar screen. It is not difficult to imagine the impact of removing planes from either pilot or air traffic controller screens – or the alternative, flooding them with data that doesn’t exist – particularly in a busy area like Heathrow and during bad weather. The panic and loss of awareness in a situation like that could cause very severe events. Other such attacks have been theorised against similar systems for ships at sea.

The future danger that we face from this kind of underlying vulnerability is that the person at the computer or controls in the future will not be a person, it will be another computer.

Disconnection from the matrix

Maybe all it takes is an attack to disconnect us from the matrix entirely.

In December 2017, a number of warnings were aired in the media and at events that the Russians may try to digitally starve the UK by cutting the fibre optic cables that connect us to the rest of the world. This is not a unique threat to us; all the countries of the world rely on the interconnectedness of the internet. Some landlocked countries are really at the mercy of some of their neighbours when it comes to internet connections and it can be used as a political weapon in the same ways that access to water and the building of dams is and has been for a number of years.

The Battle for the Atlantic in 1941 was all about cutting off supplies to Britain through U-Boat warfare.

Telecommunications cable-cutting has long been an issue in warfare but for traditionally different reasons. Barbara Tuchman’s book, The Zimmermann Telegram explains that in 1914, on the eve of the Great War, sealed orders were opened at Cable & Wireless, directing them to cut and to remove a portion of submarine cable in the English Channel. This forced the Germans to use insecure wireless for their messaging.

The situation for submarine cables today is different. Our whole economies are dependent on the resilience of cables a few inches wide. The original UK National Cyber Security Strategy in 2011 stated that around 6% of UK GDP was generated by the internet, but this probably does not capture the true nature of how human beings live their lives today, nor the downstream reliance on everything from purchasing goods and services to supply chain management, communication and transportation, as well as increasing government services for welfare and taxation; the list is almost endless. Each and every one of these touch points with the internet has an onward real-world effect.

The back-end of many of these systems are cloud computing services, many of which are hosted in other countries with very little domestic UK infrastructure to support going it alone. Our reliance is based on a globalised world and it is increasing; just as the world powers shift to a more parochial, protectionist approach.

The concept of “digital sovereignty” is something that governments around the world are happy to promote because it makes them feel that they’re more in control. Indeed the Russians themselves had stress-tested their own networks to check their preparedness and resilience. They failed their own tests, setting themselves on a path to hosting everything in their own country and further balkanising the internet. Russia’s foreign policy in this respect is clear. Foreign Minister Sergey Lavrov has repeatedly stated that Russia wishes to see “control” of the internet through the UN. The country has a long-held paranoia about perceived western control of the internet and wishes to redress the balance of power.

Economic data, the 24 hour news cycle and market shocks

 

Source: https://www.telegraph.co.uk/finance/markets/10013768/Bogus-AP-tweet-about-explosion-at-the-White-House-wipes-billions-off-US-markets.html

In April 2013, the Associated Press’s Twitter account caused a brief, massive stock plunge, apparently amounting to 90 billion pounds, when it tweeted: “Breaking: Two Explosions in the White House and Barack Obama is injured.”

The situation was dealt with and the market recovered quickly, as the White House immediately clarified matters, but this demonstrates the impact and influence that a trusted news source can have and the damage that can happen when it is hacked. After the event, AP staff admitted that there had been a targeted phishing campaign against them, but it is unclear who the attacker was. Other news organisations have also been regularly targeted in similar ways.

Many automated systems rely on Twitter as a source of “sentiment” which can be linked to machine learning algorithms for stock market forecasting with sentiment analysis. As such, the sell-off in many cases may have been automated, particularly as the “news polarity” would have been negative.
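A toy version of the lexicon-based headline scoring such systems build on might look like the following. The word lists here are made up for illustration; real sentiment lexicons contain thousands of scored terms:

```python
import string

# Tiny illustrative lexicons -- real systems use large, weighted ones.
NEGATIVE = {"explosion", "explosions", "injured", "crash", "losses", "attack"}
POSITIVE = {"recovers", "record", "growth", "wins", "gains"}

def polarity(headline: str) -> int:
    """+1 per positive word, -1 per negative word; 0 means neutral."""
    score = 0
    for raw in headline.lower().split():
        word = raw.strip(string.punctuation)
        if word in POSITIVE:
            score += 1
        elif word in NEGATIVE:
            score -= 1
    return score
```

A trading system keying off a score like this would have read the hacked AP tweet as strongly negative, which is consistent with an automated sell-off.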

An interesting paper on the subject of news headlines and their relative sentiment was published in March 2015 by the University of Minas Gerais in Brazil and the Qatar Computing Research Institute. The paper serves as a good description of how scores can be attributed to headlines of breaking news article tweets as to its news polarity.

Source: https://arxiv.org/pdf/1503.07921.pdf

As an aside, they showed that around 65% of Daily Mail articles were negative, but I guess you knew that anyway!

Source: https://arxiv.org/pdf/1503.07921.pdf

Amplification of events by stock market algorithms has been seen in other cases. The 2010 “Flash Crash” caused 500 million dollars of losses in around half an hour. Part of the reason for this was the trading of derivatives – ETFs, or Exchange Traded Funds, which form baskets of other assets. The value of the ETFs became disconnected from the underlying value of the assets.

Navinder Singh Sarao, also known as the “Hound of Hounslow”, modified automated trading software to enable him to make and cancel trades at high frequency, driving prices in a particular direction. The US Justice Department claimed he made 26 million pounds over 5 years. There was another flash crash in August 2015, caused by such high-frequency trades. By this time, “circuit breakers” had been installed – and they caused the stock market to halt over 1,200 times in one day.
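A circuit breaker is conceptually just a threshold check on price moves. A deliberately naive sketch (the 7% figure mirrors the first-level US market-wide halt threshold, but everything else here is simplified for illustration):

```python
def should_halt(reference_price: float, last_price: float,
                limit: float = 0.07) -> bool:
    """Halt trading if the price has fallen more than `limit`
    (e.g. 7%) relative to the reference price."""
    if reference_price <= 0:
        raise ValueError("reference price must be positive")
    drop = (reference_price - last_price) / reference_price
    return drop > limit
```

Applied per instrument across thousands of securities gapping at once, even a simple rule like this fires constantly – which is how a single session can see over a thousand halts.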

The recent “This is not a drill” missile alert in Hawaii, accidentally sent by an employee, was quickly reversed, but not before many people had panicked. That event happened on a Saturday; it is likely that it would have had some impact on the financial markets had it happened during the week. It is not difficult to imagine the havoc that could be caused if such a system were compromised through a cyber attack. Indeed, it is likely that this potential has been noted by North Korea.

Source: Twitter

Machine-led attacks

In 2016, at DEFCON in Las Vegas, which is the world’s biggest hacking conference, I was a witness to history. The DARPA Grand Challenge final of the world’s first all-machine hacking tournament took place using competing teams of supercomputers. Normally this kind of tournament takes place with competing humans attacking and defending systems. The key element of this is the fact that this represented a point in time in which everything changed. For years now we have seen the evolution of attacks with some level of machine assistance, but this was a whole new ball-game. The speed at which situations changed was beyond the rate at which humans could keep up, bearing in mind that these systems had not seen any of these attacks before.

In one case, a vulnerability known as “Heartbleed” was exploited to attack one of the DARPA Grand Challenge supercomputers. It was found, patched and used in an attack against the other machines within a few short minutes. All of this was possible because of the automation, machine learning and artificial intelligence built into each computer.

There is a defence aspect to this too. In the future, we need to consider how we best utilise “the machines” for defensive purposes. The technology available to us now and in the future will enable us to use machine learning at the edge of a network to provide intelligent intrusion prevention and security monitoring systems. Our attacks will largely be machine-led, which in a crude way happens now. One can imagine agents at the edge self-healing in a different way – healing from security attacks and then getting that fix back out to other elements of the network.

Team Shellphish, who finished in third place, even open-sourced their code so that other people could develop it further.

The problem for all of us is the constraints on such AIs. If we delegate the ability to respond offensively to agents that can respond more quickly than we can, what bounds do we put in place around that? Does the adversary play by the same ruleset? What if they break the rules? Research using AI to learn different types of games has often put rules in place to make the AI follow the rules of the game rather than cheat (for example with DeepMind), but an AI could equally be programmed to break as many rules as possible in order to succeed – for example, to pull information from the underlying operating system to win the game.

Then we have AI versus humans. In some respects humans have the advantage – for example, we often have the ability to disconnect: to remove power or to disengage from a computer system. The AI has physical bounds, but those bounds may also be protected from attack by its designer.

As has been seen in games that have been learned by existing, adaptable artificial intelligence, once the rules are fully understood, AIs can easily beat human players.

In September 2017, Russian President Vladimir Putin stated in a speech that “Artificial intelligence is the future, not only for Russia, but for all humankind,”. “It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”

Elon Musk, the founder of Tesla and among a growing number of people concerned about the rise of AI, and particularly about its military uses, tweeted in response: “Competition for AI superiority at national level most likely cause of WW3 imo”.

A letter sent to the UN in August 2017, signed by the founders of 116 AI and robotics companies across the world, called for a ban on autonomous weapons, which would "permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend". It is a sad fact that an arms race is already underway, with some autonomous weapons already deployed.

Artificial intelligence will be employed in many different domains, not just the military and cyber ones.

AI in economics

In economics, artificial intelligence offers humanity some positives in that the complexities of particular problems can be addressed by systems, but all of this relies on good and valid input data. It also relies on the political viewpoint that is programmed into such systems. Transparency in such AIs is going to be crucial when it comes to implementation of policy.

The use of computers for economic modelling has a long history. If you're interested in this topic, the documentary maker Adam Curtis has made some excellent programmes about it, including the series Pandora's Box. Most of these attempts have failed: the Soviet paper-based planned system ended up with ludicrous situations such as trains full of goods going all the way to Vladivostok, only to be turned around and sent back to Moscow. In the United States, science- and data-based solutions were applied, via the RAND Corporation, to national problems such as the risk of nuclear war, using von Neumann's game theory and Wohlstetter's "fail-safe" strategies, but they didn't always work. Defence Secretary Robert McNamara's reliance on data coming back from the ground in the Vietnam War undoubtedly led to bad decisions because of the poor quality of that data. The same is true today. Datasets are often unreliable and inaccurate – and much more so if they have been deliberately altered.

Revolutions will need to take place in the way data is gathered before government systems are ever accurate and reliable, and gathering it could also amount to a massive breach of citizen privacy. However, some corporations may already be sitting on much of the data. In 2008, Google announced in a paper in the journal Nature that they were able to "nowcast" flu outbreaks based on searches for symptoms and medical products, estimating outbreaks around two weeks earlier than the US Centers for Disease Control and Prevention. Google Flu Trends ran for a few years, but was shown to fail in a big way in 2013. Corporate systems that operate on such sets of "big data" are often opaque, which can lead to deliberate bias or inaccurate outcomes. In Google's case, their image-recognition AI was also recently found to produce racist results in picture identification.

The UK's Office for National Statistics operates under a Code of Practice for Official Statistics (currently under revision), with which official statistics must comply. This is consistent with the UN's Fundamental Principles of Official Statistics and the European Statistics Code of Practice. It is designed to ensure that the statistics produced are honest and cannot be manipulated to suit a particular government's views or objectives.

In the future, governments will need to ensure that artificial intelligence based systems operating on statistics to steer policy comply with such codes of practice, or that new codes are adopted.

What effect would the manipulation of national statistics have? If governments were making decisions based on falsified or modified data, it could have a profound financial, economic and human impact. At the moment, the safeguards encoded in the Code of Practice rely on the integrity of individuals and their impartiality and objectivity.

As we increasingly rely on computers to make those recommendations, what assurance do we have that they operate in line with expectations and that they have not been compromised? In theory, a long-term adversary could create a completely different national picture by slightly skewing the input data to national statistics in particular areas, so that they bear no relation to the real world.
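The arithmetic behind that worry is simple, and a toy simulation makes it concrete: quietly inflating a small fraction of input records shifts a headline statistic, even though every individual record still looks plausible. All the figures below are invented for illustration.

```python
# Toy data-poisoning demo: 5% of records inflated by 50% shifts the
# mean by roughly 0.05 * 0.5 = 2.5%, with no single record standing out.
import random

random.seed(42)
true_incomes = [random.gauss(30_000, 5_000) for _ in range(10_000)]

# Adversary inflates every 20th record by 50% -- each one plausible alone.
poisoned = [x * 1.5 if i % 20 == 0 else x for i, x in enumerate(true_incomes)]

true_mean = sum(true_incomes) / len(true_incomes)
poisoned_mean = sum(poisoned) / len(poisoned)
drift = (poisoned_mean - true_mean) / true_mean
print(f"headline statistic drifted by {drift:.1%}")
```

A 2.5% drift in a national income or inflation figure, sustained over years, would be more than enough to steer policy in the wrong direction, which is exactly the long-term skew the paragraph above describes.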

When algorithms go wrong

AIs will co-exist with other AIs and this could generate some disturbing results.

During Hurricane Irma in September 2017, automatic pricing algorithms for airline tickets caused prices to rise in line with demand, resulting in exorbitant prices for evacuating residents. Such "yield management systems" are not designed to handle these circumstances and have no concept of ethics or situational awareness. Airlines were forced to backtrack and manually cap prices after a Twitter backlash.

Source: Twitter @LeighDow

In 2011, two third-party Amazon merchants ended up in an automated pricing war over a biology book because of competing algorithms. The price reached over 23.5 million dollars for a book that was worth about 100 dollars. Where multiple algorithms interact, each relying on the others' output data, the end results may be non-deterministic, and may be catastrophic where physical, real-world effects follow from these failures. Humanity could be put in a position where we cannot control the pace of these combined events, and widespread disruption and destruction could take place in the real world.

Source: https://www.michaeleisen.org/blog/?p=358
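The feedback loop is easy to reconstruct. In his write-up (linked above), Michael Eisen reported that one seller repriced at 0.9983 times its rival and the other at 1.270589 times the rival; everything else in this sketch (the starting price, the number of cycles) is an illustrative assumption.

```python
# Reconstruction of the 2011 Amazon pricing war: two repricing rules
# with no sanity cap compound by roughly 27% per full cycle.

def pricing_war(start_price: float, cycles: int):
    undercutter = premium_seller = start_price
    for _ in range(cycles):
        undercutter = 0.998300 * premium_seller   # just undercut the rival
        premium_seller = 1.270589 * undercutter   # fixed premium over the rival
    return undercutter, premium_seller

a, b = pricing_war(100.0, 55)   # ~two months of daily repricing
print(f"undercutter: ${a:,.2f}  premium seller: ${b:,.2f}")
```

After around 55 daily cycles a 100-dollar book passes the 23.5-million-dollar mark observed in the real incident: neither rule is individually absurd, but their composition is an exponential with no brake.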

The human rebellion

As machines start to take over complex functions such as driving, we have already started to see humans rebel.

Artist James Bridle demonstrated this by drawing an unbroken circle within a broken one, representing a line a car could cross. This created a situation which a self-driving car could theoretically enter but couldn’t get out of – a trap. This of course was art, but researchers at a group of US universities worked out how to trick sign-recognition algorithms simply by putting stickers on stop signs.

Source: https://www.autoblog.com/2017/08/04/self-driving-car-sign-hack-stickers/

It is certainly likely that this real-world gaming of artificial intelligence is going to happen and at least in the early days of AI, it will probably be easy to do.

Nudge and Dark Patterns

Governments around the world are particular fans of "nudge techniques"; imagine if they were able to design AI to be adaptable with nudge – or social engineering, as we call it in the security world. What I see with technology today is a lot of bad nudge: manipulation of smartphone or website users into behaviours that are not beneficial to them, but are beneficial to the system provider. Such techniques are widespread and are known as "dark patterns", a term coined by user experience expert Harry Brignull. So what is to stop these techniques being inserted into, or used against, artificial intelligence in order to game the user or to nudge the system in a different direction? Some of this used to be known as "opinion control", and in countries like China the government is very keen to avoid situations like the Arab Spring which would bring chaos to the country. Maintaining stability and preventing free thinking and freedom of expression are generally the order of the day, and AI will assist in those aims.

The danger of the Digital Object Architecture

One solution that has been proposed to the problem of falsified data on the internet is to use something called the Digital Object Architecture. In theory this technology would allow for the identification of every single object and allow for full traceability and accountability. It is difficult to imagine, but a future object-based internet could make the existing internet look minuscule.

It is a technology being promoted by a specialised UN agency called the ITU, which is meeting at this very moment to talk about it. The main proponents are authoritarian regimes such as China, Russia and Saudi Arabia. This is because, as I mentioned earlier, it brings "control" to the internet – the polar opposite of freedom. It is an extremely pervasive technology in theory, and also largely untested.

Shortly before Robert Mugabe's detention in Zimbabwe, there had been issues with food shortages and concerns about the state of the currency, which had triggered panic buying and a currency crash. These issues were genuine, not false, but they certainly weren't convenient for Zimbabwe or Mugabe. He reportedly announced at a bi-national meeting with South Africa that cyber technology had been abused to "undermine our economies". The South African communications minister clarified that Mugabe intended to use the Digital Object Architecture, saying: "It helps us identify the bad guys. The Zimbabweans are interested in using this."

So if we look at the countries that are fans of this technology, it seems to be about controlling messages and controlling truth. The Digital Object Architecture is being heavily promoted by countries that appear to fear their own people, as a way to maintain order and control and to repress free thinking and freedom of expression. This is quite a depressing situation. The technology itself is not necessarily the problem; it is the way it is used, who controls it, and the fact that it affords citizens no ability for privacy or anonymity.

So we must seek other solutions that do maintain the confidentiality, integrity and authenticity of data in order to support widespread artificial intelligence use in economies, but that cannot be abused to repress human individuality and freedom. I have to mention blockchain here. I have read a number of articles that attempt to paint blockchains as the perfect solution to some of the problems I've outlined here, in relation to creating some kind of federated trust across sensors and IoT systems. The problems are very real, but I feel that using blockchains to solve them creates a number of very different problems, including scalability, so they are by no means a panacea.

Conclusion

If we can't trust governments around the world not to abuse their own citizens, and if those same governments are pushing ahead with artificial intelligence to retain and gain greater power, how can we possibly keep AI in check?

Are we inevitably on the path to the destruction of humanity? Even if governments choose to regulate and make international agreements over the use of weaponry, there will always be states that choose to ignore that and make their artificial intelligence break the rules – it would be a battle between a completely ruthless AI and one with a metaphorical hand tied behind its back.

In an age where attribution of cyber attacks is extremely difficult, how can we develop systems that prevent the gaming of automated decision making through "false flag" techniques, where adversaries masquerade as trusted friends whilst meddling in and influencing decisions? Attribution may become more difficult still, because the origins of systemic failure in economies will be hard to detect.

How do we manage the “balance of terror” (to use a nuclear age term) between nation states? What will happen when we cannot rely on anything in the world being real?

These are the problems that we need to address now as a collective global society, to retain confidence and trust in data and also the sensors and mechanisms that gather such data. There could be many possible algorithmic straws to break the camel’s back.

Artificial intelligence is here to stay, but ethical and humane cyber security and robust engineering discipline are part of keeping it sane. Maybe, just maybe, artificial intelligence will be used for the good of humanity and for solving the world's problems, rather than for the last war we'll ever have.

Thank you.

The future of humanity depends on us getting security right in the Internet of Things



There isn't a day that goes by now without another Internet of Things (IoT) security story. The details are lurid, the attacks look new and the tech is, well, woeful. You would be forgiven for thinking that nobody is doing anything about security and that nothing can be done – that it's all broken.

What doesn’t usually reach the press is what has been happening in the background from a defensive security perspective. Some industries have been doing security increasingly well for a long time. The mobile industry has been under constant attack since the late 1990s. As mobile technology and its uses have advanced, so has the necessity of security invention and innovation. Some really useful techniques and methods have been developed which could and should be transferred into the IoT world to help defend against known and future attacks. My own company is running an Introduction to IoT Security training course for those of you who are interested. There is of course a lot of crossover between mobile and the rest of IoT. Much of the world’s IoT communications will transit mobile networks and many mobile applications and devices will interact with IoT networks, end-point devices and hubs. The devices themselves often have chips designed by the same companies and software which is often very similar.

The Internet of Things is developing at an incredible rate and there are many competing proprietary standards in different elements of systems and in different industries. It is extremely unlikely there is going to be one winner or one unified standard – and why should there be? It is perfectly possible for connected devices to communicate using the network and equipment that is right for that solution. It is true that as the market settles down some solutions will fall by the wayside and others will consolidate, but we’re really not at that stage yet and won’t be for some time. Quite honestly, many industries are still trying to work out what is actually meant by the Internet of Things and whether it is going to be beneficial to them or not. 

What does good look like?

What we do know is what we don't want. We have many lessons from recent computing history, and we ignore and neglect security at our peril. The combined efforts and experiences of technology companies that spend time defending their product security, together with those of the security research community – so often painted as the bad guys, "the hackers" – have significantly informed what good looks like. It is down to implementers to actually listen to this advice and make sure they follow it.

We know that opening the door to reports about vulnerabilities in technology products leads to fixes which bring about overall industry improvements in security. Respect on both sides has been gained through the use of Coordinated Vulnerability Disclosure (CVD) schemes by companies and now even across whole industries.

We know that regular software updates, whilst a pain to establish and maintain, are one of the best preventative and protective measures we can take against attackers: shutting the door on potential avenues for exploitation whilst shrinking the window of exposure to the point where it is not worth an attacker even beginning the research process of creating an attack.

Industry-driven recommendations and standards on IoT security have begun to emerge in the past five years. Not only that, the various bodies are interacting with one another and acting pragmatically; where a standard exists there appears to be a willingness to endorse it and move on to areas that need fixing.

Spanning the verticals

There is a huge challenge particular to IoT: the diversity of uses for the various technologies and the huge number of disparate industries they span. The car industry has its own standards bodies and has to carefully consider safety aspects, as does the healthcare industry. These industries, and the government regulatory bodies related to them, all differ in their own ways. One unifying topic is security, and it is now critically important that we get it right across all industries. With every person in the world connected, sitting back and hoping for the best is to risk the future of humanity.

Links to recommendations on IoT security

To pick some highlights (full disclosure: I'm involved in the first two), the following bodies have created some excellent recommendations around IoT security and continue to do so:

IoT Security Foundation Best Practice Guidelines
GSMA IoT Security Guidelines
Industrial Internet Consortium 

The whole space is absolutely huge, but I should also mention the incredible work of the IETF (Internet Engineering Task Force) and 3GPP (the mobile standards body for 5G) to bring detailed bit-level standards to reality and ensure they are secure. Organisations like the NTIA (the US National Telecommunications and Information Administration), the DHS (the US Department of Homeland Security) and AIOTI (the EU Alliance for Internet of Things Innovation) have all been doing a great job helping to drive leadership on different elements of these topics.


I maintain a list of IoT security resources and recommendations on this post.

IoT Security Resources

This is a list of useful documentation and links for anyone interested in IoT security, either for building products or as general reference material. The list is alphabetical and doesn’t denote any priority. I’ll maintain this and update it as new documentation gets published. Please feel free to add links in the comments and I will add them to the list.


Privacy-specific:


Additional papers and analysis of interest:

With special thanks to Mike Horton, Mohit Sethi, Ryan Ng and those others who have contributed or have been collecting these links on other sites, including Bruce Schneier and Marin Ivezic.


Updates:
28th August 2018: Added [GDPR] Article 29 Data Protection Working Party, multiple AIOTI links, Atlantic Council, CableLabs, CSA, Dutch Cyber Security Council, ENISA links, European Commission and AIOTI  report, IEEE, IERC, Intel, IEC, multiple IETF links, IRTF, ISOC, IoTSF, ISO/IEC JTC 1 report, Microsoft links, MIT, NTIA, CSCC, OECD links, Ofcom, OWASP, SIAA, SAFECode links, TIA, U.S. Department of Homeland Security and US Senate
3rd July 2018: Updated broken OneM2M report, GSMA IoT security assessment, AIOTI policy doc and IETF guidance links.
6th March 2018: Added NIST draft report on cybersecurity standardisation in IoT.
14th February 2018: Added IoTSI, NIST and IRTF additional links.
1st February 2018: Updated with the following organisations: ENISA, IoT Alliance Australia, ISAC, New York City, NTIA, Online Trust Alliance, OneM2M, OWASP, Smart Card Alliance, US Food & Drug Administration. Added additional papers section.
24th April 2017: Added additional IoTSF links.
5th December 2016: Added GSMA, Nominet and OLSWANG IoT privacy links as well as AIOTI security link.

24th November 2016: Added GSMA self-assessment checklist, Cloud Security Alliance research paper, Symantec paper and AT&T CEO’s guide.

 

Dead on Arrival? What’s next for IoT security?

IoT security is in the news again and it is pretty grim reading. The DynDNS distributed denial of service (DDoS) attack caused many major websites to go offline. Let's be clear: many security companies have suddenly lumped all the insecure webcams and routers that have been out there for years into the new world of the Internet of Things. It is perhaps semantic, but I think somewhat opportunistic, because much of the kit is older and generally not your new-to-market IoT products. There is, however, a big issue with insecure IoT products being sold, and if not today, tomorrow will bring further, much worse attacks using compromised IoT devices across the world.

We're at the stage where we're connecting more physical things, and those things are often quite weak from a security point of view. It appears that it has only just occurred to some people that these devices can be harnessed to perform coordinated attacks on services that companies and people rely on (or on individuals, in the case of Brian Krebs).

I fully agree with Bruce Schneier and others who have said that this is one area where government needs to step in and mandate that security needs to be baked in rather than half-baked. The market isn’t going to sort itself out any time soon, but mitigation, both technical and non-technical can be taken in the interim. This does not mean that I am expecting marks or stickers on products (they don’t work).

There are some quite straightforward measures that can be requested before a device is sold, and some standards, recommendations and physical technology are available to create secure products. Some of the vulnerabilities are simply unforgivable in 2016, and the competence of these companies to sell internet-connected products at all has to be questioned. Those of us in industry often see the same companies time and time again and yet nothing ever really happens to them – they go on selling products with horribly poor levels of security. The Mirai botnet code released in September targets connected devices such as routers and surveillance cameras because they have default passwords that have not been changed by the user or owner of the device. We all know what they are: admin/admin, admin/password and so on – https://www.routerpasswords.com/ has a good list. With Mirai, the devices are telnetted into on port 23 and, hey presto, turned around for attack.
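The check at the heart of Mirai is almost embarrassingly simple, which is rather the point. The sketch below shows the credential test itself; the list is a tiny illustrative sample (Mirai shipped with around sixty such pairs), and a real audit tool would attempt these over telnet on port 23 against devices you own and flag any that accept them.

```python
# Minimal default-credential check, as used (in spirit) by Mirai and by
# defensive audit tools alike. Sample pairs only -- not Mirai's full list.

DEFAULT_CREDENTIALS = {
    ("admin", "admin"),
    ("admin", "password"),
    ("root", "root"),
    ("root", "admin"),
    ("user", "user"),
}

def uses_default_credentials(username: str, password: str) -> bool:
    """Return True if the login pair is a known factory default."""
    return (username, password) in DEFAULT_CREDENTIALS

assert uses_default_credentials("admin", "admin")
assert not uses_default_credentials("admin", "Tr0ub4dor&3")
```

That a set-membership test over a handful of strings was enough to build one of the largest botnets ever seen says everything about the state of the devices, not the sophistication of the attacker.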

I did notice that there is an outstanding bug in the Mirai code to be resolved however, on github: “Bug: Fails to destroy the Internet #8”

Your company has to have a security mindset if you are creating a connected product. Every engineer in your organisation has to have security in mind. It is often easy to spot the companies that don’t if you know what you are looking for.

Is there another way?

At the grandly titled World Telecommunication Standardization Assembly (WTSA), starting next week in Tunisia, many countries are attempting to go further at the International Telecommunication Union (ITU) and introduce an alternative form of information management based around objects: the so-called Digital Object Architecture (DOA). Some want this to be mandated for IoT. It is worth having a look at what is being proposed, because we are told that the Digital Object Architecture is both secure and private. Great – surely this is what we need to help us? Yet when we dive a bit deeper, that doesn't seem to be the case at all. I won't give chapter and verse here, but I'll point to a couple of indicators:

According to handle.net, the DOA relies on proprietary software for the handle system, which resolves digital object identifiers. Version 8.1, released in 2016, has some information at https://www.handle.net/download_hnr.html, where we discover that:

• Version 8 will run on most platforms with Java 6 or higher.

A quick internet search reveals that Java 6 was released in 2006 and has plenty of known issues – for example, "Java 6 users vulnerable to zero day flaw, security experts warn" from 2013. An excerpt from that article states: "While Java 6 users remain vulnerable, the bug has been patched in Java 7. Java 6 has been retired, which means that updates are only available to paying clients."

Another quick internet search turns up "cordra.org". Cordra is described as "a core part of CNRI's Digital Object Architecture". In the technical manual from January 2016 on that site, we find information on default passwords (login: admin, password: changeit).

“Cordra – a core part of the Digital Object Architecture” – default passwords

If it looks bad, it usually is.

These things are like canaries – once you see them you end up asking more questions about what kinds of architectural security issues and vulnerabilities this software contains. What security evaluation has any of this stuff been through and who are the developers? Who has tested it at all? I’ll come back to the privacy bit at a future date.

The Digital Object Architecture is not secure.

Don't kid yourself that the DOA is going to be any more resilient than our existing internet – the documentation shows it is based on the same technologies we rely on today: PKI-based security, relying on encryption algorithms that have to be deprecated and replaced when they get broken. I'm not sure how it would hold up against a DDoS attack of any sort. What this object-based internet does seem to give us, though, is a license. There are many interesting parts to it, including that CNRI can apparently kill the DOA at will just by terminating the license:

“Termination: This License Agreement may be terminated, at CNRI’s sole discretion, upon a material breach of its terms and conditions by Licensee.”

So would I use this for the Internet of Things?
No! I've touched the tip of the iceberg here. It seems fragile and flaky at best, probably non-functioning at worst. Let's be honest – the technology has not been tested at scale; it currently has to deal with a few hundred thousand resolutions, rather than the billions the internet handles. I can't imagine that it would have been able to handle "1.2 terabits per second of data". Operating at internet scale is a whole different ball game and this is what some people just don't get – incidentally, IETF members pointed this out to CNRI researchers back in the early 2000s on the IETF mailing lists (I will try to dig out the link at some point to add here).

Summary

Yes, we need to get better, but let's first work together and get on the case with device security. We also need to get better at sinkholing and dropping traffic that floods networks, through various different means, including future measures such as protocol re-design. Some people have said to just block port 23 (telnet access) as an immediate measure. There'll be many future attacks that really do use the Internet of Things, but that doesn't mean we have to tear up our existing internet to provide an even less secure, untested version with the DOA. The grass is not always greener on the other side.

Some more links to recommendations on IoT security can be found below:

Other bodies are also doing work on security, at an earlier stage, including the W3C's Web of Things working group.
Edit: 30/10/16 – typos and added IETF list


Introducing the work of the IoT Security Foundation

At Mobile World Congress this year, I agreed to give an interview introducing the IoT Security Foundation to Latin American audiences. If you’re interested in IoT security and our work at the Foundation, you should find this video interesting. Enjoy!

IoT Security from Rafael A. Junquera on Vimeo.

 

Improving IoT Security

I am involved in a few initiatives aimed at improving IoT security. My company wrote the original IoT security strategy for the GSMA and we have been involved ever since, culminating in the publication of a set of IoT Security Guidelines which can be used by device manufacturers through to solution providers and network operators. Here’s a short video featuring me and other industry security experts explaining what we’re doing.

There’s still a long way to go with IoT security and we’ve still got to change the “do nothing” or “it’s not our problem” mindset around big topics like safety when it comes to the cyber physical world. Each step we take along the road is one step closer to better security in IoT and these documents represent a huge leap forward.

When the “Apple Encryption Issue” reached Piers Morgan

How can we have an intelligent and reasoned debate about mobile device forensics?

I woke up early this morning after getting back late from this year’s Mobile World Congress in Barcelona. It has been a long week and I’ve been moderating and speaking at various events on cyber security and encryption throughout the week. It won’t have escaped anyone’s notice that the “Apple encryption issue” as everyone seems to have referred to it, has been at the top of the news and I have been asked what I think pretty much every day this week. Late last night, I’d seen a twitter spat kicking off between comedy writer and director Graham Linehan and Piers Morgan on the topic, but went to bed, exhausted from the week.

It was still being talked about this morning. My friend Pat Walshe who is one of the world’s leading mobile industry privacy specialists, had quoted a tweet from Piers Morgan:


Ironically, Piers Morgan himself has been accused of overseeing the hacking of phones, something which he has repeatedly denied, despite Mirror Group Newspapers admitting that some stories may have been obtained by illegal means during his tenure and having recently paid compensation to victims of phone (voicemail) hacking, a topic about which I have written in the past.
This week I'll be up at York St John University, where they've asked me to teach cyber security to their undergraduate computer scientists. The reason I agreed to teach there was because they highly value ethical concerns, something I will be weaving into all our discussions this week. The biggest question these students will have is the "what would you do?" scenario in relation to the San Bernardino case.
The moral aspects have been widely debated, with Apple's Tim Cook bringing the debate, in my view, to a distasteful low by somehow linking the issue to cancer. I've tried to stay out of the debate until now because it has become a circus: people who don't understand the technical aspects pontificating about how easy it is to break into devices, versus encryption activists who won't accept anything less than "encrypt all the things" (some of whom also don't understand the technical bits). I sincerely hope there isn't a backlash on me from either side for just voicing an opinion; some friends of mine have deliberately stayed quiet for that reason. I'm exercising my right to free speech and I hope people respect that.
The truth is, this is not a question of technology, engineering and encryption; it is a question of policy and what we as a society want and expect. If a member of my family were murdered, would I expect the police to be able to do their job and investigate everything that was on that person's phone? Absolutely. Conversely, if I were accused of a crime that I didn't commit and I wasn't in a position to hand over the password (see Matthew Green's muddy puddle test), would I also want them to do it? Of course. It is called justice.

Dealing with the world as it is

The mobile phones and digital devices of today replace all the scraps of notepaper, letters, diaries and pictures that would previously have been left around our lives. If someone is murdered or something horrific happens to them, this information can be used to enable the lawful investigation of a crime. In the past, the scenes of crime officer and the defence team would have examined all of these items and ultimately presented the evidence in court, contributing to a case for or against. Now consider today's world. Everything is on our phone: our diaries and notes are digital, our pictures are on our phones, our letters are emails or WhatsApp messages. So at the scene of a crime, the police may literally be faced with a body and a phone. How is the crime solved and how is justice done? The digital forensic data is the case.
Remember, someone who has actually committed a crime is probably going to say they didn't do it. The phone data itself is usually more reliable than witness and defendant testimony in telling the story of what actually happened, and criminals know that. I've been involved with digital forensics for mobile devices in the past and have seen first-hand the conviction of criminals who continually denied having committed a serious crime, despite their phone data stating otherwise. This has brought redress to their victims' families and brought justice for someone who can no longer speak.
There is no easy answer

On the other side of course, we’re carrying these objects around with us every day and the information can be intensely private. We don’t want criminals or strangers to steal that information. The counter-argument is that the mechanisms and methods to facilitate access to encrypted material would fall into the hands of the bad guys. And this is the challenge we face – there is absolutely no easy answer to this. People are also worried that authoritarian regimes will use the same tools to help further oppress their citizens and make it easier for the state to set people up. Sadly I think that is going to happen anyway in some of those places, with or without this issue being in play.

US companies are also fighting hard to sell products globally and they need to recover their export position following the Snowden revelations. It is in their business interests to be seen to fight these orders in order to sell product. It appears that Tim Cook wants to reinforce Apple’s privacy marketing message through this fight. Other less scrupulous countries are probably rubbing their hands in glee watching this show, whilst locally banning encryption, knowing that they’ll continue doing that and attempting to block US-made technology whatever the outcome of the case.
Hacking around

Even now, I have seen tweets from iPhone hackers who are more than capable of attempting to solve this current case, and no doubt they would gain significantly, financially, from doing so – because the method they develop could potentially be transferable.

This is the same battle that my colleagues in the mobile world fight on a daily basis – a hole is found and exploited and we fix it; a continual technological arms race to see who can do the better job. Piers Morgan has a point, just badly put – given enough time, effort and money, the San Bernardino device and its encryption could be broken into – it will just cost a hell of a lot. It won’t be broken by a guy in a shop on Tottenham Court Road (see my talk on the history of mobile phone hacking to understand this a bit more).

Something that has not been discussed is the ludicrous situation we now have whereby private forensic companies seem to be ‘developing’ methods to get into mobile handsets when in actual fact many of them will either re-package hacking and rooting tools and pass them off as their own solutions, or purchase exploits from black and grey markets at premium prices. This is very frustrating for the mobile industry as it contributes to security problems. Meanwhile, the Police are being forced to try to do their jobs with not just one hand tied behind their back – it now seems like two. So what should we do about that? What do we consider to be “forensically certified” if the tools are based on fairly dirty hacks?
How do we solve the problem?
We as democratic societies ask and expect our Police forces to be able to investigate crimes under a legal framework that we all accept via the people we elect to Parliament or Senate. If the law needs to be tested, then that should happen through a court – which is exactly what is happening now in the US. What we’re seeing is democracy in action; it’s messy, but at least people in the US and the UK have that option. Many people around the world do not.
On the technical side, we will also need to consider the multitude of connected devices coming to market for smart homes, connected cars and things we haven’t even thought of yet as part of the rapidly growing “Internet of Things”. I hate to say it, but in the future digital forensics is going to become ever more complex, and perhaps the privacy issues for individuals will centre on what a few large technology companies are doing behind your back with your own data, rather than on the Police trying to do their job with a legal warrant. Other companies need to be ready to step up to ensure consumers are not the product.
I don’t have a clear solution to the overall issue of encrypted devices, and I don’t think you’ll thank me for writing another thousand words on the topic of key escrow. Most of the time I respond to people by saying it is significantly complex. The issues we are wrestling with now do need to be debated, but that debate needs to be intellectually sound; unfortunately we are hearing a lot from people with loud voices, and less from the people who really understand. The students I’m meeting next week will be not only our future engineers, but possibly future leaders of companies and even politicians, so it is important that they understand every angle. It is their future, and every other young person’s, that matters in the final decision over San Bernardino.

Personally, I just hope that I don’t keep getting angry and end up sat in my dressing gown until lunchtime writing about tweets I saw at breakfast time.

 

Exploring Threats to IoT Security

I was recently invited to give a talk on the IoT threat landscape at Bletchley Park as part of NMI’s IoT Security Summit. Of course you can only touch the surface in 30 minutes, but the idea was to give people a flavour of the situation and to point to some potential solutions to avoid future badness. My company, Copper Horse, is doing a lot of work on this topic right now and it is pretty exciting for us to be involved in helping to secure the future for everyone and every thing, right across the world.

If you’re thinking about developing an IoT product or service and need some help with securing it, do feel free to get in touch with us.