The Real World is Not Real: Manipulation of Data and People in the Future Internet

On the 24th of January 2018 I gave my professorial lecture at York St John University in the UK. I chose the subject of fakeness: the manipulation of data by attackers and its potential impact on people and machines if it is trusted. The transcript is below. I’ve added links so you can see sources and find further reading. I hope you enjoy the read – feel free to leave comments or tweet me.

 

The Real World is Not Real: Manipulation of Data and People in the Future Internet
David Rogers

Introduction

Thank you all for coming today to my inaugural lecture. I am honoured and humbled both to be in this position and that you are here today to listen. I’m also particularly grateful to Dr. Justin McKeown for engaging me in the first place with such a great university and for our Vice Chancellor Karen Stanton’s support.

We have been on a steady path towards more advanced artificial intelligence for a number of years now. Bots in the software sense have been around for a long time. They’ve been used in everything from online gaming to share dealing. The AI in computer chess games has been easily able to beat most users for many years. We still however have a long way to go towards full sentience and we don’t even fully understand what that is yet.

In the past couple of years we have seen the widespread use of both automation and rudimentary AIs in order to manipulate people, particularly it seems in elections.

Manipulation of people

I hate to use the term fake news, but it has taken its place in world parlance. The United States Senate is currently investigating the use of trolls by Russia in order to manipulate the course of the 2016 presidential election. Through the use of Twitter accounts and Facebook advertising, a concerted attempt was made to influence opinion.

Source: https://www.huffingtonpost.co.uk/entry/russian-trolls-fake-news_us_58dde6bae4b08194e3b8d5c4

Investigative journalist Carole Cadwalladr published a report in May 2017 entitled “The great British Brexit robbery: how our democracy was hijacked”, which is an interesting read. While I should note that the report is the subject of legal complaints by Cambridge Analytica, there are certainly interesting questions to be asked as to why so much of the Leave campaigns’ money was ploughed into online targeting. The ICO is currently investigating how voters’ personal data is captured and used in political campaigns.

Whatever the case, it seems that humans can easily be influenced and there are many academic papers on this subject.

It also seems that some technology companies have been unwittingly duped into undermining free and fair elections – and have actually profited from it – through the manipulation of their targeted advertising aimed at certain segments of populations. This represents a significant threat to democracy and is still ongoing right now, wherever elections are taking place.

Fake news is not a new thing. In one example from the Second World War, the Belgian resistance created “Le Faux Soir” to replace the newspaper “Le Soir”, distributing it slightly before the normal deliveries arrived. The fake Soir was full of stories about how badly the war was going for the Germans, and nearly all of the copies were taken before anyone realised. Le Soir’s modern version has been attacked since then, by IS supporters in 2015, although like many hacks on media sites it was more a form of defacement.

Source: https://jonmilitaria44.skyrock.com/2815465118-Le-FAUX-Soir.html

What I’m really interested in at the moment is the next stage of all of this; the manipulation of data and therefore machines that use or rely on it.

Disconnecting real world objects and modification of real world things

The Internet of Things or IoT is something that I take a particular interest in and I have spent a lot of time looking at different aspects of security. We’ve had lots of interesting attacks on IoT and the so-called cyber-physical world on everything from children’s toys to power plants.

Whilst some attacks are entirely logical and we generally threat model for them, the increasing connectedness of the world has meant that “lone wolf” attackers have the potential to amplify their attacks in such a way that they could not before. Some people used to ask me in talks about why people would want to do things like attack remote insulin pumps or pacemakers. I would respond with the question, “why do people put glass in baby food?”

The tampering and contamination of food has happened on a regular basis. The psychology behind such attacks is surely one of the powerless gaining power; in those cases often causing a product recall, financial harm and embarrassment to a disgruntled employee’s company or, as in the biggest UK case in 1989, extorting money from the manufacturer.

In the Internet of Things, you often hear about these types of threats in terms of catastrophic impact, for example “stopping all the cars on a motorway”, “all the diabetics in the US”, “anyone with a pacemaker” or “poisoning water supplies”. Threats are made up of a combination of intent, capability and opportunity. Without any one of the three, an attack is unlikely to succeed. One side effect of the IoT is that it gives a new breed of attackers opportunity.
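The intent–capability–opportunity combination can be sketched as a toy model. The attacker profiles below are invented for illustration; the point is simply that connecting systems to the internet hands the opportunity element to attackers who previously lacked it.

```python
# Toy model of the intent-capability-opportunity view of threats.
# Attacker profiles are invented for illustration only.
def viable(intent, capability, opportunity):
    # A threat is only viable when all three elements are present.
    return intent and capability and opportunity

attackers = {
    "disgruntled employee": dict(intent=True, capability=True, opportunity=False),
    "remote hacktivist":    dict(intent=True, capability=True, opportunity=False),
}

# Connecting a device to the internet grants every remote attacker opportunity.
for profile in attackers.values():
    profile["opportunity"] = True

print({name: viable(**p) for name, p in attackers.items()})
```

Before the devices go online, neither threat is viable; afterwards, both are.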

Connecting my house to the internet gives someone in another country the theoretical ability to watch webcams in my house – if you don’t believe me, have a look at a tool called Shodan (the capability) and search for webcams (your intent).

In January 2018, Munir Mohammed and his pharmacist partner were convicted of planning a terrorist attack. Of some concern, it was also reported that he had been researching ricin and that he had a job making sauces for ready meals for Tesco and Morrisons in a food factory. The media made a strong case that there was a link with his work there, although there was no evidence to that effect. It does raise a frightening prospect however: the potential for an intent of a different sort – not the traditional type of disgruntled employee wishing to taint food and scare parents, but a motivation caused by radicalisation. An attacker of this nature is far more dangerous than the disgruntled employee.

The employee intends to harm the employer rather than a child (in most cases), whereas the terrorist actually intends to kill the people who consume the product through pure hatred (in the case of ISIS). It is therefore entirely conceivable that this could be a high impact, but low likelihood risk that needs to be considered in the Internet of Things.

There have been incidents pointing towards potentially catastrophic future events. In 2016, the US mobile network operator Verizon reported that a hacktivist group linked to Syria managed to change the valves which controlled the levels of chemicals in tap water, releasing dangerous levels of those chemicals into the water supply. The scary part of this is that anyone with an IP connection, in any part of the world, with the intent, capability and opportunity could theoretically execute this on any number of connected water treatment systems. So it becomes a different numbers game – the number of potential attackers rises dramatically (many with different motivations), as does the amount of system exposure.

Many of these industrial systems run on very old equipment which is not segregated and can easily be reached via the internet. There are other examples too – in 2014, the German Federal Office of Information Security published details of a hack which caused serious damage to a blast furnace in a steel mill, after a worker was compromised via a phishing email.

What I’m trying to say here is that people with ill intent, whether nation states or terrorists, could attack – and in some cases are attempting to attack – real-world infrastructure, using our connectedness and technology to achieve their aims.

Inventing the real world

Most of the attacks we have talked about are targeting and modification of real world things via the internet – from the inside out. But what about from the outside in? What if the real world wasn’t real, as presented to humans and other systems?

A couple of weeks ago, Oobah Butler posted his article on how he managed to make his fake restaurant “The Shed At Dulwich” the most popular restaurant in London. He had the idea after he had previously been paid £10 for each positive comment he gave real restaurants, transforming their fortunes. Eventually Oobah decided to open the shed for real for one night, hiring in actors and serving up microwave meals to the guests – some of whom tried to book again! The restaurant has been deleted from Tripadvisor now, but I managed to get a cached version of it, as you can see.

The Shed at Dulwich (Google Cache)

Fake reviews are of course extremely common on the internet. A whole industry of “click farms” in low-income countries has grown up, both to generate clicks for advertising revenue and to provide 5-star reviews and comments on products, applications and services. These fake clicks have been provided by software “bots” and through virtual machine emulations of smartphones. As detection gets better, humans are increasingly engaged in this activity, including writing text comments on blogs and reviews.

Source: https://www.mirror.co.uk/news/world-news/bizarre-click-farm-10000-phones-10419403

Major retail companies are employing AI “chatbots” to respond to Twitter or Facebook complaints, with customers engaging and becoming enraged by the responses when they get it wrong, not realising that they’re not talking to a human being. In the case of Tay, Microsoft’s machine learning chatbot, it made racist and Nazi comments in less than a day, based on the things that were said to it and the dataset it was using for training.

You can see that the impression of a real world is already being significantly harmed by automation controlled by people with less than fair intentions. But what about truly trying to fake real world data in order to make humans take action, or not take action when they need to?

Faking real world data

I will give you a simple example which I show to some of my IoT security students. There is a popular TV and movie trope about spoofing the information from CCTV cameras.

Tvtropes.org talks about two basic examples – “the polaroid punk” – where a picture is taken and put over the camera while the hero goes about his business (you may remember this from the Vatican scene in “Mission Impossible 3”) and the second being “the splice and dice” – looping camera footage (which you may remember from the film “Speed”).

In my version of this trope, instead of doing this with CCTV imagery, we use the same trick against IoT data going back to a console. The specific example I chose was connected agriculture. Some very expensive crops, such as almonds or pistachios, will fail if starved of water, and the lack of water could potentially kill the trees. It can take between 5 and 12 years for new trees to become productive, so irrigation has to be managed carefully.

As farms increasingly rely on the Internet of Things to provide accurate sensing and measurement, as well as control, it is entirely conceivable that this could be attacked for a variety of reasons. An attack could involve quietly monitoring the data being sent by sensors back to the farm, then stealthily taking over that reporting with fake data. Once the hijack has taken place, the attacker could reduce the irrigation or stop it entirely, as long as the farmer is not aware. With data being sent back that looks real, it could be a long time before the attack is discovered, and it may never be detected. The attack may not even need to take place at the sensor end; it may be enough to hijack the reporting console or IoT hub where the data is aggregated.
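The record-then-replay step can be sketched in a few lines. This is a minimal illustration, not a real attack tool: the sensor values, window size and jitter figure are all invented, and a real hijack would sit on the network path between sensors and hub.

```python
import random

def record(sensor_readings, window):
    """Attacker passively captures a window of legitimate readings."""
    return list(sensor_readings[-window:])

def replay(captured, jitter=0.02):
    """Loop captured readings back to the console with small random
    jitter so the stream still looks 'live' to the farmer."""
    while True:
        for value in captured:
            yield value * (1 + random.uniform(-jitter, jitter))

# Legitimate soil-moisture readings (percent), as the hub would see them.
live = [41.2, 40.8, 41.5, 40.9, 41.1, 40.7]

captured = record(live, window=4)
fake_feed = replay(captured)

# After the hijack, the console keeps showing plausible values
# even if the irrigation has actually been switched off.
sample = [next(fake_feed) for _ in range(8)]
print(sample)
```

The replayed values never stray far from the genuine ones, which is exactly why a farmer watching a dashboard would have no reason to walk the fields and check.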

With thousands of acres of land to manage, there is an increasing reliance on the screen of reported data in the office, rather than direct inspection on the ground to corroborate that the data is indeed real. Food security is a real issue, and it is well within the means of nation states to execute such attacks against other countries – or for a targeted attack to be mounted in order to manipulate the market for almonds.

Of course this is just an example, but the integrity of eAgriculture is an example of something which can be directly linked to a nation’s security.

In the military and intelligence domain, countries around the world are taking a more active and forthright approach to cyber security rather than just defending against attacks; this is also known as “offensive cyber”. In December 2017, AFP reported that Colonel Robert Ryan of a Hawaii-based US Cybercom combat team had said the cyber troops’ mission is slightly different to the army’s traditional one: “Not everything is destroy. How can I influence by non-kinetic means? How can I reach up and create confusion and gain control?”. The New York Times had previously reported that Cybercom had been able to imitate or alter Islamic State commanders’ messages so that they directed fighters to places where they could be hit by air strikes.

Just yesterday, the UK’s Chief of the General Staff, General Nick Carter, said that the UK needs to keep up with unorthodox, hybrid warfare encompassing cyber attacks.

Of the attacks that have been attributed to nation states, many have targeted civilian infrastructure – some of them sophisticated, others not so.

A colleague at Oxford University, Ivan Martinovic, has written about the issues with air traffic control data being modified in different types of systems. Many of these systems were created many years ago, and the authors of a joint 2016 paper, “Perception and Reality in Wireless Air Traffic Communications Security”, describe the threat model as “comparatively naïve”. In the same paper, they asked both pilots and air traffic controllers about their positions in hypothetical scenarios. 10.75% of the pilots surveyed said they didn’t know what the effect would be if wrong labels were shown on their in-flight Traffic Collision Avoidance System screens.

These systems are designed to prevent mid-air collisions. 83.3% of air traffic controllers surveyed said there would be a major loss of situational awareness if information or whole targets were selectively missing from an air traffic control radar screen. It is not difficult to imagine the impact of removing planes from either pilot or air traffic controller screens, or the alternative, which is to flood them with data that doesn’t exist, particularly in a busy area like Heathrow and during bad weather. The panic and loss of awareness in a situation like that could cause very severe events. Other such attacks have been theorised against similar systems for ships at sea.

The future danger that we face from this kind of underlying vulnerability is that the person at the computer or controls in the future will not be a person, it will be another computer.

Disconnection from the matrix

Maybe all it takes is an attack to disconnect us from the matrix entirely.

In December 2017, a number of warnings were aired in the media and at events that the Russians may try to digitally starve the UK by cutting the fibre optic cables that connect us to the rest of the world. This is not a unique threat to us; all the countries of the world rely on the interconnectedness of the internet. Some landlocked countries are really at the mercy of some of their neighbours when it comes to internet connections and it can be used as a political weapon in the same ways that access to water and the building of dams is and has been for a number of years.

The Battle for the Atlantic in 1941 was all about cutting off supplies to Britain through U-Boat warfare.

Telecommunications cable-cutting has long been an issue in warfare, but traditionally for different reasons. Barbara Tuchman’s book, The Zimmermann Telegram, explains that in 1914, on the eve of the Great War, sealed orders were opened at Cable & Wireless, directing them to cut and remove a portion of submarine cable in the English Channel. This forced the Germans to use insecure wireless for their messaging.

The situation for submarine cables today is different. Our whole economies are dependent on the resilience of cables that are just a few inches wide. The original UK National Cyber Security Strategy in 2011 stated that around 6% of UK GDP was generated by the internet, but this probably does not capture the true nature of how human beings live their lives today: the downstream reliance on everything from purchasing goods and services to supply chain management, communication and transportation, as well as increasing government services for welfare and taxation; the list is almost endless. Each and every one of these touch points with the internet has an onward real-world effect.

The back-ends of many of these systems are cloud computing services, many of which are hosted in other countries, with very little domestic UK infrastructure to support going it alone. Our reliance is based on a globalised world and it is increasing, just as the world powers shift to a more parochial, protectionist approach.

The concept of “digital sovereignty” is something that governments around the world are happy to promote because it makes them feel that they’re more in control. Indeed the Russians themselves had stress-tested their own networks to check their preparedness and resilience. They failed their own tests, setting themselves on a path to hosting everything in their own country and further balkanising the internet. Russia’s foreign policy in this respect is clear. Foreign Minister Sergey Lavrov has repeatedly stated that Russia wishes to see “control” of the internet through the UN. The country has a long-held paranoia about perceived western control of the internet and wishes to redress the balance of power.

Economic data, the 24 hour news cycle and market shocks

 

Source: https://www.telegraph.co.uk/finance/markets/10013768/Bogus-AP-tweet-about-explosion-at-the-White-House-wipes-billions-off-US-markets.html

In April 2013, the Associated Press’s hacked Twitter account caused a brief, massive stock plunge, apparently wiping around 90 billion pounds off the markets, when it tweeted: “Breaking: Two Explosions in the White House and Barack Obama is injured.”

The market recovered quickly as the White House immediately clarified the situation, but this demonstrates the impact and influence that a trusted news source can have, and the damage that can be done when it is hacked. After the event, AP staff admitted that there had been a targeted phishing campaign against them, but it is unclear who the attacker was. Other news organisations have been regularly targeted in similar ways.

Many automated systems rely on Twitter as a source of “sentiment”, feeding machine learning algorithms that use sentiment analysis for stock market forecasting. As such, much of the sell-off may have been automated, particularly as the “news polarity” of the tweet would have been negative.
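To make “news polarity” concrete, here is a deliberately naive lexicon-based scorer. Real trading systems use trained models and far richer features; the word lists and threshold below are invented for illustration only.

```python
# Toy lexicon-based polarity scorer; the word lists are illustrative.
# Real systems use trained models and much larger sentiment lexicons.
NEGATIVE = {"explosion", "explosions", "injured", "crash", "attack", "dead"}
POSITIVE = {"record", "growth", "wins", "recovery", "surge"}

def polarity(headline):
    """Positive words add 1, negative words subtract 1."""
    words = headline.lower().replace(":", "").split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def trade_signal(headline, threshold=-1):
    """A naive automated trader: sell when polarity is strongly negative."""
    return "SELL" if polarity(headline) <= threshold else "HOLD"

tweet = "Breaking: Two Explosions in the White House and Barack Obama is injured"
print(polarity(tweet), trade_signal(tweet))
```

Even this crude scorer flags the hijacked AP tweet as strongly negative, which is all an automated sell-off needs.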

An interesting paper on the subject of news headlines and their relative sentiment was published in March 2015 by the Federal University of Minas Gerais in Brazil and the Qatar Computing Research Institute. It gives a good description of how the headlines of breaking-news tweets can be scored according to their news polarity.

Source: https://arxiv.org/pdf/1503.07921.pdf

As an aside, they showed that around 65% of Daily Mail articles were negative, but I guess you knew that anyway!


Stock market algorithms have amplified other events too. The 2010 “Flash Crash” caused 500 million dollars of losses in around half an hour. Part of the reason for this was the trading of derivatives – ETFs, or Exchange Traded Funds, which form baskets of other assets. The value of the ETFs became disconnected from the underlying value of the assets.

Navinder Singh Sarao, also known as the “Hound of Hounslow”, modified automated trading software to enable him to make and cancel trades at high frequency, driving prices in a particular direction. The US Justice Department claimed he made 26 million pounds over 5 years. There was another flash crash in August 2015 caused by such high-frequency trades. By this time, “circuit breakers” had been installed – these halted trading over 1,200 times in a single day.

The recent “This is not a drill” missile alert in Hawaii, accidentally sent by an employee, was quickly reversed, but not before many people had panicked. The event happened on a Saturday; had it occurred during the week, it would likely have had some impact on the financial markets. It is not difficult to imagine the havoc that could have been caused if such a system had been compromised through a cyber attack. Indeed, it is likely that this potential has been noted by North Korea.

Source: Twitter

Machine-led attacks

In 2016, at DEFCON in Las Vegas, the world’s biggest hacking conference, I was a witness to history. The final of the DARPA Cyber Grand Challenge, the world’s first all-machine hacking tournament, took place between competing teams of supercomputers. Normally this kind of tournament takes place between competing humans attacking and defending systems. The key element is that this represented a point in time at which everything changed. For years we have seen the evolution of attacks with some level of machine assistance, but this was a whole new ball game. The speed at which situations changed was beyond the rate at which humans could keep up, bearing in mind that these systems had not seen any of these attacks before.

In one case, a vulnerability known as “Heartbleed” was exploited to attack one of the Cyber Grand Challenge supercomputers. It was found, patched and used in an attack against the other machines within a few short minutes. All of this was possible because of the automation, machine learning and artificial intelligence built into each computer.

There is a defence aspect to this too. In the future, we need to consider how we best utilise “the machines” for defensive purposes. The technology available to us now and in the future will enable us to use machine learning at the edge of a network to provide intelligent intrusion prevention and security monitoring systems. Attacks will largely be machine-led, which in a crude way happens now. One can imagine agents at the edge self-healing in a different way – healing from security attacks and then getting that fix back out to other elements of the network.

Team Shellphish, who finished in third place, even open-sourced their code so that other people could develop it further.

The problem for all of us is the constraints on such AIs. If we delegate the ability to respond offensively to agents that can respond more quickly than we can, what bounds do we put in place around that? Does the adversary play by the same ruleset? What if they break the rules? Research using AI to learn different types of games has often put rules in place to make the AI follow the rules of the game, rather than cheat (for example with DeepMind), but it could be programmed to break as many rules as possible in order to succeed, for example to get information from the underlying operating system to win the game.

Then we have AI versus humans. In some respects humans have the advantage – for example, we often have the ability to disconnect: to remove power or to disengage from a computer system. An AI has physical bounds, but those may also be protected from attack by its designer.

As has been seen in games that have been learned by existing, adaptable artificial intelligence, once the rules are fully understood, AIs can easily beat human players.

In September 2017, Russian President Vladimir Putin stated in a speech that “Artificial intelligence is the future, not only for Russia, but for all humankind,”. “It comes with colossal opportunities, but also threats that are difficult to predict. Whoever becomes the leader in this sphere will become the ruler of the world.”

Elon Musk, the head of Tesla and one of a growing number of people concerned about the rise of AI, particularly its military uses, tweeted in response: “Competition for AI superiority at national level most likely cause of WW3 imo”.

A letter sent to the UN in August 2017, signed by the founders of 116 AI and robotics companies across the world, called for a ban on autonomous weapons, which would “permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend”. It is a sad fact that an arms race is already underway, with some autonomous weapons already deployed.

Artificial intelligence will be employed in many different domains, not just the military and cyber.

AI in economics

In economics, artificial intelligence offers humanity some positives in that the complexities of particular problems can be addressed by systems, but all of this relies on good and valid input data. It also relies on the political viewpoint that is programmed into such systems. Transparency in such AIs is going to be crucial when it comes to implementation of policy.

The use of computers for economic modelling has a long history. If you’re interested in this topic, the documentary maker Adam Curtis has made some excellent programmes about it, including the series Pandora’s Box. Most of these attempts have failed – the Soviet, paper-based planned system ended up with ludicrous situations such as trains full of goods going all the way to Vladivostok, only to be turned around and sent back to Moscow. In the United States, science- and data-based solutions were applied to national problems such as the risk of nuclear war, via the RAND Corporation, using von Neumann’s game theory and Wohlstetter’s “fail-safe” strategies, but they didn’t always work. Defence Secretary Robert McNamara’s reliance on data coming back from the ground in the Vietnam War undoubtedly led to bad decisions because of the poor quality of that data. The same is true today. Datasets are often unreliable and inaccurate, but much more so if they have been deliberately altered.

Revolutions will need to take place in the way data is gathered before government systems are ever accurate and reliable, and to do so would also amount to a massive breach of citizen privacy. However, it could be that some corporations are already sitting on much of the data. In 2008, Google announced in a paper in the journal Nature that they were able to “nowcast”, or predict, flu outbreaks based on searches for symptoms and medical products. They were able to make these estimates two weeks earlier than the US Centers for Disease Control. Google Flu Trends ran for a few years, but was shown to fail in a big way in 2013. Corporate systems that operate on such sets of “big data” are often opaque. This can lead to deliberate bias or inaccurate outcomes. In Google’s case, the AI they use has also recently been found to be racist in picture identification.

The UK’s Office for National Statistics operates under a Code of Practice for Official Statistics, with which official statistics must comply (and which is currently under revision). This is also consistent with the UN’s Fundamental Principles of Official Statistics and the European Statistics Code of Practice. It is designed so that the statistics produced are honest and can’t be manipulated to suit a particular government’s views or objectives.

In the future, governments will need to ensure that artificial intelligence based systems operating on statistics to steer policy are compliant with such codes of practice, or that new codes are adopted.

What effect would the manipulation of national statistics have? If governments were making decisions based on falsified or modified data, it could have a profound financial, economic and human impact. At the moment, the safeguards encoded in the Code of Practice rely on the integrity of individuals and their impartiality and objectivity.

As we increasingly rely on computers to make those recommendations, what assurance do we have that they operate in line with expectations and that they have not been compromised? In theory, a long-term adversary could create an entirely different national picture by slightly skewing the input data to national statistics in particular areas, meaning that they bear no relation to the real world.
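How much damage can a "slight" skew do? A back-of-the-envelope sketch: all the figures below are invented, but they show how a small, persistent monthly bias compounds into a very different picture over a few years.

```python
# Sketch of how a small, persistent skew to input data can eventually
# paint a very different national picture. All figures are invented.
true_monthly_values = [100.0] * 60   # five years of a genuinely flat indicator
skew = 0.005                         # adversary adds just 0.5% each month

reported = []
bias = 1.0
for v in true_monthly_values:
    bias *= (1 + skew)               # the skew compounds month on month
    reported.append(v * bias)

drift = reported[-1] - true_monthly_values[-1]
print(f"reported ends at {reported[-1]:.1f}, a drift of {drift:.1f} points")
```

After five years, a statistic that should read 100 is reported at roughly 135 – a 35% distortion built entirely from individually unremarkable monthly nudges.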

When algorithms go wrong

AIs will co-exist with other AIs and this could generate some disturbing results.

During Hurricane Irma in September 2017, automatic pricing algorithms for airline tickets caused prices to rise in line with demand, resulting in exorbitant prices for evacuating residents. Such “yield management systems” are not designed to handle such circumstances and have no concept of ethics or situational awareness. Airlines were forced to backtrack and cap prices manually after a Twitter backlash.

Source: Twitter @LeighDow

In 2011, two third-party Amazon merchants ended up in an automated pricing war over a biology book because of competing algorithms. The price reached over 23.5 million dollars for a book that was worth about 100 dollars. Where multiple interacting algorithms rely on each other’s output data, the end results may be non-deterministic, and may be catastrophic where physical, real-world effects take place based on these failures. Humanity could be put in a position where we cannot control the pace of these combined events, and widespread disruption and destruction could take place in the real world.

Source: https://www.michaeleisen.org/blog/?p=358
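The feedback loop is easy to reproduce. Michael Eisen worked out that one seller repriced at 0.9983 of its rival while the other repriced at 1.270589 of the first; the starting prices below are invented, but the runaway behaviour follows directly from those two multipliers.

```python
# The two repricing rules Michael Eisen inferred: profnath priced at
# 0.9983 of bordeebook, and bordeebook at 1.270589 of profnath.
profnath, bordeebook = 100.0, 100.0   # hypothetical starting prices

cycles = 0
while bordeebook < 23_698_655.93:     # the real peak price Eisen observed
    profnath = 0.9983 * bordeebook    # undercut the rival very slightly
    bordeebook = 1.270589 * profnath  # mark up over the rival
    cycles += 1

print(f"${bordeebook:,.2f} after {cycles} repricing cycles")
```

Because the product of the two multipliers is about 1.268, each full cycle inflates the price by roughly 27%, so a 100-dollar book passes the 23-million-dollar mark in a few dozen cycles – no human in the loop, and neither algorithm doing anything obviously “wrong” on its own.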

The human rebellion

As machines start to take over complex functions such as driving, we have already started to see humans rebel.

Artist James Bridle demonstrated this by drawing an unbroken circle within a broken one, representing a line a car could cross. This created a situation which a self-driving car could theoretically enter but couldn’t get out of – a trap. This of course was art, but researchers at a group of US universities worked out how to trick sign-recognition algorithms simply by putting stickers on stop signs.

Source: https://www.autoblog.com/2017/08/04/self-driving-car-sign-hack-stickers/

It is certainly likely that this real-world gaming of artificial intelligence is going to happen and at least in the early days of AI, it will probably be easy to do.

Nudge and Dark Patterns

Governments around the world are particular fans of “nudge techniques”; imagine if they were able to design AI to be adaptable with nudge – or social engineering, as we call it in the security world. What I see with technology today is a lot of bad nudge: the manipulation of smartphone or website users into behaviours that are not beneficial to them, but are beneficial to the system provider. Such techniques are widespread and are known as “dark patterns”, a concept originally conceived by user experience expert Harry Brignull. So what is to stop these techniques being inserted into, or used against, artificial intelligence in order to game the user or to nudge the system in a different direction? Some of this used to be known as “opinion control”, and in countries like China the government is very keen to avoid situations like the Arab Spring, which would bring chaos to the country. Maintaining stability and preventing free thinking and freedom of expression are generally the order of the day, and AI will assist in those aims.

The danger of the Digital Object Architecture

One solution that has been proposed to the problem of falsified data on the internet is to use something called the Digital Object Architecture. In theory this technology would allow for the identification of every single object and allow for full traceability and accountability. It is difficult to imagine, but a future object-based internet could make the existing internet look minuscule.

It is a technology being promoted by the ITU, a specialised UN agency, which is meeting at this very moment to discuss it. The main proponents are authoritarian regimes such as China, Russia and Saudi Arabia. This is because, as I mentioned earlier, it brings “control” to the internet. This is clearly the polar opposite of freedom. It is an extremely pervasive technology in theory, and one that is also largely untested.

Shortly before Robert Mugabe’s detention in Zimbabwe, there had been issues with food shortages and concerns about the state of the currency which had triggered panic buying and a currency crash. These issues were genuine, they were not false, but it certainly wasn’t convenient for Zimbabwe or Mugabe. He reportedly announced at a bi-national meeting with South Africa that cyber technology had been abused to “undermine our economies”. The South African communications minister clarified that Mugabe intended to use the Digital Object Architecture. The Minister said, “It helps us identify the bad guys. The Zimbabweans are interested in using this.”

So if we look at the countries that are fans of this technology, it seems that it is about controlling messages and controlling truth. The Digital Object Architecture is being heavily promoted by countries that appear to fear their own people, seeing the technology as a way to maintain order and control and to repress free thinking and freedom of expression. This is quite a depressing situation – the technology itself is not necessarily the problem; it is the way it is used, who controls it, and the fact that it affords citizens no privacy or anonymity.

So we must seek other solutions that maintain the confidentiality, integrity and authenticity of data in order to support widespread artificial intelligence use in economies – technology that cannot be abused to repress human individuality and freedom. I have to mention blockchain here. I have read a number of articles that attempt to paint blockchains as the perfect solution to some of the problems I’ve outlined, in relation to creating some kind of federated trust across sensors and IoT systems. The problems are very real, but I feel that using blockchains to solve them creates a number of very different problems, including scalability, so they are by no means a panacea.

Conclusion

If we can’t trust governments around the world not to abuse their own citizens, and if those same governments are pushing ahead with artificial intelligence to retain and gain greater power, how can we possibly keep AI in check?

Are we inevitably on the path to the destruction of humanity? Even if governments choose to regulate and make international agreements over the use of weaponry, there will always be states that choose to ignore that and make their artificial intelligence break the rules – it would be a battle between a completely ruthless AI and one with a metaphorical hand tied behind its back.

In an age where attribution of cyber attacks is extremely difficult, how can we develop systems that prevent the gaming of automated decision-making through “false flag” techniques – adversaries masquerading as trusted friends while meddling in and influencing decisions? It is entirely possible that attribution will become harder still, because the origins of systemic failure in economies will be difficult to detect.

How do we manage the “balance of terror” (to use a nuclear age term) between nation states? What will happen when we cannot rely on anything in the world being real?

These are the problems that we need to address now as a collective global society, to retain confidence and trust in data and also the sensors and mechanisms that gather such data. There could be many possible algorithmic straws to break the camel’s back.

Artificial Intelligence is here to stay, but ethical and humane cyber security and robust engineering discipline are part of keeping it sane. Maybe, just maybe, artificial intelligence will be used for the good of humanity and for solving the world’s problems, rather than for the last war we’ll ever have.

Thank you.

When the “Apple Encryption Issue” reached Piers Morgan

How can we have an intelligent and reasoned debate about mobile device forensics?

I woke up early this morning after getting back late from this year’s Mobile World Congress in Barcelona. It has been a long week and I’ve been moderating and speaking at various events on cyber security and encryption throughout. It won’t have escaped anyone’s notice that the “Apple encryption issue”, as everyone seems to be referring to it, has been at the top of the news, and I have been asked what I think pretty much every day this week. Late last night I saw a Twitter spat kicking off between comedy writer and director Graham Linehan and Piers Morgan on the topic, but went to bed, exhausted from the week.

It was still being talked about this morning. My friend Pat Walshe, who is one of the world’s leading mobile industry privacy specialists, had quoted a tweet from Piers Morgan.

Ironically, Piers Morgan himself has been accused of overseeing the hacking of phones, something which he has repeatedly denied, despite Mirror Group Newspapers admitting that some stories may have been obtained by illegal means during his tenure and having recently paid compensation to victims of phone (voicemail) hacking, a topic about which I have written in the past.

This week I’ll be up at York St John University, where they’ve asked me to teach cyber security to their undergraduate computer scientists. The reason I agreed to teach there is that they highly value ethical concerns, something I will be weaving into all our discussions this week. The biggest question these students will have is the “what would you do?” scenario in relation to the San Bernardino case.

The moral aspects have been widely debated, with Apple’s Tim Cook bringing the debate, in my view, to a distasteful low by somehow linking the issue to cancer. I’ve tried to stay out of the debate until now because it has become a circus: people who don’t understand the technical aspects pontificating about how easy it is to break into devices, versus encryption activists who won’t accept anything less than “encrypt all the things” (some of whom also don’t understand the technical bits). I sincerely hope that there isn’t a backlash on me here from either side for just voicing an opinion – some friends of mine have deliberately stayed quiet because of this. I’m exercising my right to free speech and I hope people respect that.

The truth is, this is not a question of technology, engineering and encryption; it is a question of policy and what we as a society want and expect. If a member of my family were murdered, would I expect the police to be able to do their job and investigate everything on that person’s phone? Absolutely. Conversely, if I were accused of a crime I didn’t commit and wasn’t in a position to hand over the password (see Matthew Green’s muddy puddle test), would I also want them to do it? Of course. It is called justice.

Dealing with the world as it is

The mobile phones and digital devices of today replace all the scraps of notepaper, letters, diaries and pictures that would previously have been left around our lives. If someone is murdered, or something horrific happens to someone, this information can be used to enable the lawful investigation of a crime. In the past, the Scenes of Crime Officer and the defence team would have examined all of these items and ultimately presented the evidence in court, contributing to a case for or against. Now consider today’s world. Everything is on our phone: our diaries and notes are digital, our pictures are on our phones, our letters are emails or WhatsApp messages. So at the scene of a crime, the police may literally be faced with a body and a phone. How is the crime solved and how is justice done? The digital forensic data is the case.

Remember, someone who has actually committed a crime is probably going to say they didn’t do it. The phone data itself is usually more reliable than witness and defendant testimony in telling the story of what actually happened, and criminals know that. I’ve been involved with digital forensics for mobile devices in the past and have seen first-hand the conviction of criminals who continually denied having committed a serious crime, despite their phone data stating otherwise. This has brought redress to their victims’ families and justice for someone who can no longer speak.

There is no easy answer

On the other side, of course, we carry these objects around with us every day and the information on them can be intensely private. We don’t want criminals or strangers to steal that information. The counter-argument is that the mechanisms and methods created to facilitate access to encrypted material would fall into the hands of the bad guys. And this is the challenge we face – there is absolutely no easy answer. People also worry that authoritarian regimes will use the same tools to further oppress their citizens and make it easier for the state to set people up. Sadly, I think that is going to happen anyway in some of those places, with or without this issue being in play.

US companies are also fighting hard to sell products globally, and they need to recover their export position following the Snowden revelations. It is in their business interests to be seen to fight these orders in order to sell product. It appears that Tim Cook wants to reinforce Apple’s privacy marketing message through this fight. Other, less scrupulous countries are probably rubbing their hands in glee watching this show, whilst locally banning encryption, knowing that they’ll continue doing that and attempting to block US-made technology whatever the outcome of the case.

Hacking around

Even now, I have seen tweets from iPhone hackers who are more than capable of attempting to solve this current case, and no doubt they would gain significantly from doing so financially – because the method they develop could potentially be transferable.

This is the same battle that my colleagues in the mobile world fight on a daily basis – a hole is found and exploited, we fix it; a continual technological arms race to see who can do the better job. Piers Morgan has a point, just badly put – given enough time, effort and money, the San Bernardino device and its encryption could be broken into; it will just take a hell of a lot of all three. It won’t be broken by a guy in a shop on Tottenham Court Road (see my talk on the history of mobile phone hacking to understand this a bit more).

Something that has not been discussed is that we now have a ludicrous situation whereby private forensic companies claim to be ‘developing’ methods to get into mobile handsets, when in actual fact many of them simply re-package hacking and rooting tools and pass them off as their own solutions, or purchase exploits from black and grey markets at premium prices. This is very frustrating for the mobile industry, as it contributes to security problems. Meanwhile, the Police are being forced to try to do their jobs with not just one hand tied behind their back – it now seems like two. So what should we do about that? And what do we consider to be “forensically certified” if the tools are based on fairly dirty hacks?

How do we solve the problem?

We as democratic societies ask and expect our Police forces to be able to investigate crimes under a legal framework that we all accept, via the people we elect to Parliament or Senate. If the law needs to be tested, then that should happen through a court – which is exactly what is happening now in the US. What we’re seeing is democracy in action; it’s messy, but at least people in the US and the UK have that option. Many people around the world do not.
On the technical side, we will also need to consider the multitude of connected devices coming to market – smart homes, connected cars and things we haven’t even thought of yet – as part of the rapidly growing “Internet of Things”. I hate to say it, but digital forensics is going to become ever more complex, and perhaps the privacy issues for individuals will centre on what a few large technology companies are doing behind your back with your own data, rather than on the Police trying to do their job with a legal warrant. Other companies need to be ready to step up to ensure consumers are not the product.
I don’t have a clear solution to the overall issue of encrypted devices, and I don’t think you’ll thank me for writing another thousand words on the topic of key escrow. Most of the time I respond to people by saying it is significantly complex. The issues we are wrestling with now do need to be debated, but that debate needs to be intellectually sound; unfortunately we are hearing a lot from people with loud voices, and less from the people who really understand. The students I’m meeting next week will be not only our future engineers, but possibly future leaders of companies and even politicians, so it is important that they understand every angle. It is their future, and every other young person’s, that matters in the final decision over San Bernardino.

Personally, I just hope that I don’t keep getting angry and end up sat in my dressing gown until lunchtime writing about tweets I saw at breakfast time.