Last year the Edinburgh branch of the British Computer Society (BCS) invited me to give the Sidney Michaelson Memorial Lecture at the 2017 edition of the Edinburgh International Science Festival (EISF). I felt honoured, as Sidney was the first professor appointed by the Computer Science department at the University of Edinburgh. I was also grateful to be given the freedom to pick the topic of my talk, since at the time I was embarking on a new research adventure with fabulous people. Together with Matthias Hollick's team at TU Darmstadt, we were gaining interesting insights into the operation and flaws of wearables. I was also visiting Guevara Noubir at Northeastern University in Boston and was beginning to investigate the privacy risks posed by wearable devices. Such issues were resonating with more and more people, triggered perhaps in part by the revelations of Edward Snowden. I should also thank my partner for encouraging me to speak publicly about the discoveries I was making.
The overall experience of talking at the EISF in April is well worth remembering. My hosts Prof. Bill Buchanan, Margus Lind (now my postgraduate student!), and the entire festival team were very kind, and the audience asked excellent questions that led to interesting discussions. They also inspired me to continue my work. In preparation for my talk, partly in airports and on flights, I scribbled some notes to make sure I would not ramble. In a nutshell, I wanted to give the audience a good taste of what is currently achievable in terms of surveillance with inexpensive hardware, but also of what steps can be taken to preserve user privacy. I later decided not to discard these notes, but to turn them into the blog post you are reading now. It should have seen the light months ago, but as the year is coming to an end, it's no time to fret. So let's get started.
Why this is a good time to discuss wearables privacy
Internet-connected wearable devices have gained massive popularity in recent years, as their cost continues to drop and users increasingly rely on them to improve the quality and efficiency of their daily lives. For instance, a smart watch can be used today to answer calls or even pay for coffee. The gadget is effectively a small computer that, beyond timekeeping, has a touch screen and a set of sensors, runs a miniature operating system, and hosts applications ranging from text messaging tools to calendars and music players.
The concept is not exactly new! More than 40 years ago the first calculator watches with LED displays were launched by Pulsar, albeit at $4,000 per unit. They were obviously not Internet connected, since at the time the research community was only beginning work on packet switching, and wireless data communication was limited to DARPA experiments in the San Francisco Bay Area. Fast forward to the present day: we now call wearable technology any electronic device that measures some parameter(s) and/or provides a limited function, is worn as an accessory or implanted in the body, and is in some way connected to the Internet. By far the most popular wearables today are wristbands that track users' physical activity and sleep habits. Analysts estimate hundreds of millions of such devices will be sold by 2020, and in the long run there is hope they could help healthcare professionals make early diagnoses or monitor patients diagnosed with certain conditions.
Smart contact lenses such as the one pictured above are also under development, aiming to monitor blood glucose levels in patients who suffer from diabetes. The key feature is non-intrusiveness: these lenses will continuously measure the chemical contents of a user's tears and notify them via a smartphone app when they should take their insulin dose (or even trigger a wireless insulin pump as needed!).
This is exciting, yet the media frequently report major security vulnerabilities discovered in wearable devices, which raises legitimate concerns about their trustworthiness and privacy. For instance, researchers demonstrated that the microphone and accelerometer of smart watches can be used to infer ATM PINs. We also showed recently that the protocol governing the communication between fitness trackers and the cloud can be compromised to leak private information.
It is thus only natural to ask what the root causes of privacy risks are and why we only ask these questions now. There are several reasons:
- First, these devices have rather limited computational capabilities, therefore implementing robust security mechanisms is hard. The user interface is also minimal, often consisting only of a simple button or a tiny display.
- Second, system designs are often negligent (or beyond the vendor's control once steps of the production process are outsourced), and in the rush to be first on the market, security remains an afterthought.
- Third, there is currently a shortage of developers with an intimate understanding of embedded systems, wireless networking, and security alike.
I argue now is the right time to discuss privacy, since tens of millions of such gadgets are sold yearly, and their societal potential can only be fulfilled if users not only recognise their utility, but also feel confident that they cannot be exploited for malicious purposes.
On wireless channels, anyone can hear you scream
Let us dive into some technical aspects, in order to better understand where privacy risks are rooted. The picture below summarises the typical end-to-end communication paradigm between wearables and the vendor’s application server.
Gadgets that collect measurements periodically send these over Bluetooth or Wi-Fi to the user’s smartphone or tablet. The mobile device then encapsulates activity reports into network packets and forwards these to a server over the Internet. Most frequently this is done via a wireless access network at home or at work. The server computes and stores statistics associated with individual user accounts, and sends back summaries to be displayed onto dashboards within apps that run on mobile devices.
The communication channel between the wearable device and the user's phone or tablet is wireless and inherently broadcast, so risks of eavesdropping and interception of sensitive information exist. It also remains questionable what the mobile app does with the data collected by wearable devices. A malicious individual could further perform man-in-the-middle (MITM) attacks and intercept user data as it leaves the wireless access point. Moreover, if the server does not implement rigorous security, an attacker could impersonate a victim and retrieve information kept with their account.
Private information leaks aside, there are also risks of surveillance if an individual can be linked to a wireless device. In particular, Wi-Fi devices periodically send broadcast probe requests to discover wireless APs within range. Such probe packets carry the unique hardware (MAC) address that identifies the sender. This address is also visible in the clear in data packets whose payload is encrypted. This brings the risk of identifying individual users and potentially the social relations among them.
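To make the tracking risk concrete, here is a minimal Python sketch (all addresses, places, and times are invented for illustration) of how a static hardware address lets a passive observer stitch separate sightings into a movement profile:

```python
from collections import defaultdict

# Hypothetical sightings of Wi-Fi probe requests: (MAC address, location,
# time of day). All values are made up for illustration.
sightings = [
    ("a4:5e:60:01:02:03", "cafe",   "09:12"),
    ("d0:1b:49:aa:bb:cc", "cafe",   "09:15"),
    ("a4:5e:60:01:02:03", "office", "10:40"),
    ("a4:5e:60:01:02:03", "gym",    "18:05"),
]

# Because the hardware address never changes, a passive observer can
# reconstruct a device's movements simply by grouping sightings per MAC.
trail = defaultdict(list)
for mac, place, time in sightings:
    trail[mac].append((time, place))

print(trail["a4:5e:60:01:02:03"])
```

No decryption is involved at any point: the addresses alone, collected at a few vantage points, are enough to build the trail.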
Things are slightly different when Bluetooth or Bluetooth Low Energy (BLE) is employed for communication. In particular, Bluetooth devices do not transmit on a single channel (frequency) as is the case with Wi-Fi. Instead, a communicating pair continuously hops across different channels according to a pseudo-random sequence upon which they agree beforehand. This makes packet sniffing considerably more challenging, and one may be tempted to believe the surveillance risk is less critical as well. The majority of devices are however 'discoverable', meaning they will respond to so-called 'inquiry' messages, thereby revealing their hardware address. Some may become undiscoverable after the pairing process, though it may still be possible to intercept their packets by changing channels intelligently or listening on multiple channels simultaneously; this remains an open research problem.
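The hopping idea can be sketched as follows. Note this toy uses Python's generic PRNG purely as an illustration: the real Bluetooth algorithm derives each channel from the master's address and clock, not from a seeded library generator.

```python
import random

def hop_sequence(shared_state, n_channels=79, length=16):
    """Toy pseudo-random hopping sequence over Bluetooth's 79 channels.
    Illustration only: real Bluetooth computes the channel from the
    master's address and clock, not from a generic seeded PRNG."""
    rng = random.Random(shared_state)
    return [rng.randrange(n_channels) for _ in range(length)]

# Both paired devices derive the same sequence from the shared state,
# so they meet on the same channel at each hop...
shared_state = 0xC0FFEE  # illustrative stand-in for address + clock
assert hop_sequence(shared_state) == hop_sequence(shared_state)

# ...while a sniffer without that state cannot predict the next channel.
print(hop_sequence(shared_state)[:5])
```

This is why a single-channel sniffer only ever catches a small fraction of the traffic, and why intercepting a non-discoverable pair requires either recovering the shared state or listening on many channels at once.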
BLE was first marketed as 'Bluetooth Smart', which from a privacy perspective was misleading: BLE employs dedicated channels on which devices periodically advertise their presence, so users may once again be susceptible to surveillance through the unique addresses of their devices.
With the right tools you can do (almost) anything
Of course one may ask how difficult it is to perform any of these attacks, whether expensive tools are required, or whether compromising user privacy demands solid expertise. Regrettably, such attacks are feasible with affordable open-source hardware and free software tools, particularly those developed for the Linux OS. This includes tools shipped with the official Linux Bluetooth stack, such as hcitool, or the Ubertooth suite, which can sniff data packets exchanged by BT/BLE devices within range and then extract part of their addresses.
Importantly, when the user’s smartphone is not in range, switched off, or simply in flight mode, wearable devices become easily discoverable. I put the open-source tools to work in such a scenario while on my way to Milan in March. Within less than one minute I was able to discover 13 devices, 4 of them clearly smart wristbands.
MITM attacks take more effort to mount. However, once the user accepts the certificate pushed by a rogue Wi-Fi router when connecting, information transmitted over HTTPS and thus presumably secured can be intercepted. If the victim initiates account pairing, the attacker can obtain information about the user's mobile device type, the email address and password associated with the account, and potentially sensitive personal details such as the user's weight and body mass index (BMI).
When the victim synchronised the tracker with the server, in the case of devices that do not encrypt the payload end-to-end, we were able to retrieve personal details including the distance travelled, calories consumed, and how long the user had been active.
Lessons to be learned
By experimenting with open-source tools and reverse-engineering the operation of fitness trackers, we have learned several important lessons.
Firstly, as long as devices continue to work with uniquely identifiable hardware addresses, users will remain traceable. One could even imagine a crowdsourced service by which the location of an individual is tracked if the address corresponding to one of their wearable devices is known; this address is frequently printed on the packaging of the device. The good news is that recent Bluetooth amendments enable manufacturers to periodically change the address of a device in order to improve privacy. This is not a mandatory feature, and therefore at the moment only seldom implemented. Address randomisation is supported for Wi-Fi with Windows 10, iOS 8, and Android 6, while changing a NIC's address has always been easy in Linux. There may be circumstances where address randomisation is impractical, e.g. in corporate environments where administrators expect devices to always be identifiable by the same addresses. The not-so-good news is that device fingerprinting, using signatures built upon distinctive features of the radio signals devices emit, suggests address randomisation may soon be insufficient.
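As an illustration of what address randomisation does under the hood, the following sketch (the helper name is mine, not from any OS API) generates the kind of random locally-administered MAC address a randomising operating system would substitute for the real one:

```python
import random

def random_private_mac():
    """Return a random locally-administered unicast MAC address,
    mimicking what OS-level address randomisation does."""
    octets = [random.randrange(256) for _ in range(6)]
    # In the first octet, set the 'locally administered' bit (0x02) so the
    # address cannot clash with a manufacturer-assigned one, and clear the
    # 'multicast' bit (0x01) to keep it a valid unicast address.
    octets[0] = (octets[0] | 0x02) & 0xFE
    return ":".join(f"{o:02x}" for o in octets)

print(random_private_mac())
```

A device that draws a fresh address like this for every scan cannot be linked across sightings by its address alone, which is precisely what defeats the crowdsourced-tracking scenario above.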
Secondly, users should consider whether connecting to a 'free' Wi-Fi hotspot entails MITM risks. Likewise, users should always change the default passwords on the devices they manage; otherwise this is the perfect way to have a device recruited into a botnet (see Mirai).
Thirdly, dance like nobody's watching, encrypt like everyone is. There is only so much encryption one can add without modifying device firmware, so if connecting through Wi-Fi networks that you do not necessarily trust, set up a Virtual Private Network (VPN) first. And if you plan to buy a wearable device, do some research first and understand which one might be more secure.
Privacy can improve only if everyone cares
It is only fair to ask in the end:
once privacy or security issues have been identified, who is responsible for patching the software and remedying these?
The manufacturers have in-depth knowledge of the hardware and software, and they should certainly be the ones driving the effort of fixing security problems. However, more investment in training developers is needed, and the code they produce should be rigorously verified. Large retailers could also prevent the release of devices with serious flaws, by performing penetration testing and working with vendors well ahead of distribution. Equally, when new software/firmware upgrades are made available, device owners should install them. Manufacturers have a duty to ensure the update process does not appear complicated to users with limited technology knowledge, understanding that users often lack patience. But what if a vendor goes into liquidation (as in the recent example of Jawbone) and patching vulnerabilities becomes impossible? That scenario is no longer hypothetical, and it is something regulators, vendors, and researchers should work together to avoid.