The Misuse of Surveillance Tech for Abuse: The Case of Apple’s AirTags

 

There are many legitimate reasons to develop and use technologies that rely on your location to facilitate surveillance, such as GPS tracking. We might think of any number of apps we are accustomed to using to track our movements when getting from point A to point B. Whether it's Maps, Waze, Strava, or AllTrails, location-based apps enable a form of daily surveillance at which most of us don't bat an eyelid. The past couple of years have seen an uptick in the use of surveillance technology for practical and personal tracking. For example, Apple introduced its AirTag technology in April 2021 as a super easy way to keep track of your stuff. How? The AirTag, a small, button-shaped tracking device, sends out Bluetooth signals that can be picked up by nearby Apple devices in the Find My network – an iPhone or a laptop, whether yours or a stranger's – which relay the AirTag's precise location back to its owner. The idea is that by attaching the innocuous-looking device to your keys or dropping one in your backpack, you can keep constant track of your belongings.
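To make that mechanism a little more concrete, here is a minimal Python sketch of how a crowdsourced "finder network" of this kind might work in principle: a tag broadcasts an identifier, nearby phones report sightings, and the owner queries those sightings later. The class names, fields, and plaintext reports are illustrative assumptions only – Apple's actual Find My protocol, for instance, encrypts location reports end to end – but the sketch shows why a tiny, battery-powered tag can be located almost anywhere there are iPhones nearby.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Hypothetical, simplified model of a crowdsourced "finder network":
# a tag broadcasts a rotating identifier over Bluetooth, any nearby phone
# in the network reports an (identifier, location, time) sighting, and the
# owner later looks up the latest sighting for their tag. All names here
# are illustrative; this is not Apple's actual protocol.

@dataclass
class Sighting:
    tag_id: str          # rotating public identifier broadcast by the tag
    latitude: float
    longitude: float
    seen_at: datetime

class FinderNetwork:
    """Toy relay server that stores sightings reported by nearby devices."""

    def __init__(self) -> None:
        self._sightings: list[Sighting] = []

    def report(self, sighting: Sighting) -> None:
        # In the real system the report would be end-to-end encrypted so the
        # relay cannot read locations; omitted here for brevity.
        self._sightings.append(sighting)

    def locate(self, tag_id: str) -> Sighting | None:
        matches = [s for s in self._sightings if s.tag_id == tag_id]
        return max(matches, key=lambda s: s.seen_at) if matches else None

# Example: a passer-by's phone hears the tag and relays its own location.
network = FinderNetwork()
network.report(Sighting("tag-abc123", 40.7128, -74.0060,
                        datetime.now(timezone.utc)))
print(network.locate("tag-abc123"))
```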

Attaching tiny trackers to your valuables is not an idea invented by Apple. Before the AirTag came the Bluetooth tracker Tile, a similar finder of lost items that was acquired in 2021 by the family location-sharing app Life360. The marriage between apps like Tile and Life360 – which has been marketed as a way for parents to use tracking capabilities to keep children safe – signals that these kinds of mini tracking devices are being used not just to monitor things, but also people.

Indeed, just this week, the Wall Street Journal reported that caregivers are planting Apple's AirTags in the wallets and on the keys of people with dementia to keep tabs on their locations. Despite Apple clarifying that the AirTag is meant to track property, not people, it is clear that the technology is nonetheless being used for this purpose. And while tracking dementia patients may seem ethically justifiable in spite of privacy concerns, there is an even darker – and wholly unjustifiable – side to how AirTags are being used to track people. The devices, created to track items that might be lost or stolen, are also being planted on victims of stalking.

The repurposing of AirTags as stalkerware makes them one of many innocuous technologies that have been turned to gender-based abuse. Although stalking is a gender-neutral crime in theory, in practice 78% of stalking victims are female. In the U.S., over 1 million women and 370,990 men are estimated to be stalked annually. Moreover, the use of AirTags to stalk and track victims also facilitates intimate partner abuse, which is often related to stalking: 60.8% of female stalking victims and 43.5% of male victims report being stalked by a current or former intimate partner. Recent evidence from law enforcement suggests that women are, indeed, reporting being tracked with AirTags by current and former partners, husbands, bosses, and other individuals they knew, intimately or otherwise.

Apple has condemned such malicious uses of its AirTags and has implemented built-in protections to discourage unwanted tracking, including a notification system that alerts you if someone has placed an unrecognized AirTag on your person. However, many remained concerned that these notifications were only available to iPhone users, leaving Android users, for example, unaware that they could be the victims of AirTag surveillance. In response to complaints from experts like Eva Galperin, who has pioneered work on stalkerware and digital privacy for victims of abuse, Apple launched the Android app Tracker Detect, which can alert Android users when AirTags in Apple's Find My network are recording their movements.
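To give a sense of what such an alert involves, here is a rough Python sketch of the underlying idea: flag any unfamiliar tag identifier that keeps showing up in Bluetooth scans over time and across places. The thresholds, function name, and data shapes are assumptions made for illustration; they are not the actual detection rules used by Apple's notifications or by Tracker Detect.

```python
from collections import defaultdict
from datetime import datetime, timedelta

# Hypothetical detection heuristic in the spirit of "unknown tracker" alerts:
# if the same unfamiliar tag identifier keeps turning up in Bluetooth scans
# across different places and times, warn the user. Thresholds are assumed.

ALERT_AFTER = timedelta(hours=2)   # assumed: how long a stranger's tag must follow you
MIN_DISTINCT_PLACES = 3            # assumed: how many separate locations count as "following"

def should_alert(scans, known_tags):
    """scans: iterable of (tag_id, place, timestamp); known_tags: tags you own."""
    first_seen = {}
    places = defaultdict(set)
    for tag_id, place, ts in scans:
        if tag_id in known_tags:
            continue                       # ignore your own tags
        first_seen.setdefault(tag_id, ts)
        places[tag_id].add(place)
        followed_long_enough = ts - first_seen[tag_id] >= ALERT_AFTER
        if followed_long_enough and len(places[tag_id]) >= MIN_DISTINCT_PLACES:
            return tag_id                  # this tag appears to be traveling with the user
    return None

# Example: an unfamiliar tag shows up at home, on the commute, and at work.
t0 = datetime(2022, 2, 1, 8, 0)
scans = [
    ("tag-xyz", "home",   t0),
    ("tag-xyz", "train",  t0 + timedelta(hours=1)),
    ("tag-xyz", "office", t0 + timedelta(hours=3)),
]
print(should_alert(scans, known_tags={"tag-mine"}))   # -> "tag-xyz"
```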

It is encouraging to see the impact that activists and experts like Galperin, as well as organizations like the Electronic Frontier Foundation and the Coalition Against Stalkerware, have had in bolstering the safety of Apple's AirTags. Apple originally designed AirTags to make a noise only after being separated from their owner for three days, and Galperin, among others, highlighted that this design choice still enabled "three free days of stalking, followed by a warning sound that may be easily missed." Marking another success for activists concerned about stalkerware, Apple recently unveiled an update ensuring that this warning sound is played at a random time between 8 and 24 hours after an AirTag is separated from its owner. Apple has also reaffirmed its commitment to work with law enforcement to address instances of AirTag misuse.
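As a small illustration of that change, the sketch below (using an assumed function name) schedules the warning sound at a uniformly random point between 8 and 24 hours after separation, rather than after a fixed three-day wait.

```python
import random
from datetime import datetime, timedelta, timezone

# Illustrative sketch of the updated behavior described above: instead of a
# fixed three-day delay, a separated tag picks a random moment between 8 and
# 24 hours after it last saw its owner to play its warning sound. The function
# name and structure are assumptions for illustration only.

def schedule_warning_sound(separated_at: datetime) -> datetime:
    delay_hours = random.uniform(8, 24)
    return separated_at + timedelta(hours=delay_hours)

print(schedule_warning_sound(datetime.now(timezone.utc)))
```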

Changes like this are an important step towards ensuring that surveillance technologies' privacy protections are designed with the victims most vulnerable to their abuse in mind. But privacy by design is just one way of ensuring that future surveillance technologies account for the worst-case scenarios of their misuse. In particular, new tracking technologies should be developed in consultation with the stakeholders, experts, and activists who are best equipped to understand the needs of those for whom being tracked poses grave risks.

Finally, technologists must be held accountable for releasing technologies without first considering the possibility that they could be wielded for abusive purposes. Increasingly, regulatory proposals are being developed and passed that would require developers, deployers, and procurers of AI systems to assess the intended benefits, harms, and impacts of their technologies through impact assessments and risk management tools. As standards for the assessment and auditing of technologies continue to emerge, we can expect a greater degree of transparency around the risk assessment and mitigation strategies that underpin surveillance technologies like Apple's AirTags.

 