How our digital devices are putting our right to privacy at risk


We live in a digitally connected world that has brought undeniable personal benefits. I can barely recall the pre-Google Maps era, but it was far less convenient to navigate unfamiliar places without a Siri-enabled smartphone (and/or Apple CarPlay). We use fitness tracking apps, our home appliances are increasingly digitally connected, and many homes have security systems like Nest cameras or home assistants like the Alexa-powered Amazon Echo. But what are we giving up for all this digital convenience? We are creating a huge amount of private personal data on a daily basis, and yet, legally, it’s unclear when and how that data can be turned against us by law enforcement and the judicial system.

George Washington University law professor Andrew Guthrie Ferguson tackles that knotty question in his new book, Your Data Will Be Used Against You: Policing in the Age of Self-Surveillance. Ferguson is an expert on the emergence of new surveillance technologies, policing, and criminal justice. His 2018 book, The Rise of Big Data Policing, covered the first real experiments with data-driven policing, predictive policing, and what were then new forms of camera surveillance. For this latest work, Ferguson wanted to focus specifically on what he calls self-surveillance: how the data we create potentially exposes us to incrimination, because there are so few laws in place to regulate how police and prosecutors can access and use that data.

“I liken this sort of police-driven self-surveillance to democratically mediated self-surveillance,” Ferguson told Ars. “It’s still self-surveillance with our tax dollars and everything else, but we are also creating nets of smart devices and surveillance devices in our homes, in our cars, in our worlds. And I don’t think we’ve really processed how all of that information is available as evidence and can be used against us for good or bad, depending on the sort of political winds and whims of who’s in charge. We’re seeing today how that vulnerability can be weaponized by a government that wants to use it.”

Ars caught up with Ferguson to learn more.

Ars Technica: You open with an anecdote of asking students how many use the Google Maps app, and they all raise their hands. That’s true for most of us. We rely heavily on these tools now. 

Andrew Guthrie Ferguson: I don’t want the book to be a scolding book, or say you shouldn’t have a Ring doorbell camera on your front door and you shouldn’t have an Echo in your home. I want people to just see the duality of data: smart devices are surveillance devices and you are literally purchasing something to surveil you. You think that the cost and benefits work in your favor, but let’s make sure you got that calculation right. Maybe you keep that same calculation, or maybe you see the vulnerability.

There are certain groups of people that have always been targeted by police surveillance. The subtitle of my first book was Surveillance, Race, and the Future of Law Enforcement, and that reality hasn’t changed. But the aperture of surveillance has expanded to cover people who are ordinarily more privileged. Now I think everyone is starting to see, “Wait a minute, this data on my doorbell camera in my home, on my email, could be used if I suddenly become the person that the government wants to target.” That could be protesters, dissenters, journalists, scientists, you name it. People are now seeing the vulnerability of a world that wasn’t really based on laws; it was based on norms of prosecutorial discretion, and that’s sort of fallen away.

Ars Technica: Is our justice system prepared to grapple with the implications of all this data we’re collecting ourselves, particularly when it comes to interpretations of the Fourth Amendment? 

Andrew Guthrie Ferguson: I’m a law professor and I teach the Fourth Amendment. I teach constitutional criminal procedure every year. So I think about it a lot. I think about the intersection of new technology and old law. We have this document ratified in 1791 that says that the government cannot unreasonably search or seize our persons, papers, homes, or effects. We need to give new life to that in a digital age, a world where the exposure we face is dramatically different from what it was at that time. Yet some of the same principles are very real. The founding fathers were concerned about a power of general rummaging, where customs agents could go into your home and see what you were doing, whether you were writing treasonous missives against the king. They were concerned about giving the government that power because they knew it would be abused.

So some of the old ideas of the Fourth Amendment have a new resonance in this modern age, and we’re watching the courts trying to adapt essentially an analog set of laws to new technologies. When I teach criminal procedure, I have to teach old technologies. One of the famous cases involving the third-party doctrine involves microfiche in a bank. You have to tell students what microfiche is and why that was important. The seminal case on the Fourth Amendment reasonable expectation of privacy came from a case where the FBI was surveilling a payphone—and I have to explain what a payphone is—[with] a reel-to-reel tape recorder that they literally placed on top of the physical phone booth.

That technology feels so old-fashioned when you’re thinking about cell signals everywhere, sensors in every city, and yet the law that we’re applying comes from that 1967 case, and that creates the tension, creates the conflict, and it creates the need for an update. Those technologies have just been modernized. We just have better data to be able to watch and see what’s going on.

Ars Technica: Even though there are privacy concerns, things like CODIS and fingerprint databases have helped solve crimes. How did we deal with that? And can those same lessons be applied as we move into more disturbing things like facial recognition and AI?

Andrew Guthrie Ferguson: Facial recognition and AI can help solve crimes, even if you have to accept that there have been a series of false positives and false arrests that have also created problems. This isn’t a story that data is bad. It’s not a story that self-surveillance is bad. There are cases where I think we would all agree we would want law enforcement to use this data to solve particular crimes.

At the same time, the current default is, if we have created the data, it is basically available to law enforcement with a warrant and many times without a warrant. That might not be the default we want. The protective measures that give a lot of power and discretion to police over our most intimate data are probably not enough to strike that balance. What I want is for people to come up with rules that balance out where we want to let police have that information and what steps they have to go through. Maybe make it a bit harder to get some of the information, not necessarily preclude them from getting it. Right now, it’s pretty easy to obtain your Google searches; there’s an argument you don’t even need a warrant to get access to them. How do we come up with systems that balance that need and also recognize the real risks of giving the government that kind of power?

Ars Technica: You mentioned Google’s three-step warrant process as an example of a company trying to be responsible about what it does with its users’ data.

Andrew Guthrie Ferguson: Basically anyone with a Google-enabled device, be it a Google phone or Google Maps or Gmail on your phone, was for a period of time being tracked in what Google called Sensorvault—basically a location database of all of us. You and me, everyone else was being tracked. When police wanted to find out who was in a particular location, like if a bank was robbed, they would go through this three-step process to obtain a warrant, and it would progressively narrow down the anonymized list of phones before revealing identifying information.

That process was not mandated by the Supreme Court or Congress. It was created by a bunch of Google’s lawyers who thought this was a balance that respected the privacy issues of their customers. There’s an open question about whether a warrant is required at all. I say open because that case is going to be before the Supreme Court in April, and they’re going to decide whether or not the police need a warrant to get access to this information. Google filed an amicus brief saying that this was a search and warrants were required. But there are judges on the Fourth Circuit Court of Appeals, which heard the case that’s now up before the Supreme Court, who said no warrant is required.

Those judges reasoned that if you purposely knew you were giving your data to Google, who are you to say this violated an expectation of privacy such that the government needs a warrant? Then there are privacy advocates, including myself, who argue that at a minimum there should be a warrant to get this kind of sensitive data. And maybe even a warrant isn’t going to be protection enough, because really, what you’re doing is running a huge search against 500 million phones every time you want to use the equivalent of Sensorvault, and that might be too general to actually survive Fourth Amendment scrutiny.

So this wasn’t a requirement of the government or the law. It was a corporate decision that could be changed tomorrow. That issue is still unresolved and now going up to the Supreme Court. If the Supreme Court says it’s not a search, it means all of our data and all of these locational reveals—Google is just one example—is just available to police because we created it.

Ars Technica: You can say that we chose to give them our data, and maybe at first we did, but at this point we live in a digital society and we really can’t opt out. Does that have any legal bearing?

Andrew Guthrie Ferguson: It does. I think that has been convincing to the court. There was a case about whether, when you were arrested with a smartphone on you, police could simply search it without a warrant. The prior law said they could. They could search your purse, they could search your wallet, they could search your clothes, they could search your briefcase. Well, why wouldn’t they be able to search your phone? The court said, “No, digital is different. There’s too much revealing information.” You have to go to a judge and get a warrant. But there are definitely judges who have said, “You did consent. You literally checked the box that said I consent, and in doing so, you forfeited any Fourth Amendment right.”

The most revealing case in the book involves a smart pacemaker. There’s this guy who has a smart pacemaker, and it keeps him alive by tracking his heart. The data is also going to his doctor. So detectives go to the doctor’s office with a warrant and get the heartbeat data to use against the guy in a court of law. Why? Because apparently he was committing insurance fraud, claiming his house burned down when really it was arson. The detectives recognized that his heartbeat data would disprove his story of running around trying to rescue all of his worldly belongings.

It’s a criminal case. The detectives aren’t necessarily doing anything wrong. They’re trying to investigate and stop someone from benefiting when they shouldn’t. At the same time, you have a pacemaker that is keeping someone alive. It’s the kind of innovation we really want to promote. It’s really hard to say this is a choice to have your heart continue. I guess you don’t have to have a smart pacemaker, you could die, but that’s not really a choice. Yet the current rule would be, since you created it, it is available to police, at least with a warrant, and arguably, depending on the kind of pacemaker you have, maybe without a warrant.

Is it a choice to have a smart pacemaker? Yes. Is it a choice to forfeit your privacy rights over your own heartbeat data? No. Is there a law that sorts that out for us? Definitely not, which is why I wrote the book—to get people thinking about this. Because whether it’s your smart pacemaker, your period app, your smart toothbrush, whatever it is that you’re using to improve your life, the fact that that data is largely unprotected is a problem.

Ars Technica: You draw on how the Founders framed the Fourth Amendment for insight into possible solutions. What could be done?

Andrew Guthrie Ferguson: There’s a whole chapter on judicial solutions. If you’re a judge interpreting the Fourth Amendment, you can expand it beyond its analog age, thinking about why we might have an expectation of privacy with our smart device in our homes, even when connected to some third-party service provider. We still might want to protect that. One could craft that. Courts are split different ways about the protections of even things coming from our home. Same with your heartbeat and your location data. It’s not clear what the court will do in terms of location data.

There’s a different theory. One of the real harms that the Fourth Amendment was designed to protect against was this idea of rummaging, that British agents would be going into your home and rummaging through your goods and services, probably because you weren’t paying taxes on it, which was illegal. Or you were writing seditious treasonous letters complaining about the king. That’s really why we had the Fourth Amendment. Digital rummaging tests would protect against this idea of over-broad and expansive surveillance.

There are also legislative fixes. One of the realities we forget is that right now, today, the FBI could put a microphone in your living room and listen under a wiretap. Wiretapping was and is about as invasive a technology as you can imagine, and yet we aren’t complaining about it. The reason we’re not complaining is we have procedures in place. In order to get a wiretap, you have to go to an actual Article III judge. You have to explain that there’s no other way to get this information. It has to be for a very serious crime. And you have to report back to the judge about what happened and what you did with the information.

My argument is we can adopt that same higher standard for other forms of technology. If you want to get Ring doorbell camera data, or location data from Google, maybe an ordinary warrant isn’t enough. You could have a legislative model that sits higher than the constitutional floor. It presumes a Congress that wants to legislate at all. But if you’re a sitting senator, your data could easily be used against you. Not only is it a bipartisan issue, it is an issue that politicians should selfishly pick up on because it would protect them. Whoever is in charge of the government will probably weaponize data against their opponents, and we should in a bipartisan fashion limit that so it doesn’t get abused the next time—because the power will change.

Ars Technica: This is what you call your tyranny test.

Andrew Guthrie Ferguson: One of the times I feel this is most resonant, if you remember a couple years ago, people who are stalwarts of the Second Amendment were really worried that a Democratic government was going to come up with a federal list of all the gun owners in America. So at some moment there’d be a knock on the door and their guns would be confiscated. Well, with automated license plate readers, if you put those outside of the gun range and the gun show and where you buy your ammunition, you have your list.

You don’t need to have some government list. Literally, through the data that you give up just by living your life, you can show who has a gun in their home. That might worry people who believe the Second Amendment means that they should be able to have their own gun rights without the government necessarily knowing it, but it cuts both ways. Everyone is revealed. Everyone is exposed. And everyone should be worried about their government having that data that potentially could be used against them.

Ars Technica: What is the potential scenario that worries you the most as we move forward? As AI tools in particular keep developing, are we moving into even more serious uncharted waters?

Andrew Guthrie Ferguson: Yes. I think AI is going to supercharge police power in ways we’ve never seen. We’ve all lived in a world where we have cameras on our streets. If there was an individual problem, they could go back into an individual camera. The idea that all those cameras can be fused together in a central command center, like a real-time crime center, and AI video analytics can then observe every single object, foreground and background, identify man, woman, child, cat, door, car, what kind of car, and then track those objects throughout the city—that’s a whole new power that we’ve never had before.

There are currently no real rules limiting that. There used to be a cautiousness in rolling out new technology. But we are watching the federal government, in the guise of immigration enforcement, actually use all the technology that does exist, but without any guardrails or concerns. So for the first time we’ve seen mobile facial recognition in the wild. It’s not that the technology wasn’t theoretically around, but we did not see local police taking a smartphone camera out for facial recognition. We’re now seeing that with ICE and Customs and Border Protection (CBP). We knew that you could track individuals with location and social network analysis. But now we’re seeing that being used by ICE to identify the areas and people that they want to target, using the power of these new systems in ways that we haven’t seen before in local law enforcement.

The thing I want people to pay attention to is that there’s nothing stopping ICE and CBP from doing that kind of work, but there’s also nothing stopping your local law enforcement from doing that. The only difference was political will; they thought there might be a backlash if suddenly police were just randomly using a facial recognition scan against ordinary people on their way to work. It could happen. There’s no law that says it can’t happen now.

I wrote this book fearing that this might happen, and then it happened, which is problematic for the world. But it is so much easier to get people to see the danger now because literally it’s happening on the streets in front of us in a way that it hadn’t been a year ago when I was doing the final edits.

Ars Technica: Some people might think they have nothing to fear because they haven’t done anything wrong and have nothing to hide. You point out that being innocent does not necessarily protect you. 

Andrew Guthrie Ferguson: What is being innocent? If you are a woman and you find yourself pregnant in Texas or Idaho and you Google “first signs of pregnancy” or “abortion services,” you are now risking the power of the state to investigate your digital trails. When you text your mom or your best friend for help, or you drive from Texas to a different state, your license plate is being captured by cameras. If you go out protesting with a No Kings protest sign, you walk past your own Ring doorbell camera, which surveils you far more than it surveils anyone else. If that becomes criminalized, you are now part of a group of people that’s supposedly interfering with law enforcement’s role in immigration or protesting the existing government.

We’re starting to see how, when the norms are gone and criminal law can be used against you, everyone is at risk. Jim Comey is probably one of the straightest arrows you can imagine, former director of the FBI, career prosecutor. He was facing criminal charges, and his data was used against him. Why? Because he did something as part of his job. If Jim Comey can be targeted, anyone can be targeted. At the same time, when someone does something horrible and harms a loved one, of course you want the government and the police to do what they can do. The problem is we have no way in the current law to differentiate when police can get access to the data and when they cannot.

Ars Technica: So what can we do individually? This seems like such a huge problem. The solutions require legislation and laws and judges and courts and we are unable to fully unplug. There’s no way to not be exposed.

Andrew Guthrie Ferguson: I want to live in a world where we can have these consumer conveniences and smart pacemakers and Echo devices, but not worry that that data could be used against us by our government. It can probably be used against us by Google or Amazon, but not by our government. So I think framing this as what individuals can do is actually very hard, because you and I can’t negotiate with Amazon or the FBI. But collectively we can push back on the growth of these technologies. We’re seeing that with community groups protesting Flock cameras or ShotSpotter sensors or certain kinds of police uses, even drones. There’s a whole chapter in the book about ways to support legislators who actually care about this.

Support your local journalists, because in some ways this book could not have been written but for the exposés that journalists have made their careers doing—the constant reporting, because there are always problems with this technology. But it’s also about educating ourselves. We are all living in this world where we have accepted technology in our lives. We think we’re being smarter, and we don’t see that the duality to that smartness is surveillance. I think we need to push our legislators to act. I think we need to push our judges to act. We can make some individual choices about what we do, but I don’t necessarily want to live in a world where we just can’t have the technology. I want to have rules about how people can use the technology against us.

Ars Technica: Are you hopeful or realistic or pessimistic about the possibility of getting these new constraints in place?

Andrew Guthrie Ferguson: I think if the message gets out that everyone is equally vulnerable and that these protections apply across the board, there could be some bipartisan recognition and a rule set put out there. It might not, after a full debate, be exactly as privacy protective as I would want, but I’d like us to have that debate. Do we really want to have anything we create be possibly used against us? Is there no limit? There’s literally nothing too private: your diary to yourself, your period app, your smart pad. If you create the data, with a warrant and sometimes without it, the government can get access.

I don’t think that’s the world we want to live in.