In January of 1946, George Orwell published a review of Yevgeny Zamyatin’s We, the dystopian novel that preceded and, some say, heavily influenced both 1984 and Aldous Huxley’s Brave New World.
Traces of We can certainly be found in both books. In Zamyatin’s lesser-known novel, the protagonist, a character named D-503, is a genius mathematician whose work for the “One State” includes the design and implementation of what are called “street membranes,” a camouflaged technology that records any and all conversation that takes place outside.
The resemblance to Orwell’s “telescreens” is immediately obvious. But where the inhabitants of We’s dystopia reside in a panopticon of smooth, glass towers where no visual privacy is afforded, Orwell brought the concept behind Zamyatin’s membrane technology up from the streets and into people’s living quarters, so that “Nothing was your own except the few cubic centimeters inside your skull.”
The concept of constant surveillance at the hands of big brother technology is something of a cliché at this point in history, and that’s both a credit to this well-known literary tradition and a detriment to our way of thinking. The unfortunate irony is that the label “cliché” carries the opprobrium of superfluousness, and important ideas end up glossed over by virtue of their very familiarity.
One can’t help but wonder if this is a contributing factor to society’s current complacence with technology that wouldn’t feel out of place in such novels. As the last decade has shown, the greater the technological innovation, the farther that technology reaches into the personal lives of its users. The smart speakers we use run our homes and even act as fashion consultants, wearable tech monitors our vitals, and phones with impressively capable eyes and ears almost never leave our sides. Even the idea of AI-run future cities isn’t that far off anymore.
Given the fact that, by some accounts, digital voice assistants alone are projected to outnumber the human population by 2024, it’s worth considering the risks, benefits, and history associated with devices that blur the line between convenience and liability.
Sirens find their voice
Once Apple introduced the first iPhone and kicked off the smartphone revolution in earnest in 2007, the race was on to fit increasingly smaller and smarter technologies into all aspects of consumers’ lives. This effectively led to the so-called Internet of Things (IoT), wherein tech companies sought to connect just about any device that could be turned on and off to the web. In mobile phones and everything else, hardware started to shrink behind smooth glass and beveled edges while the software running it kept getting better and better.
Fitting sensors and cameras into printers, refrigerators, and thermostats was no longer a wild science-fiction dream. Throughout the early 2000s, however, this technology was a one-way street. Tech interfaces were coarser than they were elegant, relying on buttons and screens to carry out the commands punched in by their users. That slowly began to change as software capabilities caught up to corporate ambitions and consumer expectations about how ergonomic these devices could be.
As The Verge notes, voice recognition technology was about to break onto the scene in a big way, resulting in a technological tipping point whose origins could be argued to go all the way back to the phonograph. Numerous machines and programs attempting to transcribe natural speech had been developed throughout the 1970s and 1980s, but all of them were hindered by the phonological reality that speech doesn’t usually emerge in easily digestible, separated chunks, but rather as a constant stream of sound.
One significant development for voice recognition software came in 1997, when a company called Dragon Systems released an application called NaturallySpeaking. The program allowed users to speak into a microphone connected to their computer and watch their words appear on a document on screen—no pausing between words necessary. At the time, Fortune rightly described it as “a breakthrough in voice-recognition technology.”
Jump ahead just a few years to 2010, when Apple bought the Siri voice assistant from Siri Inc. for a reported $200 million, according to TechCrunch, and launched it on the iPhone in the fall of 2011. Users were now having (somewhat stilted) verbal interactions with their phones. Amazon released the Amazon Echo, featuring the now-famous Alexa voice assistant, just four years later. People loved it. Bloomberg reports that the company sold a million of the devices in the 2015 holiday season alone.
Not everyone was elated, however, and media outlets published many a piece on the potential Orwellian nature of the technology. The conversation continues today as consumers and regulatory bodies ponder the nature of the trade-offs between modern convenience and privacy.
Siri-ous surveillance technology
Tinfoil hats aside, there are substantiated concerns over these technologies’ respect (or lack thereof) for consumer privacy. Tech giants like Google live and die by the data they collect from their users, which includes the videos you watch, what you search for, and everything you upload to the cloud. They also have a record of being less than forthcoming about where that data goes and who has access to it.
One particularly unsettling dataset these companies collect concerns your whereabouts. In 2017, Google was caught collecting location information on Android devices even when location services weren’t enabled, a practice it changed only after media outlets brought it to light. And, as USA Today reported when the story originally broke, Google actively reaches out, through its advertising platform, to companies that target consumers based on their whereabouts.
The consumer’s political leanings are another precious commodity in the digital world. In 2018, The Guardian reported that Cambridge Analytica, a political analytics firm, had harvested, without permission, information from 50 million Facebook profiles to build a system of personalized, politically oriented advertisements. Facebook had been aware of the practice since 2015 but, in what has become an increasingly familiar pattern among tech giants, only took action to correct it when the story broke.
One would hope that such events might inspire these companies to pay better heed to consumer privacy, but there is little evidence that this is the case. Just this year, The Washington Post revealed that even apps on the Apple App Store that are billed as high-privacy, zero-data-gathering programs may still be collecting information and passing it on to other companies.
Part of the problem stems from how Apple chooses to define a term like privacy, and from how it weighs that privacy against a very large financial incentive to look the other way on the ethics of the technology it produces or provides access to.
Smart speakers and their AI voice assistants are not exempt from consumer worries, either. In 2019, The Guardian covered a number of incidents involving Amazon’s Alexa, for example, including the device activating without hearing its “wake” words, and even sending audio conversations to complete strangers. Amazon asserts these incidents can be chalked up to human and technical errors that don’t amount to anything nefarious.
It’s difficult to take them at their word. Speaking with the publication, a former Amazon employee frames a common consumer concern rather bluntly: “Having worked at Amazon, and having seen how they used people’s data, I knew I couldn’t trust them.”
All of this makes the privacy assurances issued from these companies ring rather hollow. Many have noted that the only way to prevent companies like Google from tracking your every online move is to avoid using these companies’ products altogether. That is entirely impractical for the majority of us, especially in a post-pandemic landscape in which the focus of work and life overall has shifted even more heavily to the online space.
O Big Brother, where art thou?
A telling example of this comes from early 2019, when Gizmodo reporter Kashmir Hill experimented with cutting all Google services out of her personal and professional life to distance herself from privacy worries. The result? Just about everything she did became more difficult, from GPS navigation to basic online research and logging into online storage services like Dropbox.
It isn’t just convenience that went out the window, either. As Google’s online services are generally free, replacing them with others came at a financial cost that quickly accumulated. “If I stick with this,” Hill writes, “it will be a more costly way to live.” Many in society simply can’t afford those costs.
In an interview with Georgia State University at the end of 2020, Michael Landau, law professor and co-director of the Center for Intellectual Property at Georgia State Law, expressed a similar frustration:
“A year ago, I would have said, just don’t use them. Your life won’t end if you’re not on Facebook; your life won’t end if you’re not on Zoom or Webex […] But now you have to be. So, I would say that the onus is up to the institutions that run the websites to cut back on the services that are tracking us and give us a chance to opt out.”
Big tech companies are well aware that the services they provide are simply too useful, or too economically entrenched, to pass up, and they are only getting better at taking advantage of this.
Healthcare is just one industry set to change drastically in the coming decade as AI and big data begin to further meld with institutions in that field. In fact, companies like Apple, Microsoft, Google, Facebook, and others are so certain that this change is coming that they’ve started battling to acquire talented AI startups in recent years.
Health-based phone apps already gather data on sleep and workout routines while wearable tech such as the Apple Watch can collect a host of information, including blood oxygen levels and hand-washing habits. Such information is as sensitive as it gets, and while Apple, for example, promises it keeps this data encrypted on the devices themselves, it’s hard to take the company’s word for it.
Combine that with the likelihood of AI getting so good that it really can lead to healthier lifestyles, earlier and more accurate diagnoses, and even improved cancer detection, and the current incentives of simple convenience will seem paltry in comparison. The choice between health and privacy is a false one, but it is a choice we will nonetheless be confronted with in the not-so-distant future.
It’s crucial to understand that, however much convenience—even life-saving convenience—a technology offers, its trade-offs increase correspondingly. Writing in The Financial Times, historian Yuval Noah Harari illustrates just how precious information regarding health really is:
“If corporations and governments start harvesting our biometric data en masse, they can get to know us far better than we know ourselves, and they can then not just predict our feelings but also manipulate our feelings and sell us anything they want—be it a product or a politician. Biometric monitoring would make Cambridge Analytica’s data hacking tactics look like something from the Stone Age.”
And health has been the focus of every major governmental institution in the world since early 2020. The idea of a government spying on its citizens aligns much more directly with classic dystopian fears, and those fears aren’t exactly unfounded. The COVID-19 pandemic has proven to be the perfect debating ground for the question of relinquishing privacy for the greater good.
Business Insider recently reported on how numerous world governments are attempting to combat the pandemic through an increased use of potentially invasive technologies, most of which involve apps that track citizens’ locations, sometimes to unsettlingly accurate degrees.
Despite their rather haphazard approach to customers’ data privacy, tech companies aren’t always eager to comply with government requests for similar information, however. Following the deaths of three people at the hands of a mass shooter at a naval base in Pensacola, Florida in late 2019, the FBI requested Apple’s assistance in unlocking the shooter’s iPhone. Much to the FBI’s chagrin, Apple refused, reinvigorating a vital discussion on the limits of big brother surveillance, even in extreme cases.
Such moral grandstanding is likely to be the exception rather than the rule, however, especially in countries whose governments have historically held their citizens’ privacy in low regard. Harari, for one, invites us to consider what a country like North Korea might look like if it required its citizens to wear biometric bracelets round-the-clock. If technology can be used to monitor what fevers and illness look like, it can be used to identify what certain emotions and behaviors look like.
And that’s just one stomach-turning possibility. Humanity seems to have built itself a fantastically shiny, seaworthy vessel with only a barely working knowledge of the weather conditions it’s heading into or of how the steering apparatus works. Sociobiologist Edward O. Wilson perhaps most succinctly summed up this innovation-first, ethics-later attitude at a debate at the Harvard Museum of Natural History in 2009:
“The real problem of humanity is the following: we have Paleolithic emotions, medieval institutions, and god-like technology.”
It’s unlikely that those emotions will catch up to our ability to innovate any time soon. For now, we might do better at keeping an eye on the institutions that are perhaps too quick to place themselves on the same level as the technology they create.