Rethinking Listening Devices

A recurring truth about the future of technology is that it sneaks up on us. Technology is adopted before we’ve all had time to consider its consequences and the potential harms it might bring. This is certainly true of the rapid spread and normalization of digital voice assistants in our homes. This blog post covers a little background on that technology, explains why we choose to call these devices “listening devices”, and introduces a small project we are launching to interrogate our relationships with them as they move into our homes.

A “Normal” Technology

These listening devices are everywhere. If you don’t have one in your home, the odds are good that you’ve been to another home that does. Home “smart speakers” have become surprisingly ubiquitous. As of January 2020, an estimated almost 90 million of these listening devices were in people’s homes in America alone. The majority of these units were bought for homes that did not previously have a listening device, bringing the user base up to 34.4% of American adults. This rise in popularity is frequently compared to the rise of smart thermostats, and that comparison can actually help us understand exactly what these listening devices are doing in our homes.

Google bought the smart thermostat company Nest in early 2014 for a whopping 3.2 billion dollars. For a while, it wasn’t clear why a search engine company would purchase a home appliance company, but as the years have gone by it has made more sense. Google isn’t just a search engine; it’s one of the biggest collectors of data in the world. According to Google’s own privacy policy, they collect and store data about your home gathered by your Nest devices. This data includes temperature, humidity, power usage, and even video from Nest cameras. If all of this is being done by devices meant to regulate temperature, one is left wondering what devices intended to listen to you are doing.

Smart speakers or listening devices? It’s all in a name. (Icons via the Noun Project: smart by Nithinan Tatah, speaker by wira wianda, listening by bambaleq, device by DinosoftLab)

As they have spread, we’ve come to call these “smart speakers”. This name softens the blow by leveraging two well-known technological descriptors. “Smart” comes from the world of “smart phones”, devices that have overtaken the world and become global norms of everyday life. “Speakers” are devices that we’ve co-existed with for almost a century, bringing us joy through music, information from radio news, and more. How could a “smart speaker” be a threat? But there is an alternative we could use that more accurately describes their function. We choose to call these devices “listening devices” because the name foregrounds the role they play in our lives. They “listen”, actively observing what they hear. They are “devices”, designed based on a plan to achieve some aim, doing far more than simply speaking. We very intentionally call these things “listening devices” to provoke a more uncomfortable response.

A Very Real Privacy Risk

In order to record you, a listening device is supposed to wait for a starting command like “ok, Google” or “hey, Alexa”. However, in 2019 [VRT NWS](https://www.vrt.be/vrtnws/en/2019/07/10/google-employees-are-eavesdropping-even-in-flemish-living-rooms/) (a Belgian news network) was given access to over a thousand of these recordings from a firm subcontracted by Google, and found that many of them were not preceded by “ok, Google”. A hundred and fifty-three of them didn’t even contain anything that sounded close to that phrase. It is also very common to ask your listening device medical questions, and VRT NWS found that Google’s subcontractors have heard those questions as well. The Guardian published a very similar exposé about Apple’s Siri service last year, and Amazon’s Alexa has faced the same scrutiny. The technical goal, from these companies’ perspective, is to use manual review to improve the devices’ speech and phrase recognition; however, the subsequent media storm clearly indicated a disconnect between the public’s expectations and internal Silicon Valley ethical norms.
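These false activations are easier to understand once you consider how wake-word spotting works: the device continuously scores incoming audio against the wake phrase and activates whenever the score crosses a sensitivity threshold, so near-misses slip through. The toy sketch below illustrates the idea using text similarity in place of acoustic models; the threshold value and example phrases are our own illustrative assumptions, not any vendor’s actual implementation.

```python
from difflib import SequenceMatcher

WAKE_PHRASE = "ok google"
THRESHOLD = 0.7  # illustrative sensitivity; real devices score audio, not text

def is_activated(transcript: str) -> bool:
    """Toy wake-word spotter: activate if the phrase sounds 'close enough'."""
    score = SequenceMatcher(None, WAKE_PHRASE, transcript.lower()).ratio()
    return score >= THRESHOLD

print(is_activated("Ok Google"))              # True: the real wake phrase
print(is_activated("oh google"))              # True: a near-miss false activation
print(is_activated("good morning everyone"))  # False: unrelated speech
```

Lowering the threshold makes the device more responsive but also more prone to exactly the kind of accidental recordings VRT NWS uncovered.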

We need more investigation to understand exactly what these devices are doing, and research to question the potential implications. For example, the Northeastern Mon(IoT)r Research Group dug into the question of how often devices are listening. They found that TV shows trigger numerous activations that record audio to the cloud. Read their “When Speakers are All Ears” study key findings for details. We strongly believe this is exactly the type of thorough work we need from the academic community to help drive critical reflection, technology development, and policy responses. While this work begins to surface the different ways these devices could be breaching our privacy, people give little thought to the matter when they invite one of these listeners into their homes. There have been numerous incidents of these recordings being offered as evidence against people in legal settings; it is still unclear to us just how often that is happening or how successful it is. At a high level, however, it is clear that our collective desire for these devices’ helpful features far outweighs our concern over potential harms.

A Design Problem

These devices are objects we place in our homes. What about their form? Does it offer clues about their function? Most of these home listening devices are blobs or obelisks meant to blend in with a modern home; they fade into the background so that all that is left is an omnipresent voice that responds when called for. We know that these devices are listening to us, yet their design gives no strong indication that they can hear us. They universally adhere to the “Scandinavian sleek” aesthetic that characterizes our time.

lineup of devices A variety of “smart speakers”: cylinders and orbs (source: Mon(IoT)r Research Group)

Our theory is that this fading away is completely intentional. Time and time again we’ve seen Silicon Valley draw inspiration from norms in sci-fi literature. The “talking home” is a recurring narrative. HAL 9000 and the Starship Enterprise are just two of the most popular examples of speaking environments, where the intelligent AI system simply fades into your surroundings. But this narrative design trick impedes any critical assessment of the technology because it is “out of sight, out of mind”.

This is a problem design can address. How can we design a listening device that prompts the user to think critically about its use? One approach would be to pull from a different sci-fi narrative trope: the helpful assistant robot! R2-D2, Rosie from The Jetsons, the robot from Lost in Space. These are mobile characters that physically embody the computational assistant, though they lack the cloud connectivity of today’s technology.

robot from Lost In Space the TV show That delightful robot was trying to warn us the whole time! (source)

An Alternate Path Forward

We are planning to build something that bucks the trend by being transparent about its role as a large set of ears in your home! Our goal is to redesign the listening device to support critical reflection. To accomplish this, we are going to have to design and build a listening device that truly lives up to its name and looks like nothing else on the market. We have a few ideas about how to do this, and a few sketches of potential designs.

This project builds on work from a variety of researchers, technologists, and artists. Karmann and Knudsen’s Project Alias is a teachable “parasite” that is “designed to give users more control over their smart assistants.” Sitting on top of a device, it plays the “man in the middle”, actively suppressing or altering the device’s ability to listen. It is an aggressively contestational design. Rubez Chong explored design interventions inspired by camp aesthetics, offering alternative objects that “hypervisiblize surveillance” within a larger critique of surveillance technology and societal norms. The Algorithmic Justice League’s Drag Vs. AI project offers a withering take on the rapid adoption of facial recognition technologies by leveraging the popular-culture power of drag makeup to confuse the algorithms. These projects are just a handful of those that inspire our thinking and offer us a design language to spark critical reflection on the technologies around us.

So what direction do we plan to go? One obvious core idea is that the device should give the user visual or audio feedback while it is listening. Another way to draw attention to the device listening to us leverages those sci-fi characters mentioned earlier: we can personify it to be friendly, attentive, or even scary. This is all in the sketching and device-prototyping stage now, so if you are interested in working with us on this project or know of related work, please feel free to reach out to us.
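As a thought experiment, the feedback idea reduces to a tiny state machine: the device is either idle or conspicuously listening, and every transition drives an unmistakable cue such as a bright light or a chime. The sketch below is purely illustrative; the wake word and cue names are our own placeholders, not any real product’s behavior.

```python
from enum import Enum, auto

class State(Enum):
    IDLE = auto()       # microphone armed but not recording; no cue shown
    LISTENING = auto()  # actively recording; cue must be impossible to miss

class FeedbackListener:
    """Toy listening device that always signals its state to the room."""

    WAKE_WORD = "hey device"  # placeholder wake word

    def __init__(self):
        self.state = State.IDLE
        self.cues = []  # stand-in for driving an LED ring, chime, etc.

    def hear(self, phrase: str):
        if self.state is State.IDLE and self.WAKE_WORD in phrase.lower():
            self.state = State.LISTENING
            self.cues.append("light on + chime")   # loud, visible start cue
        elif self.state is State.LISTENING:
            self.cues.append(f"recording: {phrase!r}")

    def finish(self):
        if self.state is State.LISTENING:
            self.cues.append("light off")          # explicit end-of-listening cue
        self.state = State.IDLE

device = FeedbackListener()
device.hear("Hey device, what's the weather?")
device.hear("will it rain tomorrow?")
device.finish()
print(device.cues)
```

The key design choice is that no transition is silent: unlike today’s blobs and obelisks, this device cannot start or stop listening without the whole room noticing.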

sketches of potential device designs Listening device concepts and form inspiration (by Eric Lord)

Eric Lord

Eric Lord is an industrial designer from South Florida. He received his bachelor of science degree from Virginia Tech's College of Architecture and Urban Studies in 2019 and is currently working towards his Master's in Experience Design at Northeastern University. His primary interests lie in consumer products, kitchenware, and footwear.

🌎 - https://lorderic.myportfolio.com on the web

Rahul Bhargava

Assistant Professor, Journalism and Art + Design, Northeastern University

📨 - r.bhargava@northeastern.edu on email
🐦 - @rahulbot on Twitter
🌎 - https://rahulbotics.com on the web

Filed Under » internet-of-things, design
