Please be aware that if your spoken words include personal or other sensitive information, that information will be among the data captured and transmitted to a third party through your use of Voice Recognition.
The comparisons to 1984's two-way "telescreens" are straightforward. This kind of language suggests that while you're watching TV, Big Brother may be watching (or listening to) you. Samsung has taken to its blog with an explanation and has edited the policy, but that has not assuaged everybody's concerns.
To be clear, there are no reports of the captured audio being abused or misused. Samsung has also responded to news reports about the policy by stating it adheres to "industry-standard security safeguards and practices, including data encryption, to secure consumers' personal information and prevent unauthorized collection or use."
Still, users are left with little way to verify those security promises, and—as is unfortunately all too common with online services and networked devices—no say in the terms of any tradeoff between privacy and convenience. There may be good technical reasons to perform voice recognition on a remote server rather than on the device itself, but users aren't given that choice.
One critical way to address these problems is to make it easier for users, or people acting on behalf of the users, to conduct security research on smart device software, and to develop alternate or modified firmware that fixes vulnerabilities and acts more faithfully on behalf of users. But those activities are under a cloud of legal uncertainty, thanks to Section 1201 of the Digital Millennium Copyright Act. Section 1201 ostensibly exists to give legal backing to the DRM software that wraps up digital media, but—as 16 years of unintended consequences demonstrate—it actually strips users of control over the technology they’ve lawfully purchased, and unfairly limits competition.
Every three years, groups can propose temporary exemptions to Section 1201 in a time- and labor-intensive rulemaking process run by the Copyright Office. (EFF is proposing six classes of exemptions.) The Software Freedom Conservancy, in conjunction with the law firm Tor Ekeland, P.C., has proposed an exemption to allow users to install alternate software on their own smart TVs, without the authorization of the manufacturer.
Security issues on these devices are real. To pick just one example, a group of researchers demonstrated a vulnerability that allowed the camera in a Samsung SmartTV to be hijacked to send its data directly to the attackers. Later, those same researchers brought smart TVs to the kids' section of the Def Con hacker gathering, where a twelve-year-old hacker called Cy-Fi discovered and reported a Samsung SmartTV vulnerability and received a $1000 bounty.
The urgency of this issue is obvious: smart TVs sit in your living room or bedroom, and can have microphones, cameras, and access to your TV-watching habits—which can produce incredibly personal data. If security researchers can’t examine the software these devices run, and developers can’t work on alternatives or modifications, then users are bound by whatever terms their manufacturers want to put forward, and must trust that they’ve been implemented as promised.
Given that these devices are networked and can often be updated remotely, user privacy is at the mercy of not just the manufacturer, but anybody who can convince, coerce, or compromise the manufacturer into modifying the software or collecting additional information.
The stakes only get higher as more of our devices go online and collect more information about us. That a bloated extension to copyright law could jeopardize our ability to own and control these devices is a major problem—and it gets more and more severe each day.