BROOKLYN LAW NOTES
Fall 2017


How the Internet of Things challenges our understanding of privacy, security, and ownership

By Professor Christina Mulligan

In the 1990s and early 2000s, technology enthusiasts wondered how the rapidly growing Internet would change everyday life. The idea that the Internet—“cyberspace”—was like a place you could travel to, a Wild West separate from the constraints imposed by governments and society, gripped imaginations. In a famous 1996 essay, Grateful Dead lyricist and Electronic Frontier Foundation co-founder John Perry Barlow set forth a utopian vision for the independence of cyberspace from powers in the physical world, the “weary giants of flesh and steel.”

At the time, the essay resonated. Anyone could hook their computer up to a phone line, dial a connection, and find their minds and words transported to chat rooms and web pages with people across the globe, even as their bodies stayed sitting in a chair, typing on a clunky beige keyboard.

But cyberspace didn’t develop as Barlow and others anticipated. Today, we don’t glue ourselves to a chair in the corner of a den to “go online.” The Internet is with us everywhere we go, not only on our cell phones and tablets and laptops, but also in our home appliances, in our cars, and on drone-mounted cameras that we fly around outside. Developments in computing technology didn’t lead to freedom from a physical reality—they led to augmenting the reality we already inhabit.

Maybe you wear an activity tracker or wristwatch that records how much you have walked, stood, and exercised throughout the day. Maybe your digital camera or phone automatically uploads the pictures you’ve taken to the Internet. Maybe your smart vacuum has created its own map of the layout of your home in its never-ending journey to remove dust from your carpet. Network-connected devices—better known as the objects that make up the “Internet of Things”—allow both their users and their manufacturers to behave in ways that were not technologically possible even five or 10 years ago. Users gain functionality, such as the ability to automate tasks (like vacuuming) or acquire information that could not easily be gathered before (like counting every step you take). Meanwhile, product makers gain the ability to know and change what devices do after they leave the store. Self-monitoring products can communicate to their makers what they have been up to; manufacturers also can install updates in devices after they’ve been sold, improving their security or changing their functionality.
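To make those two capabilities concrete, the short Python sketch below shows a device that reports usage data to its maker and accepts an over-the-air update in the same exchange. It is a minimal illustration built on invented assumptions: the class names, payload fields, and version numbers are all hypothetical, not any real vendor’s system.

    # A device that "phones home" after the sale and accepts remote updates.
    import json
    from datetime import datetime, timezone

    class ManufacturerServer:
        """Stands in for the maker's cloud backend (hypothetical)."""

        def __init__(self):
            self.telemetry_log = []          # everything devices report home
            self.latest_firmware = "2.1.0"   # version currently being pushed

        def receive_telemetry(self, payload: str) -> str:
            self.telemetry_log.append(json.loads(payload))
            return self.latest_firmware      # reply tells the device what to run

    class SmartVacuum:
        def __init__(self, serial: str, server: ManufacturerServer):
            self.serial = serial
            self.server = server
            self.firmware = "2.0.0"
            self.rooms_mapped = ["kitchen", "hallway"]  # data gathered in use

        def phone_home(self):
            # The device reports what it has been up to: here, its room map.
            payload = json.dumps({
                "serial": self.serial,
                "firmware": self.firmware,
                "rooms_mapped": self.rooms_mapped,
                "timestamp": datetime.now(timezone.utc).isoformat(),
            })
            advertised = self.server.receive_telemetry(payload)
            if advertised != self.firmware:
                # Over-the-air update: functionality changes after the sale.
                print(f"{self.serial}: updating {self.firmware} -> {advertised}")
                self.firmware = advertised

    server = ManufacturerServer()
    vacuum = SmartVacuum(serial="RV-0001", server=server)
    vacuum.phone_home()
    print("The maker now knows:", server.telemetry_log[0]["rooms_mapped"])

Notice that the owner never appears in this exchange; the device and its manufacturer negotiate the device’s behavior between themselves.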

Innovation or Invasion?

Networked devices raise a host of hard legal questions, largely separable into questions about privacy, security, and property. In the privacy realm, we are beginning to ask whether there should be any limits on what devices can record and share with their manufacturers, and how that information should be used by device sellers and other parties with whom they choose to share that data. In one recent controversy, iRobot, the company that makes and sells the popular “smart” vacuum Roomba, made headlines when it considered selling the maps Roomba makes of users’ homes to third parties.

How much unwanted invasion into our private lives is permissible? We’re all familiar with apps and streaming services that let us play games and listen to music for free in exchange for watching advertisements, but what if our smart toaster makes us listen to an ad for a new brand of English muffin because it knows we are toasting a bagel? Should your blender be able to try to sell you a new brand of diet drinks before you make a smoothie?

Closely related to privacy concerns are security concerns. Networked objects can be hijacked and used to target their owners; in one famous example, someone hacked a network-connected baby monitor and used it to spout obscenities at a small child. Even more troubling are cases in which devices are co-opted not to cause problems for their owners, but to participate in completely separate attacks. In a distributed denial of service (DDoS) attack, many computers flood a target computer with requests, overloading the target’s systems and stopping it from functioning normally. Last year, a DDoS attack executed by co-opted smart appliances rendered several major websites, including Twitter and Spotify, inaccessible. The appliances had been hacked and programmed to participate in the attack while their owners remained ignorant of the harm being caused by devices in their own homes.

The danger in these attacks is that, because the harm is not felt by the networked product’s manufacturer or purchaser, manufacturers won’t necessarily have incentives to make their devices secure enough to fend off outside attacks. Incidents like these spurred renowned security researcher Bruce Schneier to state in testimony before Congress, “It might be that the Internet era of fun and games is now over, because the Internet is dangerous.” Indeed, as more mobile objects, including cars, are designed with embedded computers, we have to grapple with the striking realization that the Internet can become physically dangerous. Hacked or error-filled code can cause a networked device to malfunction and physically harm the people or environment around it. Scholars and policy analysts are now asking how the law can be used to create the right incentives to make secure devices and, given that all major software projects inevitably have bugs, what kinds of standards can be used to determine whether a product is secure enough.
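A toy simulation makes the arithmetic of such an attack vivid: each hijacked appliance contributes a trivial trickle of traffic, but the aggregate drowns the target. The Python below is an invented illustration; the class names and figures are not drawn from any real incident.

    class TargetServer:
        CAPACITY = 10_000  # requests per second it can answer (invented figure)

        def __init__(self):
            self.load = 0  # requests per second currently arriving

        def handle(self, n_requests: int):
            self.load += n_requests

        def responsive(self) -> bool:
            return self.load <= self.CAPACITY

    class HijackedDevice:
        """A smart appliance running attacker code its owner knows nothing about."""

        def flood(self, target: TargetServer):
            target.handle(50)  # trivial for one device, ruinous in aggregate

    target = TargetServer()
    botnet = [HijackedDevice() for _ in range(100_000)]  # many co-opted appliances
    for device in botnet:
        device.flood(target)

    print(f"Offered load: {target.load:,} requests/s")
    print("Target still responsive?", target.responsive())  # False

No single appliance does anything its owner would notice, which is precisely why the incentive problem described above is so stubborn.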

What’s Yours Is Theirs

The Internet of Things also raises challenging questions about property rights. While this area may seem less fraught than privacy or security, the question of who has what rights to the objects that make up the Internet of Things has direct and critical implications for privacy and security issues as well.

Personal property rights have historically been simple. Objects such as watches, jewelry, cameras, and vacuum cleaners were usually owned in “fee simple”—purchasers of those products owned, to phrase their rights colloquially, the whole thing, forever. Once a product was sold, the manufacturer’s legal power to direct its use was done. Simply put, you owned the things you bought, and you could do with them as you pleased.

Not so anymore, at least when the objects you buy contain computers and run software. If you’ve “upgraded” from a Hoover to a Roomba or from a Rolex to an Apple Watch in recent years, you might be surprised to learn that your “ownership” of that appliance or accessory has become a lot more complicated. You own the chassis: the physical shell of the device, along with the wheels of the Roomba or the strap of the watch. But if the device you buy is like most others on the market, you don’t own the copy of the software running inside it. You likely had to agree to a set of terms when you turned the device on, or your device came with a slip of paper stating something like, “This software is licensed and not sold to you.” The terms might have stated that the software was licensed “for personal use” only, or for “noncommercial” use. They might have stated that the software and its license could not be transferred to another person, or could be transferred only under certain circumstances (such as through an officially sanctioned resale program). Or they might have specified that the purchaser was not allowed to change the code, or that any repair involving access to the device’s software could be performed only by an officially licensed repair person.

Courts tend to enforce these license agreements, although scholars disagree about whether it is appropriate for manufacturers to license, rather than sell, the copies of software in smart products. Most device sellers opt to license use of software copies to consumers, denying them the benefits of ownership and imposing restrictions on how the software may be used or transferred. Manufacturers could choose instead to sell the software in their products, but almost none do because using licenses affords them greater control over how the product is used. Selling copies of the program would trigger the “first sale doctrine” and other copyright exceptions, which would give consumers roughly the same rights to the software copy that the common law would give them over purchased nondigital products: the right to have and use the copy and to resell it. In other words, if manufacturers sold the software embedded in their devices, buyers would have roughly all the same rights in their digital cameras and smart watches that they have in their older film cameras and analog watches. Several scholars have critiqued the notion that the copyright statute contemplates the idea that use of a copy can be “licensed” indefinitely, or that a manufacturer can avoid transferring ownership of a copy to consumers merely by stating that the work is “licensed, not sold” and is subject to restrictions. But despite these sorts of arguments, courts typically have found that licensing copies is permitted and does not amount to a sale of that copy.

Product manufacturers further fortify the control that licensing affords by using what’s known as digital rights management (DRM) technology or technological protection measures (TPMs). If you have ever had to type in a password to open an encrypted file or log into an account to play a piece of digital media, you have encountered DRM. A prominent recent example is John Deere’s ongoing effort to force buyers of its tractors to use only Deere-approved repair persons; no one else is authorized to access the software inside the tractors. Deere maintains that it has the authority to decide who can access, change, and repair that software because it is merely licensed to the farmers. Meanwhile, farmers using John Deere tractors have publicly complained that they can’t afford to wait for an official John Deere repair person to come out to their farms when a tractor breaks down.
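The technical side of such a lock-out can be surprisingly simple. The Python sketch below shows the general shape of a technological protection measure: embedded software refuses a diagnostic session unless the repair tool presents a credential that only the manufacturer’s secret key can produce. It is a deliberately simplified, hypothetical scheme, not a description of any vendor’s actual mechanism.

    import hashlib
    import hmac

    # In a real TPM the key would live in protected hardware; this is a sketch.
    MANUFACTURER_KEY = b"baked-into-the-firmware-at-the-factory"

    def issue_credential(technician_id: str) -> str:
        """Run by the manufacturer: signs an approved technician's ID."""
        return hmac.new(MANUFACTURER_KEY, technician_id.encode(),
                        hashlib.sha256).hexdigest()

    def open_diagnostic_session(technician_id: str, credential: str) -> bool:
        """Runs on the device: only manufacturer-issued credentials pass."""
        expected = hmac.new(MANUFACTURER_KEY, technician_id.encode(),
                            hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, credential)

    approved = issue_credential("dealer-042")
    print(open_diagnostic_session("dealer-042", approved))        # True
    print(open_diagnostic_session("independent-repair", "guess")) # False

Because only the manufacturer can issue a passing credential, it alone decides who may open the hood, whatever the owner of the physical machine might prefer.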

New Policy for a New World

Privacy, security, and property rights in the Internet of Things can interact in complex ways. If consumers aren’t allowed to control the objects they buy, product makers can monitor their customers more effectively. Ongoing control over products’ software allows product sellers to push out security updates, but it can also prevent end users from taking matters into their own hands when manufacturers fail to act. Ultimately, all these issues lead to a key question: who should control the objects in your home, the devices you wear on your body, and the vehicles that transport you?

There are many reasons to prefer a legal regime where ultimate authority over our personal property rests with individual owners, not with manufacturers, but two reasons stand out as particularly resonant. First, consumers are in a better position to know what they need their property to do and when. Second, even when our property isn’t tied up in our economic well-being, personal property still helps us establish our identity and personal autonomy. We create our sense of self in part by constructing the space immediately around us. That process is undermined when the objects closest to us spy on us, advertise to us, or refuse to obey us in favor of their manufacturer.

Our new augmented reality has changed the world in ways that we are still trying to understand. What do privacy, security, and property mean now that objects in our own homes aren’t entirely under our control? As lawyers, we stand in a promising position as these questions are raised. With so many issues still unresolved, we each have the opportunity to steer technology law and policy in the right direction.


Christina Mulligan is associate professor of law at Brooklyn Law School, where she teaches courses on cybercrime, Internet law, intellectual property, and trusts and estates. She was recently appointed chair-elect of the Association of American Law Schools Section on Internet and Computer Law. Her scholarship addresses intellectual property, property, and the relationship between law and technology, and her research seeks to better adapt intellectual property law for the digital age. Her work has appeared in a variety of journals and law reviews, including the Georgia Law Review, SMU Law Review, and Constitutional Commentary. She earned her bachelor’s degree and J.D. from Harvard University, where she served as a production and articles editor for the Harvard Journal of Law & Technology.