Technology’s role as a medium and mediator of communication, interaction, and commerce places technologists in the often unwanted position of being asked to implement or alter social policy with executable code. Technology firms have been taken to task by policymakers for their inability to enforce rules about access to controversial information, their intentional and inadvertent capture of information about individuals’ consumption of goods and services, and their inability to readily identify individual Internet users. Legislatures, courts, and public advocates have each at times pressured technology firms to develop their products in ways that buttress specific practical outcomes.
Due to widespread copying of copyrighted material on the Net, often through peer-to-peer exchanges, copyright owners are increasingly pressuring technology firms to build digital rights management (DRM) into their systems. Hollywood has gone further, telling the U.S. Congress that firms that don’t comply voluntarily should be required to do so. Technology firms are thus caught between consumers’ expectations that they be able to rip, mix, and burn to their hearts’ content and content owners’ expectations that those same firms use DRM to help protect the content industry’s products.
Several international standard-setting activities are under way to build rights expression languages as the basis of the DRM systems clamored for by the content industries. Some copyright owners have told Congress that the technology industry isn’t cooperating. Some members of Congress have been persuaded that technology firms should be obliged to build DRM into their products; others believe legislation is needed to protect users’ rights in DRM-protected content. The Federal Communications Commission may soon issue rules requiring DRM in over-the-air high-definition television programming and devices. Meanwhile, the public and its advocates, along with many copyright scholars, voice concern that DRM—whether legally mandated or privately adopted—will lock up information in ways that thwart individuals’ and institutions’ rights to read, lend, resell, mix, and build on copyrighted works. And a growing number of technology firms are deeply concerned about the dumbing down and locking up of the desktop computer.
What are technologists to do? We hope this special section provides a starting point for considering the options.
The articles by John S. Erickson and Pamela Samuelson provide overviews of the technical and legal landscapes. Erickson explores DRM architecture and its relation to trusted computing platforms, as well as the disconnect between the security paradigm from which today’s DRM systems originate and the exception-riddled, context-laden nature of copyright law. He suggests a DRM architecture that would provide enough space for the exercise of fair use-like rights.
Samuelson covers the varied relationships between DRM and the law, explaining that DRM provides potentially far more control to copyright holders than the law provides or permits and that, in its current legal interpretation, the Digital Millennium Copyright Act (DMCA) of 1998 provides nearly unlimited protection to DRM. This special status, she writes, creates a risky environment for those who wish to circumvent DRM to exercise historically protected rights to use information. Warning that DRM, whether through technical standards or congressional mandate, threatens to further erode the public side of the copyright balance, she calls on computing professionals to defend general-purpose computing technologies and support legislative consumer-protection measures related to DRM-protected content.
Julie E. Cohen focuses on the privacy incursions enabled by DRM. From limiting what goes on in the privacy of one’s own home to exposing what occurs there to outside view, DRM poses a range of special threats to individual privacy that will potentially interfere with individual autonomy and chill intellectual inquiry. She notes the current lack of guidance as to the proper scope of privacy in the digital age, suggesting that courts have the tools to redefine privacy injuries to recognize the kinds of intrusions facilitated by DRM. Finally, she encourages the design of privacy-protecting features into DRM standards and products.
Séverine Dusollier covers the European Union’s approach to DRM. The EU Directive on Copyright and the Information Society of 2001 sorts out the policies to be implemented through DRM. It motivates copyright holders to build protections for user rights into DRM. It also directs EU member states to take measures ensuring user rights can be exercised wherever content is protected by DRM if private ordering fails to provide adequate protections. While the EU approach differs decidedly from its U.S. counterpart, Dusollier concludes it is likely to engender similar questions about the appropriate scope of private ordering versus public decision making regarding limits on information use as set by DRM. She bases this conclusion on the Directive’s lack of guidance regarding the steps required to protect users before governments are required to step in, as well as on the existence of an exemption to government obligations for content delivered on demand. She finds that, like the DMCA in the U.S., the Directive privileges private ordering over copyright policy.
Edward Felten asks us to view DRM skeptically. In both theory and practice, he argues, DRM is an unproven tool. Weighing the complexities of building fair use into DRM, he raises grave doubts about the ability of technologies to accurately accommodate even the simple cases of fair use (such as making a backup copy or a copy for exclusively in-home use). Felten concludes that fair use is beyond the capacity of current technology and is likely to remain that way.
Finally, Barbara L. Fox and Brian A. LaMacchia propose creating a legal "safe harbor" to help technologists experiment with DRM architectures and applications that factor in the public’s side of the copyright balance—without exposing themselves to claims of contributory copyright infringement. They elucidate the constraints technologists face in light of today’s legal uncertainty: even if technologists are not required to build mechanisms accommodating some aspects of fair use or first sale, do they, or the firms that voluntarily design and build in such features, face legal exposure for doing so? One can read the article as a call for a DRM mandate of sorts: a set of copyright norms currently agreed to be protected by the fair use doctrine whose technical facilitation would be categorically immune from claims of contributory copyright infringement. Fox and LaMacchia thus provide an interesting approach to creating breathing room for technologists and policy wonks alike to develop more flexible, context-dependent DRM architectures and systems.
Whose Rules?
That privately constructed rules may circumvent or conflict with societal values and public policy is well known and has manifestations that long predate the Internet and computers. The question of whose rules should govern, and of the space in which private rules can constrain or contradict democratically instituted social policies, is a long-standing one. The use of property rights, states’ rights, and other proxies for private interests has a long legacy in law and social practice. Today, while the law allows average citizens to time- and device-shift music and movies they own, and the First Amendment of the U.S. Constitution allows them to engage in parody, the medium of delivery or the device may independently limit their ability to do so.
Such default limitations arise in part because the security model underlying DRM architecture is a poor fit for modeling copyright policy. DRM architecture, which is based on binary permit/deny schemas, envisions copyright holders unilaterally setting the terms under which their products are used. Copyright law is, however, multidirectional.
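To make the mismatch concrete, the following sketch (in Python, with names invented here purely for illustration; it does not depict any actual DRM system or rights expression language) contrasts the one question a permit/deny architecture can answer with the context-laden question fair use asks of a court.

```python
# Illustrative sketch only: a hypothetical permit/deny license check,
# not any actual DRM product or standard.

from dataclasses import dataclass, field


@dataclass
class License:
    """Grants written unilaterally by the rights holder."""
    permitted_actions: set = field(default_factory=set)  # e.g. {"play"}


def drm_allows(lic: License, action: str) -> bool:
    # The only question a permit/deny architecture can ask:
    # is this action on the rights holder's list?
    return action in lic.permitted_actions


def is_fair_use(purpose: str, nature_of_work: str,
                amount_used: float, market_harm: bool) -> bool:
    # Copyright law asks a different kind of question. Fair use
    # (17 U.S.C. Section 107) weighs purpose, the nature of the work,
    # the amount used, and market effect, case by case.
    raise NotImplementedError("No boolean test exists; courts weigh "
                              "these factors in context.")


lic = License(permitted_actions={"play"})
print(drm_allows(lic, "excerpt_for_classroom_use"))  # False, even if lawful
```

The point is not that the second function is merely hard to write; the inputs it needs (purpose, market effect, the intent of the user) are simply not available to a device at the moment of enforcement.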
The U.S. Copyright Act of 1976 provides a framework allowing "rights" to flow from several sources: the owner of the object (or copyright holder), a third party (including the government), and the user. While copyright holders are given a set of exclusive rights, these rights are subject to exceptions. Moreover, while the exclusive rights themselves—to reproduce, distribute, publicly perform, publicly display, and prepare derivative works—may seem all-encompassing, they in fact leave many uses of copyrighted works unregulated. For example, copyright law leaves the private use of copyrighted materials essentially unregulated. The Act itself does not empower copyright holders to require readers, viewers, or listeners to seek authorization before engaging in private uses (such as selling a book, lending a music CD, or reading aloud to a child). Privacy—crucial to the full exploration of purchased works—is protected by the structure of the Act, as well as by the "real space norms" regarding use of copyrighted works and the constitutional protections for speech, freedom of association, and access to information.
The limitations on copyright’s exclusivity also extend to activities affecting the commercial value of a work; for example, the "first sale" doctrine allows purchasers of legal copies of works to dispose of them in any manner they choose. Copying, even for the purpose of publishing excerpts in a commercial publication, receives substantial protection under the doctrine of "fair use," an especially open-ended part of the Copyright Act. Determining whether a use is fair often requires fact-intensive litigation, but the Act’s flexibility has contributed to the ability of U.S. copyright law to accommodate new technology and protect the kinds of expression and innovation it is meant to promote.
Are today’s DRM systems poised to give rights holders too much control over the use of copyrighted works? Machine-readable rules that control access to digital works are likely to inhibit, restrict, or altogether prevent many legally authorized uses. Written by rights holders and offered on an accept/reject basis to purchasers, these rules are likely to supplant copyright law in many contexts. As a result, the balance remaining in copyright policy—reflecting the interests of many groups, including copyright holders, creators, and purchasers of that content—stands to be replaced with contracts and machine-readable, machine-enforceable "code constraints" reflecting and upholding the interest of the rights holders alone.
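As a purely hypothetical illustration of such a code constraint (the field names and the enforce routine below are invented for this sketch and are not drawn from XrML, ODRL, or any other rights expression language), consider the kind of rule a rights holder might attach to an e-book:

```python
# Hypothetical machine-readable rule, sketched as plain data.

ebook_rule = {
    "work": "urn:example:novel-1234",
    "grants": ["view"],        # anything not granted is denied
    "allow_copy": False,       # no excerpting, even for review or parody
    "allow_lend": False,       # no lending, despite first sale
    "allow_resell": False,     # no resale, despite first sale
}


def enforce(rule: dict, requested_action: str) -> bool:
    """Apply the rights holder's terms mechanically; the purchaser's only
    prior choice was to accept the rule or forgo the work."""
    if requested_action in rule["grants"]:
        return True
    return bool(rule.get(f"allow_{requested_action}", False))


for action in ("view", "copy", "lend", "resell"):
    print(action, enforce(ebook_rule, action))
# view True; copy, lend, resell all False, regardless of what copyright law
# would permit the purchaser to do.
```

Nothing in the rule, or in the device enforcing it, asks whether lending or excerpting would be lawful; the terms are set in advance and applied uniformly.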
Technologists have an opportunity to change this outcome. As writers of code, believers in the multipurpose computer, voters, and pundits, they may be best positioned to do so. Whether DRM is mandated or privately developed, its inability to accurately reflect the rights and responsibilities of copyright holders and users alike urges caution and care in its development and implementation.