A new site

For the last few years, I’ve been blogging at Berkeley Law under the title Public Interest Authorship and in a few other venues. With my time at Berkeley at an end, I’m moving my personal blogging here, along with my backlog.

Please excuse all of the broken links / dropped images / etc. in the backlog. They probably won’t be fixed, but what are you going to do?

More to come.

Quantifying termination possibilities — an experiment with HathiFiles

US copyright law has “termination of transfer” processes that allow many authors to reclaim rights after a certain period of time has elapsed. While strikingly powerful (termination rights are inalienable!), they are also relatively arcane. One can imagine termination of transfer serving an important public interest, as a means to renew public access to out-of-print or otherwise unavailable works, but the relevant authors aren’t likely to participate without support. That is, while news stories emerge from time to time about prominent entertainers seeking to reclaim their rights, it’s hard to imagine building the kind of public understanding of their timing and proper exercise that would be necessary to see them really used at scale by unrepresented parties or for works without much commercial future.

Hard, but not impossible.

The law (17 USC §§ 203, 304(c)-(d)) is hard to read and make sense of, but it’s also largely mechanical. With a little effort, parsing it can be automated—making the law more accessible to folks without lawyers. More on that to come.

But then there’s the question of timing. The termination process was not designed to be easy. It’s only available within a five-year window, and can only be exercised if notice is provided significantly in advance of the actual termination. For anyone not responsible for writing a catalog of 30-year-old hit songs, paying enough attention to get the timing right isn’t terribly likely.

But maybe this too is something where tools can help.

The precise timing of a termination right depends on enough inputs (e.g., when was the transfer? what year was the work published? when was the work copyrighted?) that getting accurate estimates from publicly available records and data just isn’t possible. But the timing is just stable enough that, provided we know the publication year, we can at least guess that we’re in the ballpark. So if all we want to do is generate a list of titles that are close enough to the right time to warrant investigation from their authors? Well, that we can do.
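To make that ballpark concrete, here’s a minimal sketch of the guesswork in Python. Be warned that it treats the publication year as a stand-in for both the grant date (for § 203) and the date copyright was secured (for § 304), which the statute does not actually permit; the function names and simplifications are mine, not part of any official tool.

```python
def estimate_windows(pub_year):
    """Ballpark termination windows keyed to a publication year.

    Assumption: publication year proxies for both the grant date
    (s. 203) and the date copyright was secured (s. 304). The real
    tests turn on the actual grant and copyright dates.
    """
    windows = {}
    if pub_year >= 1978:
        # 17 USC s. 203: five-year window opening 35 years after the grant.
        start = pub_year + 35
        windows["203"] = (start, start + 5)
    else:
        # 17 USC s. 304(c): five-year window opening 56 years after
        # copyright was secured.
        start = pub_year + 56
        windows["304(c)"] = (start, start + 5)
        # s. 304(d): a second chance at year 75 for certain works whose
        # s. 304(c) window expired unexercised.
        windows["304(d)"] = (pub_year + 75, pub_year + 80)
    return windows

def notice_years(window):
    """Notice must be served 2 to 10 years before the chosen effective date."""
    start, end = window
    return (start - 10, end - 2)
```

So a book whose rights were transferred around its 1980 publication would have an estimated § 203 window of 2015–2020, with notice servable from roughly 2005 through 2018.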

Happily, HathiTrust makes a good chunk of the metadata it’s gathered for its corpus publicly available, giving us access to records for some 14 million volumes, with some measure of information on their current rights status. Not a bad start.

Now, you would expect the scale to be relatively large. Would-be terminators have to provide notice between 2 and 10 years in advance of a time falling within a 5-year window, meaning that at any given time there are as many as fourteen calendar years for which termination notices can be served under either of the Copyright Act’s two termination provisions. Which means there are as many as 28 calendar years from which a terminable book can hail at any given moment.

Given that there are some fourteen million volumes in the HathiTrust corpus, the number of included titles that might be presently actionable isn’t likely to be tiny. Restricting titles to those dating between 1923 and the present pretty well confirms this intuition:

I filtered that slice of the dataset through a few other restrictions to whittle down today’s set of candidate titles. Lopping off non-book items and those already available under Creative Commons licenses, I grabbed the list of titles that might be eligible under either the 203 or 304 tests: 2,534,535 entries total.
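For the curious, here’s roughly how that kind of filter might look over the tab-separated HathiFiles using pandas. The column names here (rights_date_used, bib_fmt, rights) follow my reading of the HathiFiles documentation and should be treated as assumptions; this is a sketch, not the exact script behind the numbers above.

```python
import pandas as pd

# Columns assumed present in a headered TSV export of the HathiFiles.
COLS = ["htid", "rights", "title", "rights_date_used", "bib_fmt"]

def candidate_titles(path, year_min=1923, year_max=2016):
    """Return rows plausibly in (or near) a termination window:
    books, published in the target year range, not already CC-licensed."""
    df = pd.read_csv(path, sep="\t", usecols=COLS, dtype=str)
    df["year"] = pd.to_numeric(df["rights_date_used"], errors="coerce")
    mask = (
        df["year"].between(year_min, year_max)           # plausible dates
        & (df["bib_fmt"] == "BK")                        # books only
        & ~df["rights"].str.startswith("cc", na=False)   # drop CC-licensed
    )
    return df[mask]
```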

Now, unfortunately the HathiFiles dataset isn’t entirely clean, and there are all sorts of reasons that publication date might fail as a proxy for the relevant determination tests. But, if your main goal is to build an awareness campaign around the availability of termination targeted toward academics, well, it’s not a bad start.

And if 2.5 million is too large a number to be workable, well, HathiTrust has another qualification that can prove helpful: they’ve already determined that certain books are out of print. Looking at just those titles leaves us with just 590 presently actionable titles (take a look at them here) that likely have an availability problem.

It’s still early days on this project; much more to come.

Recent Tabs

I’ve been dormant here for a long time, but meanwhile there’s been no shortage of important news and events—so much so that closing out my ever-expanding set of browser tabs is looking ever less likely. Here are a few items of note curated from that collection:

Chris Kelty, “Open Access, Piracy, and Scholarly Publication”. I was disappointed to be unable to attend this recent talk at Davis, but the good news is that it was recorded. Chris was the driving force behind the UC open access policy, author of a must-read OA monograph on free software, and a scholar of “open” communities. As such, it’s no surprise that he has an interesting thing or two to say about scholarly publishing. The talk really pulls no punches, so if you want to see both Elsevier and institutional repositories taken to task (and harbor hope for something better than either), it’s worth watching.

99% Invisible, “The Giftschrank.” 99% Invisible dives into the very opposite of open access in exploring the German history of locking dangerous texts in “poison cabinets” or giftschranks. Not only does it raise all sorts of questions about the regulation of information (What do we do with dangerous information? Who decides that it is in fact dangerous? What can we learn from what past societies found dangerous?), but it has an interesting copyright nexus with Mein Kampf and all the recent news about that notorious book falling into the public domain in Germany (the Bavarian government, which held the copyright, had used that control to refuse publication of the text—copyright as giftschrank).

Public comments are out from the Copyright Office’s § 1201 study. And public roundtables are being scheduled for May in both DC and San Francisco.

Michael Eisen, “On Pastrami and the Business of PLOS.” OA business models provoke no shortage of ethical and pragmatic concerns. Michael Eisen (the PLOS cofounder/OA advocate/Berkeley biologist) takes a hard and candid look at some of the questions that have been raised about PLOS, and it’s a worthwhile read.

The 20-Year Wait

Here’s a research question: what effect does a 20-year wait for copyright terms to naturally expire have on public perceptions of the public domain?

We’ve run this experiment before, but we’ll soon have the chance to do it again. The Trans-Pacific Partnership, or “TPP,” a landmark multilateral trade agreement, has been negotiated and is awaiting final approval by the 12 participating states. While the agreement would impose a number of problematic requirements on the copyright laws of the participants, the most troubling is one that will largely go unnoticed here in the United States: setting “life plus 70” as the minimum copyright term.

Of course, we already did that here with the Copyright Term Extension Act of 1998 (aka, the “Sonny Bono Copyright Term Extension Act,” or more pointedly, the “Mickey Mouse Protection Act”). There’s been plenty of ink spilled on how life + 70 is too long a term, and it’s disappointing that we’re enacting a sizable obstacle that could prejudice our own reform efforts.

But the worst outcome is what this means for the participating countries that currently observe the international standard life + 50-year term. Sure, there are costs to the public, and these countries will inevitably see more decades go “missing” as they fall into copyright’s black hole. And the orphan works problem will be compounded, and deficiencies in recording systems will be more problematic, and all the things we know to expect will come to pass.

But when term extension is retroactive, a strange thing happens: we don’t see new works entering the public domain. In Canada, there’s been a flurry of excitement over the possibilities of a public domain James Bond. Movie remakes, new stories—people are geared up to explore a cultural touchstone in ways they just couldn’t before. This might be the last time this happens in Canada for a while.

Without the regular celebration of Public Domain Day, it’s easy to see how we might normalize the expectation that copyright doesn’t end. Conversely, America’s ongoing public domain hiatus has galvanized movements around copyright’s public role that otherwise might not have happened. I don’t know which reaction has been more powerful, but it’s worrying to think we might come to expect the missing decades and then forget them.

Users’ Rights and the Rhetoric of Reform

Exceptions and limitations? Or users’ rights? It seems like there’s been a lot of talk of late about how the framing of this aspect of copyright law affects prospects for reform.

I’m sympathetic to criticisms of the “exceptions and limitations” framing, and think there’s good evidence that it adds rhetorical clout to attempts to enlarge copyright’s scope. It’s the rule, after all, and not the exception! But I have to be honest: I don’t like the users’ rights alternative much either. I had the opportunity to briefly raise that objection at the CC Global Summit in Seoul, but now that I’m thinking about it I want to elaborate on what I think users’ rights language fails to capture.

First and foremost, I’m not sold on the phrase’s power to speak to creators. Yes, those of us who are deeply embedded in these issues get it: creators are users too, so users’ rights are authors’ rights. But that’s not exactly an intuitive reading, and it’s not helped by the fact that “user” seems to act, intentionally or not, in contradistinction to “author.”

Beyond that, authors’ interest in exceptions and limitations, users’ rights, whatever we’re calling them, strikes me as much broader than just their interests as users. Yes, improved access to sources, being able to quote and criticize, etc. are all ways in which E&Ls/URs relate to authors’ use of third-party works (or, often, ways in which authors build upon their own works after having transferred rights).

But authors also have a stake in E&L/URs as authors. Absolute copyright would frustrate many authors’ interests in having their works preserved, in having their works reach larger audiences through lending libraries, in having their works reach underserved communities such as the print disabled. Sure, that’s a relationship with “authors” and with “users,” but framing a mutually beneficial relationship as a right held by one party seems to downplay its laudable symbiotic aspects. And then there are limits on copyright’s scope that seem to primarily benefit creators without there exactly being users. In this regard scènes à faire and the idea/expression distinction come to mind—these are “limits,” but authors who operate in their shadow aren’t, almost by definition, users of copyrighted works.

I have aesthetic qualms as well. CC’s Ryan Merkley pointed out that it seems like “users” language only comes up when talking about drugs and copyright. There must be a term that feels more empowering than “user”! It comes with its own baggage too, and I can’t exactly endorse it, but even just “public” feels better to me.

I really haven’t had the time to think of alternative terminology, so this is probably not the world’s most productive nitpicking. But there’s time for that still!

TPP appears to move forward with term extension intact

Whatever your feelings about free trade, modern trade agreements do much more than remove tariffs. They regulate across all areas of an economy, under the argument that “leveling the playing field” requires more substantive intervention than tax reform.

While many aspects of what we believe to be in the Trans-Pacific Partnership Agreement (or “TPP”) are controversial (here are some environmentalists’ complaints), trade agreement interventions have had an unfortunate tendency to be misguided and contrary to public policy when it comes to issues in intellectual property. For instance, I hope you’ve heard about how the TPP might affect access to medicine, a failing with a profound human cost that anyone other than perhaps Martin Shkreli would understand.

In copyright the controversies can be more esoteric, but that doesn’t mean that the stakes aren’t high. Copyright is how we choose to regulate creative and knowledge economies—getting it wrong can compromise our ability to learn, teach, and create. With regard to copyright, the largest change on the table is copyright term extension, an issue that doesn’t stand to affect us here in the United States because we already extended our terms. Back in 1998, we retroactively added 20 years to our copyright terms, protecting works for the life of the author plus an additional 70 years. You’ve heard of the “Mickey Mouse” bill, the one that seemed suspiciously timed to avert the expiration of the rights to Mickey’s earliest copyrighted appearances? This is that one.

The international standard copyright term—the life of the author plus fifty years—is already long, but it was inserted into an earlier, near-global trade agreement and now is widely considered almost unalterable. There’s a lesson there.

There is wide consensus that this kind of copyright term extension is just out-and-out bad policy. Here in the United States, we don’t do a lot of patting ourselves on the back for extending terms. It hasn’t been a win for anyone outside of the handful of entities that control the rights to the few works that continue to prove widely marketable a century after their creation. Even die-hard believers in strong and enduring copyright like those at the Authors Guild concede that these terms last “essentially forever.” That analysis has the support of a prominent group of economists who looked at U.S. term extension and found grants of this length to have a present value nearly equivalent to a perpetual grant.

So why should residents of TPP signatories be wary of joining the United States in their long terms? A few reasons come to mind:

We know that very few works remain marketable through the term of copyright. It remains a relatively rare feat for entertainment, culture, or scholarship to live out an entire copyright term as a commercial work. One of the best studies done on the subject, conducted by the U.S. Copyright Office back when we still required rightsholders to register and renew their copyrights, found that of those few authors who could be bothered to register their copyrights, only a very small fraction found it worth renewing after 28 years. Recent empirical work by Professor Paul J. Heald demonstrates this effect by showing how in-copyright books appear to rapidly go out of print.

Long terms are a disservice to those works that don’t remain commercially viable. Our copyright systems don’t exist only to benefit the tiny fraction of works that live out their terms as commercial successes. Every original work of authorship enjoys copyright, from cocktail napkin scribbles to the blog post I’m writing now. Authors of works that never made money, or long since stopped making money, have little to gain from longer terms, and a great deal to lose. Most saliently, long terms mean information regarding ownership will get lost, orphaning works and preventing their further use. Term extensions would be much more palatable if they only targeted commercially available works for which rightsholders are actually motivated to retain ownership.

Retroactive term extension is particularly silly. Many countries see copyright as a bargain: authors receive exclusive rights to their work to incentivize creative endeavors. Copyright owners who seek to enlarge terms after agreeing to the original bargain aren’t retroactively incentivized to create—they’re receiving additional rewards for work they’ve already done. And even if you’re all for rewarding authors (sounds reasonable, right?), remember that term extension is a tool that rewards those who need it least, affecting only those very few rightsholders behind works enjoying enduring commercial success.

Term extension robs the public domain. When copyright terms end, protected works enter the public domain, free for all to use. The public domain is of tremendous importance. It facilitates education and the spread of knowledge by allowing free and low-cost access to classics. It facilitates creativity by providing new authors with raw materials to use however they wish, enabling everything from scholarship to creative reimaginings. It’s even good business: Disney built an empire using film adaptations of public domain stories, Penguin continues to do well by printing classics, and presses like Melville House have creatively and importantly made a business out of the publication of modern public domain texts like government reports. Sadly, however, in the United States, we haven’t been able to celebrate Public Domain Day since 1998.

Extending copyright terms isn’t a condition for doing trade, and it’s not good policy. It’s a giveaway to the minority voices with outsized representation in the secret processes behind the agreement. It’s worth standing up against this kind of copyright term extension and, going forward, against the kinds of closed-door negotiations that have enabled it to creep into the TPP.

The Authors Guild’s Member Survey Findings Are Advocacy, Not Data

The Authors Guild just recently released the “key findings” from its 2015 Member Survey, in an infographic-y ten-page report.

The thrust of the document is that times are bad for writers: income is down, fewer authors are supporting themselves on writing alone, and marketing duties are becoming more time-consuming. Their policy takeaway? “[C]opyright law and policy need to be tailored to put authors’ concerns at the forefront.”

Though the report doubtless reflects the underlying survey accurately, these aren’t results that should inform or drive any kind of policy—without transparency in methodology or data, the figures presented are a black box. How many were surveyed? How many responded? What’s being left out? Since many of the report’s conclusions were drawn in comparison to a 2009 report, we’ll need the same information from the prior survey.

Now, this information is out there somewhere (I just haven’t been able to track it down). Perhaps the reason things are being released in dribbles is that, taken in context and with the full results, the picture isn’t nearly as damning as the one the “key findings” tries to paint. Here’s Nate Hoffelder at The Digital Reader, full report in hand:

Remember, 89% of the survey group were older than 50 years of age. Half of that 89% was over 65 (629 out of 1,406), with the next largest concentration in the 55 to 64 bracket (425 out of 1,406).

This means that the age groups which were disproportionately over-represented in the survey were also the one that earned the least. . . .

And while we’re on the topic of disproportionate representation throwing off the results, a quarter of the survey group had a JD, MD, or PhD. Around 11% had an MFA, and another 25% had a graduate degree. (Do you suppose that may have biased the results just a little?)

There are all sorts of small details that change the entire picture, but perhaps the most interesting detail . . . was the fact that the 11% of respondents who don’t remember the time of the dinosaurs are earning a heck of a lot of money. Furthermore, the data shows that the younger age groups are earning far more in 2014 than they were in 2009, while the older age groups tended to earn less.

Context, as it turns out, is everything.

Full disclosure: I serve as the Executive Director of the Authors Alliance, another group that does author advocacy (albeit, expressly in the public interest).

The dancing baby puts automated takedowns back in the box?

The big news of the day in copyright land is that the Ninth Circuit decided a meaningful appeal in the “dancing baby” case, Lenz v. UMG. You probably know the one, but if not, the basic background is that Universal Music Group had this adorable dancing baby video taken down because of the badly distorted Prince song playing in the background. The baby’s mother, Stephanie Lenz, lawyered up and tried to take advantage of the little-used and presumably-toothless provisions of § 512(f) to make UMG pay up for abusing the notice and takedown process.

The big takeaways that have many in fair use’s posse clapping are that (1) the opinion resoundingly rejects arguments that fair use is a mere “affirmative defense” rather than an affirmative right, and (2) the opinion injects a little bit more of § 107 (fair use) into § 512 (notice and takedown) by requiring would-be takedown notice senders to consider fair use or risk some (probably minimal) level of liability.

With all legal opinions the devil is in the details, but—on my initial read—this is one to celebrate. There’s lots going on, but the court is trying to strike a delicate balance. Is there a workable way for the nuance of the fair use analysis to mesh with the essentially automatic (if not automated) notice and takedown process? It would be easy to decide this case in a way that favors one or the other without taking up the challenge to make them compatible. My initial impression is that the Lenz opinion does as much as can reasonably be expected to make it all work. Don’t get me wrong: there’s much to recommend the dissent’s points, but my general feeling is that it would be hard to do better.

So let’s get to what I want to talk about: Mike Masnick’s concerns about the opinion’s implications for automated notice and takedown.

In many ways, automated notice and takedown is a scourge. This is the process whereby takedown notices are robosigned to eliminate anything with even the faintest whiff of copyright infringement. It’s how rights holders manage to take down their own postings or legal content that happens to share a name with a major motion picture, to say nothing of countless fair uses. A powerful legal cudgel operated by a mindless robot: what could go wrong?

But, in an important sense, these measures are understandable. From a rights holder perspective, the DMCA is useless if it can’t scale to internet-sized proportions, and it can’t do that without some measure of automation.

So what’s the answer? Well, it probably looks something like what the Court (tentatively and nonbindingly) proposes. Here’s the language (with apparent assistance from the brief from the Organization for Transformative Works and the International Documentary Association):

We note, without passing judgment, that the implementation of computer algorithms appears to be a valid and good faith middle ground for processing a plethora of content while still meeting the DMCA’s requirements to somehow consider fair use. For example, consideration of fair use may be sufficient if copyright holders utilize computer programs that automatically identify for takedown notifications content where: “(1) the video track matches the video track of a copyrighted work submitted by a content owner; (2) the audio track matches the audio track of that same copyrighted work; and (3) nearly the entirety . . . is comprised of a single copyrighted work.”

Copyright holders could then employ individuals . . . to review the minimal remaining content a computer program does not cull.

(citations omitted). Essentially, the idea is to automate that which can reasonably be automated. Literal reproductions of entire works, without additional user-provided context, face the steepest climb in the fair use analysis. And that part of the analysis could, conceivably, be more readily outsourced to machines.
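As a sketch, the court’s suggested three-prong filter reduces to something like the following. The 0.95 threshold is my own stand-in for “nearly the entirety,” which the opinion leaves unquantified, and the function name is of course hypothetical.

```python
def flag_for_takedown(video_match, audio_match, matched_fraction,
                      threshold=0.95):
    """Auto-flag an upload only when, per the Lenz court's suggestion:
    (1) its video track matches a copyrighted work,
    (2) its audio track matches the same work, and
    (3) nearly the entirety of the upload is that single work.
    Everything else falls through to human review."""
    return video_match and audio_match and matched_fraction >= threshold
```

On this logic, the dancing baby video fails prong (1) outright (there is no matching video track), so it would never have been auto-flagged in the first place.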

Yes, it’s doubtful machines are in a position to handle the more difficult prongs of the fair use test. And, without that context, it’s a certainty that machines will get things wrong. But I’d rather live in the world where automated notice and takedown were restricted only to those instances where it might reasonably be capable of at least occasionally achieving the right result, rather than what we seem to have now, where it’s more or less free to run amok. It might not satisfy the TechDirt set, but that interpretation of today’s decision would serve to put automated notice and takedown at least partially back in the box.

Inequality and the Rise of the Antifree

In the aftermath of the Great Recession and Occupy Wall Street, it’s hard to ignore the role that inequality plays in the creative ecosystem. As many continue to bemoan the pitiful state of remuneration for creative pursuits (documented, for instance, in the admittedly limited ALCS author earnings survey), should we be asking whether there’s enough money in the enterprise, or should we be more concerned about where the money already in the system is going? Put another way, are the current problems with author remuneration to be blamed on the total size of the market, or on how and where existing returns are distributed?

I don’t mean to create a false dichotomy—obviously, one could easily answer “both!”—but my feeling is that taking the time to apportion responsibility has a great deal of relevance to what our policy prescriptions should be (assuming, of course, that author remuneration is your primary policy concern). My tentative hypothesis is that, by assuming the problem of author remuneration is primarily one of total market size, too many well-intentioned individuals turn to copyright-based solutions (extending term lengths, “notice and staydown,” etc., etc.) for what, at core, has not traditionally been a copyright problem.

Quite some time ago now, the editors at n+1 did a great job of capturing the spirit of current backlash against the current state of affairs in “The Free and the Antifree.” The authors trace the rise of “the free”—that is, ostensibly, the free culture movement—and the subsequent appearance of its antithesis, the “antifree.” In this dialectic, the free culture movement brought about crisis by undermining writers’ abilities to get paid. n+1 is chiming in as a voice of the “antifree,” what they’re calling those who want to see culture-making be a paid activity. They write:

In the argument between the free and the antifree, we’re with the antifree. Across a whole range of issues, a simple defense of intellectual property is right now a rebuke to the corporations, not a sop to them. “Show me the money” is a necessary slogan at a time when giant firms leverage a million retirement accounts for a split-second gain in the ominously named dark pools of the financial world.

(emphasis added.) This argument elides the important distinction between the existence/strength of intellectual property rights and the mechanics of being paid.

For better or for worse, the copyright approach to author remuneration is primarily laissez-faire.[1] The right to exclude others from your work makes it possible for you to reap the returns the work receives on the market. And so it turns out that, if the prospect of unrewarded labor is what concerns you, copyright is a particularly perverse system to which to turn. Copyright never guaranteed a wage, it merely provided a point of entry into a market.

And exclusivity (read: “copyright”) does not create demand, and markets aren’t just. Commercial success is a poor proxy for impact, significance, or quality. If we were to reckon success by dollars, we’d find that most cultural output is a loss, not worth the energy expended on it, and this would be true regardless of the porousness of the prevailing copyright regime. By one 2007 estimate, seven out of eight traditionally published books are losers.

And of those works that become winners, it’s hard to divine any particular sense of deservedness. Putting aside his own copyright issues, Robin Thicke became a best-selling songwriter—moving more than 15 million copies of his single “Blurred Lines” and topping the charts in more than a dozen countries—after having the good fortune to be high in a recording studio with Pharrell Williams.

One thing we know for sure is that, whatever the state of author remuneration generally, today’s winners are winning bigger than ever before. James Patterson (with the help of his team), apparently, accounts for one out of every seventeen hardcover books sold in the United States, and Forbes has his annual earnings regularly approaching $100 million. If you add in the sales of other megahits, there isn’t much of a market left for those titles that aren’t blockbusters.

There are plenty of reasons why we would expect to see extraordinary successes in the present market for cultural goods. First and foremost, that’s increasingly how businesses structure their activity—just ask Anita Elberse, who makes a compelling case for why the blockbuster model is just smart business.

Beyond that, a globalized cultural economy has greatly extended the horizon for successes. Hollywood now makes some 70% of its revenue from international markets. And mass culture makes for big returns: low (and still dropping) marginal costs of production mean that, once a firm’s initial investment in production is paid off, an enviable amount of any given sale manifests as profit. The larger the potential market, the greater the capacity of the firm to make the most of these tremendous margins.

Even Taylor Swift, who was again in the headlines for shaming Apple into paying artists during the free trial period of its new music service, knows she doesn’t need help, writing that, “This is not about me. Thankfully I am on my fifth album and can support myself, my band, crew, and entire management team by playing live shows. This is about the new artist or band that has just released their first single and will not be paid for its success.” Sadly, she’s among the smallest handful of the one percent of artists for whom three months of streaming royalties from a single service would actually mean real money—while she’s right that Apple should have been paying those royalties, her hypothetical “new artist” would have to be extraordinarily lucky to get a sandwich out of the added income.

Looking back on the markets of yesteryear, it can seem like our system of cultural production superficially resembled something that seemed fair enough to labor. When the system worked best, remuneration-wise, it essentially did so using intermediaries as venture capitalists. Publishers, labels, and studios pooled risky investments in works of authorship in order to profit from the blockbuster success of just a few titles. Chances were always high that your book/album/film would be a commercial dud, but you could still be bankrolled on the substantial earnings of one of your peers.

Now, this doesn’t mean that 20th century cultural production was utopian. Intermediaries wouldn’t invest in everything under the sun and there’s no doubt that, by serving as gatekeepers, they snuffed out the worthy hopes of many talented, hardworking creators. And while it’s possible that the firms involved have since gotten more hard-nosed, potential commercial prospects have always been the most important determinant of up-front remuneration. Such is business.

Perhaps the lesson here is that, absent the real support of institutional intermediaries, markets for copyrighted goods don’t provide the financial foundation most creators need to ply their craft. At least, they didn’t seem to in the pre-digital age, and they most certainly don’t now. As authorship becomes increasingly disintermediated, creators are made to hitch their wagons to their own stars, which, as we’ve seen, most often plummet straight back to earth. This isn’t just happening with creative professions, either: in the post-recession world, making ends meet without institutional support is the new normal. Welcome to the Gig Economy.

This last point raises an important question: is what’s happening to authors terribly different from what’s happening to the rest of the world? Looking at book publishing, the story sure seems awfully familiar. Industry employment is down and production rests on the backs of contractors from the precariat, while industry profits are up and the 1% are realizing unprecedented returns. To a millennial like me, that sounds like business as usual regardless of the industry. No wonder our cultural economy is plagued by inequality and financial instability when the rest of our economy is as well.

All of which is a convoluted and overly wordy way of saying that most proposals for expanded/strengthened copyright[2] are unlikely to do much to remedy the reality of author remuneration for most people. We should know better by now than to think that a rising tide lifts all boats—peripheral expansions of copyright might increase the bottom lines of the lottery winners already astride the world, but they’re hardly likely to make a difference for those authors whose plights are most often invoked in debates about intellectual property. For them, we need support that isn’t completely tethered to the market, institutions that have a mission beyond profit, and collaboration between authors. We need a system that resists providing outsized rewards to a select few in order to better support the creative labors of many. That might be a tall order, but who ever said change was easy?

[1] I think it’s fair to say that the core of the regime envisioned by U.S. copyright law is market-driven, but certainly not all facets of the law are so easily explained—Section 203 comes to mind. On this score, look for forthcoming work from Berkeley Law’s current Microsoft Fellow Kevin Hickey that promises to unpack some of copyright’s more paternalistic turns.

[2] Now, not all proposals here are created equal. For instance, many would like to see terrestrial radio pay royalties to owners of sound recording rights the way it presently pays owners of music composition rights. In many respects the present, bewildering situation here is an historical accident, and it seems fairly straightforward to most people that radio play is something for which recording artists deserve compensation. No argument here.

Getting right into the Thicke of it

The Thicke case has me thinking about one of my favorite/least favorite parts of copyright law: music copyright. There’s been much made of the fact that the Gaye copyright was limited to the sheet music. Now, I think that holding was probably right, resting as it does on the 1909 Act. But the way that music compositions and sound recordings are treated in copyright law has troubled me for some time. All this kerfuffle over the Thicke trial led me to revisit a paper I started some time ago but never managed to finish, in which I argue against the dual-copyright approach to music.

The rest of this post is just an extended excerpt, so read on only if you’re really into the more tortured details of music copyright.