Content aggregators should pay attention to a recent settlement between the New York Times and GateHouse Media. According to Online Media Daily, the New York Times agreed to remove headlines and first sentences of GateHouse articles referenced by the New York Times on one of its content aggregator sites, Boston.com. Is this case part of a trend?
Here's the legal issue in a nutshell: Just about everybody agrees that people who create original content should be able to protect their work. They are the copyright holders. Also, just about everyone agrees that the free flow of information is necessary for a knowledgeable and engaged society. Thus, information needs to spread easily. It's not hard to see that there is a natural tension between these two premises. The law attempts to resolve the conflict using the doctrine of "Fair Use." Under this doctrine, it's okay to use other people's content in limited ways and at limited times. Whether you're stealing content or just making 'fair use' of someone else's content is often not an easy call. Reasonable minds can and do disagree on this topic.
Was the New York Times stealing GateHouse's content by republishing their headlines and the first sentences of their articles? Or was it making 'fair use' of the content by using limited parts (only the headline and first sentence) in a limited way (a link from its news site)? It's an important issue because many, many online media properties link to articles on other sites while referencing the article's title and first sentence. Content aggregation is a huge business! And it's often very useful for the public. What will happen to content aggregation as a business model if aggregators have to create unique headlines and 'blurbs' for each article they want to link to?
Because this case (and a couple of others like it) was settled privately, we don't know how a judge would have ruled on the issues. For my part, I'm inclined to think that the limited use described here falls under the Fair Use doctrine. Maybe the New York Times settled because it was afraid of setting a bad precedent that would shut down this common content-sharing strategy altogether? Perhaps it was safer to enter into a licensing agreement with this one company than to risk losing a business model...
The real kicker is what a bizarre business move this appears to be for GateHouse. They've made it more difficult for the New York Times to link to them. Woo-hoo. Congratulations, GateHouse--less traffic!
Best Regards,
Sarah
P.S. Big hat tip to Michael Martinez for bringing this issue to my attention. :)
I'm glad someone out there is paying attention to the legal aspects of what we do. I think we need more of these kinds of posts.
What I find amusing is that GateHouse invokes the argument that "Boston.com provided links that sent readers directly to 'Wicked Local' stories. That meant readers bypassed ads posted on GateHouse pages and could be confused as to the source of the original reporting."

Now, WickedLocal actually displays the SAME banner and skyscraper ads on ALL pages; so as long as readers hit any page on wickedlocal.com, they don't bypass anything.
As for getting confused regarding the source of the original reporting:
I don't know, this story is pretty bizarre.
I agree with @qeorge - if you don't want sites to SYNDICATE your content (as in, verbatim copying of a competitor's content), get rid of the RSS feed. An RSS feed is by nature exactly what the name stands for: Really Simple Syndication. If you have one, you're sending the message: please syndicate my content.
Thank you for this and your other legal posts... I am always looking for ways to cover my butt from the crybabies and jerks out there!
If you've got an RSS feed, I don't see how you can be upset about syndication. I hope this one would have fallen on the side of fair use if it had gone to verdict.
Thanks for the write up, I enjoy these legal posts.
Side note: Your link to Martinez goes to javascript:void(0);
Thanks--link fixed!
It's an ongoing concern. Partly because I'm a social media analyst, I'm getting a degree in economic law in the EU, specialising in content protection and data privacy (the EU is all about data privacy, whereas the US is all about copyright/content protection). Based on my research, I think the best and most effective approach is for the networks and communities that share content to identify, through reading and knowing the network, those who use the content of others without offering any credit. My favorite story so far is here: https://www.socialmediaexplorer.com/2009/04/15/fan-communities-online-content-and-the-definition-of-plagiarism/

It's a good example of the public protecting its own, and of a private company (YouTube) responding to the concerns of the "Lost" public.
I would love it if someone could shed more light on this and offer more insight.
I have always been afraid of content being stolen from my website.
I've always wondered how Google gets away with this. They take a cache of someone's site and store nearly the whole thing. Surely that's a breach of copyright?
I guess most people don't care, because who wouldn't want to be on Google? But surely some no-win-no-fee superstar has given it a go?
I can't remember everything, but Agence France Presse had a similar claim against Google News if I'm not mistaken. They actually had to post some silly notice on their homepage as punishment (I want to say it was on the Belgian Google).
I'm pretty sure there was a different case against Google for indexing a site or part of a site that someone didn't want indexed.
Since the nature of search is opt-out rather than opt-in, I thought that was interesting. Though from a common-sense perspective, I put all of these people in the village idiot category. If you're going to bother putting content online, it's not that hard to password-protect it, write a robots.txt file, or ban search engines completely if you really want it out of the indexes.
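For anyone unfamiliar with the opt-out mechanism mentioned above, here's a minimal robots.txt sketch (placed at the site root, e.g. example.com/robots.txt) that bans all compliant crawlers from the whole site; the path and directory names are just illustrations:

```
# Block all compliant crawlers from the entire site
User-agent: *
Disallow: /

# Or, to block only a specific directory, e.g. premium content:
# User-agent: *
# Disallow: /premium/
```

Note that robots.txt is advisory--well-behaved crawlers like Googlebot honor it, but it's not access control, which is why password protection is the stronger option for content you truly want private.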
I really hope that in one of these cases "failure to take 1 day to learn how the interweb works" is a countersuit.
> I really hope that in one of these cases "failure to take 1 day to learn how the interweb works" is a countersuit.
Alas, in my experience, common-sense and the law don't always dine at the same table.
White Rabbit, the judge in the Field v. Google dispute decided that Google's cache was "fair use" and not a breach of copyright. While that decision isn't necessarily intended as an authoritative "all-in-one" answer to what you describe above, Google and other search engines can rely on it if someone sues them for something similar.
David
Many involved in online journalism have unfortunately assumed that the case was about some broadly applicable principles, rather than the actual facts: what was going on with competing community news and information websites for Newton, Needham, and Waltham, Massachusetts, and whether the verbatim copying of a competitor's content is fair use.
I would encourage you to read Dan Kennedy's piece for the Guardian. He links to the report of Professor Doug Lichtman, submitted on GateHouse's behalf and part of the public record of the case, for a better understanding of what the case was really about, and what it was not.
Consistent with the point of your post, GateHouse encourages online journalists and bloggers to link to our sites within the boundaries of fair use and our Creative Commons license. And our agreement with the New York Times allows linking as well. That has never changed.
Greg Reibman
Editor in Chief, Metro Unit
GateHouse Media New England
Perhaps I'm not reading these documents correctly, but it seems to me that the SEOMoz article is spot-on in summarising the case. GateHouse got huffy-puffy over the NYT scraping the lead paragraphs and then linking to the rest of the article on WickedLocal.
I have to agree with @qeorge and @claye: this is called syndication. As long as you have an RSS feed, you shouldn't have a problem with people syndicating your content.