Category — Copyright Law & IP
One of the most firmly entrenched academic practices is the use of textbooks as the fundamental drivers of curricula. These ultra-expensive items represent one of the largest costs for public school systems as well as for college students.
As the digital age continues to work its way into the stuffy world of academics, there are clear indications that textbooks are gradually being phased out in many areas of the country. The sheer volume of resources available on the net is leading many school districts to create and share their own materials.
Macmillan, considered one of the largest players in that old, conservative world, apparently has now also seen the “handwriting on the wall.” The company recently announced it will offer academics an entirely new format: DynamicBooks.
The Wikipedia of Textbooks
The new, digital textbook format introduced by Macmillan has been dubbed by the New York Times as a kind of “Wikipedia of textbooks.” New software will allow college-level instructors to edit digital versions of e-textbooks, enabling these professors to customize the texts for their individual courses.
In addition to having the ability to reorganize and/or delete entire chapters or sections of the text, professors will be able to upload their course syllabus as well as any other supporting materials created for the class: notes, videos, pictures and graphs. Offering significant potential cost savings (half the price of physical textbooks, according to the Times), this format will allow all course materials to be placed in a single digital location, a feature that should prove a godsend for students.
But it is yet another step that Macmillan is taking that is drawing the greatest attention. The phrase “Wikipedia of textbooks” speaks directly to that concept, the ability of professors to rewrite paragraphs and add their own equations, drawings, and illustrations.
While this step will allow most professors to do what they already do in a more efficient manner, the idea is not sitting well with traditionalists who see the intellectual property within such books as proprietary. The further blurring of copyright boundaries as professors create their own content and intermingle it with the work of the published textbook authors is an enormous issue for those who have made a living in the textbook field.
Most of the concerns center on a format that is ripe for plagiarism. But the editorial staff at Tufts Daily is calling the concept risky for other reasons.
TD expressed extreme concern that professors would have direct editorial control over the content of the textbook yet would not be required to cite sources for the changes made nor need approval from either the publisher or the authors of the textbook. In addition, TD is concerned with another of the focal points of textbook traditionalists.
Apparently a significant number of the textbooks that will be available are those currently utilized in “large survey courses in the sciences.” While all professors no doubt alter the material to some extent in their individual courses, the traditional textbook has served as a standard reference for students.
According to TD, not only were students able to turn to the textbook for greater clarity on the specific material presented, the books also provided students the essential content deemed relevant to the topic. But now, TD fears, those books could well be devoid of relevant topics or critical background material.
In addition, TD notes that professors “may change the text with biased or even false information,” or could “accidentally miswrite a definition or make an error in a formula or equation.” Any such errors would no doubt be detrimental to the students taking the course.
TD further insists students should not be the ones to face consequences for these errors or biases, and that professors “should not be allowed to edit textbook content without review by the publisher or the textbook author.” And while TD supports digital textbooks for their ease of use and accessibility, and for reducing textbook costs, the editors insisted that the benefits of allowing such edits did “not outweigh the potential problems that it could cause.”
Instead, professors should not be provided unchecked editorial control over the textbook as it ultimately “jeopardizes the reliability of course material for students.”
Macmillan Moving Forward
Despite these concerns, Macmillan plans to start selling about 100 titles through DynamicBooks. Some of the reported texts that will be available come August include: Chemical Principles: The Quest for Insight, by Peter Atkins and Loretta Jones; Discovering the Universe, by Neil F. Comins and William J. Kaufmann; and Psychology, by Daniel L. Schacter, Daniel T. Gilbert and Daniel M. Wegner.
The books will be available at college bookstores, the DynamicBooks web site and through CourseSmart. Accessing the DynamicBooks editions will be possible either directly online or by downloading the text to a laptop or iPhone.
And of the cost savings, the Times noted one concrete example. The aforementioned Psychology has a list price of $134.29 when sold in its traditional format. The version that may be altered by a professor will sell for $48.76 when accessed through the DynamicBooks concept.
The altered versions will also be available in a print-on-demand format from Macmillan. However, when students opt for that format, the cost will revert nearly to the original list price.
100% Support for New Concept
Given the costs associated with textbooks, any step taken to reduce the outlay by students or schools is a welcome one in this corner. The fact of the matter is the current knowledge explosion renders most books out-of-date within a matter of months after publication.
In addition, no text is ever a perfect match for a course and the students taking that course. Every teacher makes modifications on at least a weekly or monthly basis, supplementing and deleting whenever such a step makes sense for the students they are entrusted with.
Kudos go out to Macmillan for taking a step other publishers have held back on: the level of customization that comes with being able to edit and supplement at the sentence and paragraph level. The option also allows for those delivering course content to collaborate and share material that is known to work best with students and include that in the basic course materials.
That inherent question, should professors have the right to edit and alter materials, is essentially a non-starter anyway. The bottom line is every good instructor does just that, altering and supplementing as he or she deems appropriate.
But now colleges, and hopefully one day K-12 school districts, will be able to save hundreds of dollars even as they continue the long-standing practice of offering students an anchor text.
February 25, 2010 5 Comments
Eliminating Control – Mark Pesce on the potential of a shared and connected, open-source educational environment.
In the process of web surfing, there are times you stumble on some gems – some material so transcendent you find yourself spellbound.
Such is the case with the work of Mark Pesce at The Human Network. David Parry, assistant professor of Emergent Media and Communications at the University of Texas at Dallas, offers his assessment of Pesce’s work on his AcademHack blog:
“I find Pesce to be one of the more provocative thinkers on the internet and matters of cultural transformation. I am not sure I always agree with what he suggests, but this is also one of the reasons I find him worth reading.”
“In this series I read each piece at least twice,” states Parry, “some three times. They are that good.”
To fully grasp how education can be transformed by technology, we begin by taking a peek at Pesce’s Fluid Learning. But before we do so we turn back to our trilogy from last February, our review of the digital commons.
We noted the Committee on Economic Development’s report, Open Standards, Open Source, and Open Innovation: Harnessing the Benefits of Openness, which touts the success of the “Digital Commons” approach. The report notes the “benefits of openness” and insists that continued openness is critical for further growth.
Most importantly, the report challenges the thinking of those who view the digital world in the same manner as the physical world. And if we can begin to imagine how we might replace the current physical construct for education in this new digital age, we can perhaps finally see where a new learning model emerges.
“It’s all about control.
“What’s most interesting about the computer is how it puts paid to all of our cherished fantasies of control. The computer – or, most specifically, the global Internet connected to it – is ultimately disruptive, not just to the classroom learning experience, but to the entire rationale of the classroom, the school, the institution of learning. And if you believe this to be hyperbolic, this story will help to convince you.
“Flexibility and fluidity are the hallmark qualities of the 21st century educational institution. An analysis of the atomic features of the educational process shows that the course is a series of readings, assignments and lectures that happen in a given room on a given schedule over a specific duration. In our drive to flexibility how can we reduce the class into essential, indivisible elements? How can we capture those elements? Once captured, how can we get these elements to the students? And how can the students share elements which they’ve found in their own studies?”
Pesce offers four recommendations: capture everything, share everything, open everything, and connect everything.
Of course, recording everything creates enormous new challenges. It “means you end up with a wealth of media that must be tracked, stored, archived, referenced and so forth.”
In Pesce’s eyes capturing everything means no front-end decisions as to the worthiness of any material. Just capture and let the natural course of events determine its value.
In a move analogous to the recent open courseware available from Stanford and MIT, Pesce also notes, “While education definitely has value – teachers are paid for the work – that does not mean that resources, once captured, should be tightly restricted to authorized users only. In fact, the opposite is the case: the resources you capture should be shared as broadly as can possibly be managed.”
In making this mindset shift, Pesce explains:
“The center of this argument is simple, though subtle: the more something is shared, the more valuable it becomes. You extend your brand with every resource you share. You extend the knowledge of your institution throughout the Internet. Whatever you have – if it’s good enough – will bring people to your front door, first virtually, then physically.”
Next instead of commercializing, Pesce suggests a look at the open-source solutions.
“Rather than buying a solution,” states Pesce, “use Moodle, the open-source, Australian answer to digital courseware. Going open means that as your needs change, the software can change to meet those needs. Given the extraordinary pressures education will be under over the next few years, openness is a necessary component of flexibility.
“Openness is also about achieving a certain level of device-independence. Education happens everywhere, not just with your nose down in a book, or stuck into a computer screen.”
And Pesce means open, fully open – thus filtering must be eliminated.
“The classroom does not exist in isolation, nor can it continue to exist in opposition to the Internet. Filtering, while providing a stopgap, only leaves students painfully aware of how disconnected the classroom is from the real world. Filtering makes the classroom less flexible and less responsive. Filtering is lazy.”
As for the most transformative element, Pesce indicates it might well be the connective elements we now have available. His words mirror those of the recent Digital Youth Project survey, one that insists that social networking is fundamental to students using the computer and the internet as educational tools.
“Mind the maxim of the 21st century: connection is king. Students must be free to connect with instructors, almost at whim. This becomes difficult for instructors to manage, but it is vital. Mentorship has exploded out of the classroom and, through connectivity, entered everyday life.
“Finally, students must be free to (and encouraged to) connect with their peers,” adds Pesce. “Part of the reason we worry about lecturers being overburdened by all this connectivity is because we have yet to realize that this is a multi-lateral, multi-way affair.
“Students can instruct one another, can mentor one another, can teach one another. All of this happens already in every classroom; it’s long past time to provide the tools to accelerate this natural and effective form of education.”
The Universal Solvent
As for how it all might work, take a trip down the “what if” of universal connectivity and sharing, of opening and capturing everything.
As one school places materials online, Pesce believes an altruistic impulse will prevail, prompting others to follow.
“It’s outstanding when even one school provides a wealth of material, but as other schools provide their own material, then we get to see some of the virtues of crowdsourcing. First, you have a virtuous cycle: as more material is shared, more material will be made available to share. After the virtuous cycle gets going, it’s all about a flight to quality.”
“When you have half a dozen or a hundred lectures on calculus, which one do you choose? The one featuring the best lecturer with the best presentation skills, the best examples, and the best math jokes – of course.”
Of course, reaching that level of discernment would require student input, along with a site to catalog the ratings.
“Why not create RateMyLectures.com, a website designed to sit right alongside iTunes University?” asks Pesce. “If Apple can’t or won’t rate their offerings, someone has to create the one-stop-shop for ratings.”
And the real possibility for transcending education as we currently know it?
“When broken down to its atomic components, the classroom is an agreement between an instructor and a set of students,” writes Pesce. “The instructor agrees to offer expertise and mentorship, while the students offer their attention and dedication.”
But schools as we know them – are they necessary?
“The question now becomes what role, if any, the educational institution plays in coordinating any of these components. Students can share their ratings online – why wouldn’t they also share their educational goals? Once they’ve pooled their goals, what keeps them from recruiting their own instructor, booking their own classroom, indeed, just doing it all themselves?”
Currently, students do not have “the same facilities or coordination tools.” Our structures mean that at this moment “the educational institution has an advantage over the singular student.”
In fact, that is what our current institutions offer as a strength: they exist “to coordinate the various functions of education.” But in the future, when we truly have an open school concept, we could well see a heretofore unheard-of paradigm shift.
“In this near future world, students are the administrators,” writes Pesce. “All of the administrative functions have been ‘pushed down’ into a substrate of software. Education has evolved into something like a marketplace, where instructors ‘bid’ to work with students.”
All About Control
When it comes to knowledge, the open-source, open-courseware movement is gaining ground. For Pesce, the rationale is clear and the benefits without limit.
Of technology and the internet, “The challenge of connectivity is nowhere near as daunting as the capabilities it delivers,” states Pesce. “Yet we know already that everyone will be looking to maintain control and stability, even as everything everywhere becomes progressively reshaped by all this connectivity.
“We need to let go, we need to trust ourselves enough to recognize that what we have now, though it worked for a while, is no longer fit for the times. If we can do that, we can make this transition seamless and pleasant.
“So we must embrace sharing and openness and connectivity; in these there’s the fluidity we need for the future.”
Some Thought-Provoking Work
We noted earlier that the recent Pesce posts, all of which are connected, represent the rarest of internet materials.
Like David Parry, we have read each piece at least twice. For those interested in reading further, we turn back to David for his suggested order.
December 21, 2008 No Comments
Using the Internet to “improperly” download copyrighted material continues to be a major issue for a number of industries. Of late, downloading has moved beyond the pleasure phase of securing one’s favorite song and into the world of college textbooks.
The issue for both industries is fundamentally the same. Distributing music or books over the Internet without permission is a violation of copyright law. At the same time, such action deprives each respective industry of significant revenues.
As one would expect, the behavior is drawing the same criticism from book publishers as it has from the recording industry. However, at this point there has been no legal push to go after individuals who have pirated copyright materials.
Textbook Prices Add to the Issue
Hiawatha Bray, reporting for the Boston Globe, recently discussed the textbook issue with Ed McCoyd, the director of digital policy at the Association of American Publishers. McCoyd indicated that textbook piracy has become particularly “seductive” because students often have extreme difficulty finding the cash to pay for academic books that often cost more than $100 per individual text.
Bray went on to quote a student who concurred that textbook pricing was one of the key factors that contributed to his willingness to download pirated materials. In addition, the same student noted that many of the listed textbooks were seldom used in class, a situation that made the purchase of expensive texts particularly troubling for students.
Illegal But Tough to Stop
Adding to the challenge for publishers is the inability to impose consequences on sites posting copyrighted materials. Federal law protects websites from potential copyright lawsuits as long as they respond to a removal request from someone holding a copyright.
Bray reported that a site called Scribd “gets at least one take-down request a day, including frequent ones from Harvard University Press and the Massachusetts Institute of Technology Press.” The site seeks to offer legal file sharing options for students but users frequently post copyrighted works.
Bray notes that with thousands of works posted daily, keeping up with the improper sharing is an ongoing challenge of epic proportions. In addition, sites like Scribd will not act on suspicion of pirated material. Action will be taken only when a publisher makes a complaint.
Bray went to the Scribd site to see what he might find. He noted numerous copyrighted works including the Gale Encyclopedia of Psychology, a $214 item. According to Bray, site data revealed more than 300 visitors had viewed the book (we could not locate the text on the site).
One Web Site Discontinued
It is likely the Bray article was the catalyst for Scribd taking the Gale book down. In addition, the recent publicity has seemingly led to the closure of a second site that blatantly sought to offer copyrighted materials to students.
Highlighted negatively in a Chronicle of Higher Education article a couple of months ago, the site Textbook Torrents has apparently been taken down by its web host. Right out of the gate, Textbook Torrents was promising more than 5,000 textbooks for download in PDF format.
The process was to include a free user-account with access to a free software program utilizing a peer-to-peer system called BitTorrent. Once the user had downloaded the software, access to texts was to be a snap.
As for its honoring copyright law, the site’s opening lines included:
“There are very few scanned textbooks in circulation, and that’s what we’re here to change. Chances are you have some textbooks sitting around, so pick up a scanner and start scanning it!”
According to the website UsedBooksBlog.com, the disappearance is in direct response to publishers taking issue with the site’s intent. The site’s owner, A.J. Kohn, noted that he had e-mailed DreamHost in regard to the matter. DreamHost responded with an acknowledgment that the site’s activity was not in keeping with the intent of copyright law.
“We received very long DMCA takedown notices from publishers of the content in question,” stated DreamHost. “The site was further closed down due to violations of our Terms of Service due to it’s illegal facilitation of the distribution of copyrighted content without the copyright owners consent.”
A visit to the Textbook Torrents site currently yields only an error message though archived links can be found when a search engine like Google is utilized.
Ongoing Battle Looms
Adding to the challenge for textbook companies is the anonymity and the world-wide basis of the Internet. Many sites are based in foreign countries that have little support for American copyright laws.
While it is doubtful any US web host would knowingly accept a site like Textbook Torrents, such a site could reemerge in yet another format in another country. That said, what is more likely to occur is individual sharing among students, something that will be much harder for the textbook industry to track.
At the same time, Kohn offers a list of sites offering textbooks online. Kohn’s list includes Scribd and the explanation that the site is the only one in the list that allows users to upload materials, an aspect that could lead to copyrighted materials being improperly uploaded.
Reduced Costs Would Lead to Reduced Pirating
Given the prior explanations, it would seem that enforcement needs to give way to methodologies that allow such textbooks to be purchased at more reasonable prices. Creating e-editions that forgo the traditional book-publishing process could be one method for bringing down such costs.
Producing materials at a more reasonable cost to students would go a long way towards reducing the “seduction” aspect noted by McCoyd, helping the industry keep a better lid on the improper downloading of copyrighted textbooks.
July 24, 2008 5 Comments
In our last post we expressed our strong support for a free and open web. To try to get to the heart of the Creative Commons movement, we were fortunate to be able to interview Ahrash N. Bissell, Ph.D., the Executive Director of ccLearn (the educational division within the Creative Commons).
Dr. Bissell previously served as the Assistant Director of the Academic Resource Center at Duke University after spending time as a Research Associate in the Department of Biology at the school. Dr. Bissell also worked on data-sharing issues for interdisciplinary research and served as a lead adviser for a program to bring science education opportunities to underprivileged middle-school kids.
Though Dr. Bissell began his work with the Creative Commons just last summer, he is already immersed in a variety of projects. As to ccLearn, Bissell notes that his branch of the commons organization seeks to “educate people about copyright” as well as “advocate for open education and the adoption of open educational resources.” As a means to accomplish those ends, Dr. Bissell indicates that ccLearn is also concerned with “improving the interoperability of present and future resources in a globally interconnected world.”
As is our usual custom, we present the interview in question and answer format.
What were your personal reasons for leaving Duke University to become involved with the Creative Commons organization?
There were several personal reasons why the position at CC was attractive to me. First, I believe that CC licenses, or something like them, are the way of the future, especially in fields where the economies are based on something other than money (e.g., in academics, the economy is based on citations and exposure more than the $$ made). Second, I have always been frustrated by the fact that the tools exist to make data and information sharing extremely easy, which is a necessary prerequisite for synthetic and interdisciplinary research, and yet we are a long way from fully taking advantage of these technologies. The problems are mostly legal and cultural, and organizations like CC are at the forefront of pushing for change. Third, most of my prior work centered on the premise that education is about more than the content. In other words, students don’t sign up for classes to get access to a textbook; rather, they sign up to get access to the environment, the expertise embodied by their teachers, and to get tangible credit for their learning accomplishments. Decades of good pedagogical research have shown that our current “memorize-and-regurgitate” system of formal education is broken and is failing to achieve our national educational goals, yet changing the system is nearly impossible as long as people insist that learning is nothing more than exposure to more information. CC-licensed education materials offer a possible way to break this logjam, since by definition they are free and open to anyone. I believe this should cause educators, administrators, and learners to more carefully consider what added value is obtained by sitting in a classroom. I think there is substantial added value to classrooms, in numerous dimensions, but only if the lessons, and indeed whole courses, are designed and taught thoughtfully, ever mindful of the fact that the education should be about more than the content.
So my position at CC affords me the opportunity to test out these thoughts and see where they take us.
Can you briefly summarize the Creative Commons philosophy?
For the Creative Commons as a whole, the philosophy essentially boils down to empowering creators to clearly and easily enable their works to be accessed and used by others in myriad, often unforeseen, ways. The CC licenses were inspired by the sense that the current system of copyright is essentially broken, or at least quite inappropriate for modern, digitized, web-based content. But more interesting is the fact that many, many people create things that they want others to share, adapt, and otherwise engage with beyond looking at them. The simple fact is that people really should not be putting their IP on the web unless they want people to share them, and CC makes this both easy and legal. CC licenses have since spawned all sorts of creativity around a platform of sharing: new business models, different ways of communicating, community-generated media, and so on. We cannot predict what will happen with these ideas in the future, nor do we want to be in the position of dictating best or worst examples of using CC licenses. For us, the highest purpose is to allow for creative expression to proceed unfettered by the arbitrary limits of the law when people do not actually desire those limits. There are many who believe that CC is somehow anti-creator (i.e., the licenses only benefit consumers of content), but that is incorrect. First of all, all creators are also consumers, for nothing is created out of thin air. And second of all, we have never said that we believe that all things should be CC licensed; there are some ways in which CC licenses work well, and other situations in which standard copyright is likely to be more appropriate. Fortunately, creators enjoy complete autonomy in deciding what to do.
Can you give readers some sense of what the organization is hoping to accomplish and the impact it is having on the key issues related to copyright laws as well as maintaining an open digital commons accessible to all?
Clearly, in order for this capacity for creativity to be fully realized, a fully open digital commons is crucial. We are fundamentally opposed to any form of top-down control over internet usage and rights. In cases where certain types of content might need to be blocked or controlled (e.g., adult sites when kids have access to machines), those decisions and blocks should be made locally by the affected consumers, not by the media companies, the government, or anyone else. One problem with the current copyright system is that it outlaws most of the social applications of the web, which basically makes criminals of anyone who goes online. Clearly, this is silly, which is why we often point out that CC is copyright, and moreover that CC licenses strengthen respect for copyright since they enable people to do the things they believe they should be able to do anyway, thus clarifying the distinctions between those rights and the lack thereof for all-rights-reserved works.
When people talk about the concept of the Creative Commons, the issue of net neutrality comes up a great deal. In your mind, where do these concepts differ and where do they overlap?
Well, they are definitely related, as I stated above, but not otherwise fully overlapping, though this is now stepping some distance from my expertise. To me, net neutrality is usually focused on ensuring equal treatment of both file types and contents. CC simply clarifies the conditions under which those file types and contents are made available. So, even in a non-net-neutral world, it is not clear how this would or would not affect CC-licensed materials. On the other hand, CC licenses, like the internet itself, threaten some of the businesses that make a living on distributing materials of limited quantities. These businesses still have a lot of money and tend to throw their weight around in DC (and elsewhere), so CC-licensed materials could be subject to attack if ISPs were in the position of essentially taking bids for the content that is or is not allowed into people’s homes. This scary thought has far greater ramifications than anything about CC per se.
In his book, Professor Lessig writes: “If the resource is rivalrous, then a system of control is needed to assure that the resource is not depleted—which means the system must assure the resource is both produced and not overused. If the resource is nonrivalrous, then a system of control is needed simply to assure the resource is created—a provisioning problem, as Professor Elinor Ostrom describes it. Once it is created, there is no danger that the resource will be depleted. By definition, a nonrivalrous resource cannot be used up.”
Would you take the position that the Internet (specifically, the fiber that connects the world) is rivalrous or nonrivalrous? Some argue that without regulations the Internet arteries will become more clogged than the New Jersey Turnpike during rush hour. In summary, could you address the argument that it would be better to regulate the Internet now before it gets overwhelmed?
This is definitely outside of my area of expertise, but I’ll take a stab anyway. The internet (the fiber) is a physical object, and as such it is rivalrous. However, it is more accurate to think of it, as you did, like a highway, which is understood to be a public good and is therefore built and maintained by public funds, subject not to the rules of economics (in the strictest business sense), but rather to the rules of meeting some basic benchmarks for functionality.
There is no question that there is a real danger that the internet will start to clog up as net traffic increases (some would say we’re already there), but I would not agree that regulation of different types of traffic is the answer. There are many clever ways to manage the flow of information so that people’s experiences with the internet are seamless, and of course we can always lay more lines. One of the nice things about the internet at the moment is the fact that everyone involved feels some responsibility for making it function as well as possible. I would worry that if we got the government involved, people would have less incentive to be creative about that aspect of the internet’s development and we would see the pace of technological innovation slow down.
Given the pressure from the telecoms, et al., to begin regulating the web, could you describe some potential scenarios of a regulated web? What specific applications might general users lose as a result of regulation? What options might those who seek to create new business opportunities through the web lose?
I can only speculate in the most general of terms, though I have already experienced some frustrations in this regard with Comcast, which has been accused of violating net neutrality. Telecom regulation is likely to result in greatly reduced capacity to view and share video and other bandwidth-hogging file types (as I just mentioned, I have experienced this myself). With restrictions on such content, some sort of bidding or backroom dealing will automatically emerge so that certain sites can avoid the restrictions. Clearly, this cannot be a neutral process, thus undermining some of the most fundamental and democratic aspects of the web. One set of regulations will spawn others, as the government will probably have to force certain network providers to provide unfettered access at certain times, but that is fraught with uncertainties, and so on. And once the regulations kick in for high-bandwidth files, the lower-bandwidth files will almost certainly follow. I would expect the regulatory mindset to quickly assert control over content as well, so that content from certain sites, countries, cultures, languages, etc., will be banned according to the whims of the day. Some of this obviously happens already in places like China, and we have always railed against those policies as profoundly undemocratic. I would hate to see us go the same route.
Can you give us a concrete example of what has happened in China?
One example is how the same Google search performed in China and in the US returns radically different results, even accounting for language and other regional differences. An obvious example is a search for “Tiananmen Square”: a search here shows images of the protests and the Chinese tanks that crushed them, whereas the same search in China reveals historical documents about the square and tourist images. Clearly, the Chinese government is not interested in net neutrality in terms of content. I doubt they are the only ones who engage in this practice, and I also doubt that such interventions are limited to content alone. It’s a slippery slope once you start imposing these top-down regulations on the types of information (format or content) that can flow through the web.
Could you specify some of the greatest developments to date made possible by a neutral web and flexible copyrights, and can you suggest some applications not yet in place that could come about if the net remains as it is right now?
I definitely do not have a good enough sense of the overall development of the web to give you a decent list. But sites like YouTube, Wikipedia, and most of the social networking (Web 2.0) sites are phenomena that were pipe dreams only a very short time ago. All of these sites depend on an unrestricted flow of data and ideas. Somewhat related, yet distinct, are the business and informational models that have emerged from unfettered access to copyright-free data: weather-satellite info, DNA sequences, and so on. And while I am skeptical of the potential of virtual environments (e.g., Second Life) to supplant real-world experiences, I believe there is a bold future in virtual environments for training through exposure to “forbidden” or otherwise impossible experiences. Here again, the capacity for these innovations to be truly transformative depends on a neutral web and flexible copyrights.
Would your list be different if we set aside the concept of more flexible copyrights and focused only on a neutral web?
A neutral web seems to me to be crucial for the ongoing generation of novel technologies that leverage the capacities of the web. For example, better, faster web-based platforms for content creation (docs, videos, songs, etc.) will probably become the rule rather than the exception, freeing us from our desktops. Truly mobile technologies, using mesh networks and always-on accessibility, will transform the ways in which we communicate and the types of information we presume to always have access to. Novel forms of dissent would be powerful areas of development, but almost certainly impossible if the web is not neutral. There is a lot of activity around the development of products and practices apart from copyright, and most of these activities would suffer in some way if the web is not neutral.
Text edited for clarity.
February 22, 2008
We have heard many express concerns over the future of the Internet. One group postulates that an unattended digital commons is destined for the same troubles facing our over-fished oceans and our clogged highways. Others insist that without regulations the behemoths of the industry like Google and Microsoft will simply take control of the World Wide Web, perhaps creating bottlenecks and other insidious or onerous forms of control.
On the other hand, we find we very much like what we see today. We are enthralled by the creativity demonstrated daily on a site like YouTube and enjoy the incredible breadth of opinion displayed by a new generation of writers called bloggers. We like the fact that no one controls what content is available to us, and we love hearing the rags-to-riches stories of yet another successful entrepreneur who used an Internet connection, a computer and a garage to create a worldwide business.
Are we simply in the Internet golden age? Is our current unfettered optimism about the net similar to the feelings of those who came to America to settle a new world? Most importantly, when we sit down with pleasure at the computer today, are we doing so with blinders on as to what is to come?
The Tragedy of the Commons
In his famed 1968 piece, “The Tragedy of the Commons,” Garrett Hardin addressed a class of issues he called “no technical solution problems.” For Hardin, these were problems that could not be solved with technological advances alone but would require moral clarification as well, population control being his chief example.
Hardin also explicitly discussed the noble goal of creating “the greatest good for the greatest number.” However, Hardin acknowledged the obvious: what one person considers the optimum might be “nothing but wilderness,” while for another the optimum would “constitute ski lodges for thousands.”
Those two discussion points formed a critical aspect of what Hardin called the “Tragedy of the Commons.” To remedy the issue, Hardin proposed “mutual coercion, mutually agreed upon,” i.e., a set of agreed-upon regulations with consensus as to how to properly enforce them.
The Town Commons
Hardin explains his “Tragedy of the Commons” in the following way: “Picture a pasture open to all. It is to be expected that each herdsman will try to keep as many cattle as possible on the commons.
“As a rational being, each herdsman seeks to maximize his gain. The rational herdsman concludes that the only sensible course for him to pursue is to add another animal to his herd. And another….
“But this is the conclusion reached by each and every rational herdsman sharing a commons. Therein is the tragedy.
“Each man is locked into a system that compels him to increase his herd without limit — in a world that is limited. Ruin is the destination toward which all men rush, each pursuing his own best interest in a society that believes in the freedom of the commons.
“The individual benefits as an individual from his ability to deny the truth even though society as a whole, of which he is a part, suffers. Education can counteract the natural tendency to do the wrong thing, but the inexorable succession of generations requires that the basis for this knowledge be constantly refreshed.”
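Hardin’s logic can be made concrete with a bit of arithmetic: each added animal returns its full gain to its owner alone, while the cost of overgrazing is divided among all the herdsmen. A minimal sketch in Python, using hypothetical numbers, shows why the rational individual always adds the animal even when the commons as a whole loses:

```python
def marginal_payoffs(num_herdsmen, gain=1.0, cost=1.2):
    """Marginal payoff of adding one more animal to an already-full pasture.
    The owner pockets the entire gain; the overgrazing cost is shared by all."""
    individual = gain - cost / num_herdsmen  # what the rational herdsman weighs
    collective = gain - cost                 # what the commons actually nets
    return individual, collective

ind, coll = marginal_payoffs(num_herdsmen=10)
print(ind, coll)  # the individual sees a profit even as the commons loses
```

With ten herdsmen, the individual nets roughly +0.88 per added animal while the commons nets roughly −0.2, so each “rational” herdsman keeps adding. That is precisely the lock-in Hardin describes.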
In 1968 Hardin articulated the following example, one that we still face today. Our “National Parks present another instance of the working out of the tragedy of the commons. At present, they are open to all, without limit,” but “the parks themselves are limited in extent whereas population seems to grow without limit. The values that visitors seek in the parks are steadily eroded. Plainly, we must soon cease to treat the parks as commons or they will be of no value to anyone.”
The Digital Commons
Does the digital commons mirror this physical world? Many seem to think so.
Daniel McFadden, the winner of the Nobel Prize in Economics in 2000, writes in “The Tragedy of the Commons” that “the commons that is likely to have the greatest impact on our lives in the new century is the digital commons.” And for that new commons, according to McFadden, we now face the same issues with digital information that the early settlers faced with the town commons and our national parks currently face from too many visitors.
McFadden notes that “information is costly to generate and organize, but its value to individual consumers is too dispersed and small to establish an effective market.” Furthermore, “the information that is provided is inadequately catalogued and organized” meaning the Internet “tends to fill with low-value information.”
He concludes by providing four models as to how the digital commons might operate in the future so as to avoid a tragedy similar to that of the town commons. McFadden further insists that the “management of the digital commons is perhaps the most critical issue of market design that our society faces.”
But all of his suggestions leave us feeling hollow. Are we destined to have such poor options as a pay-to-connect ISP that controls content much like a newspaper or magazine of today? Can we not do better than an array of services that mirror the channel structure of cable television? And can’t we do better than a pay-as-you-go system, however small the fees, and instead give everyone access to the great equalizer, knowledge?
McFadden does acknowledge that “one of the enchanting features of the Internet over the past decade has been unabashed, free-wheeling innovation.” But he seems convinced that the digital commons is on a path similar to that of the town commons depicted by Hardin.
To use Hardin’s terminology, McFadden seems to believe that the problems facing the digital commons could in fact be an issue without a technical solution. Furthermore, McFadden seems to see the digital commons as mirroring Hardin’s difficult discussion of the greatest good for the greatest number. In the end, McFadden sees mutual coercion, mutually agreed upon, as a necessary step for the digital commons.
He does give some hope with the following: “The solutions that resolve the problem of the digital commons are likely to be ingenious ways to collect money from consumers with little noticeable pain, and these should facilitate the operation of the Internet as a market for goods and services. Just don’t expect it to be free.”
Maintaining a Free Digital Commons
In direct contrast, the Committee for Economic Development’s report, Open Standards, Open Source, and Open Innovation: Harnessing the Benefits of Openness, touts the success of the “commons” approach. The report notes the “benefits of openness” and insists that continued openness is critical for further growth. Perhaps most importantly, the report challenges the thinking of those who view the digital world in the same manner as the physical world.
Certainly consumers have to be pleased with the current digital commons. Today, when we sign on to the Internet we are able to access any information we want at the fastest available speed. We are also able to use essentially any service we want, whenever we want to access it.
This principle is dubbed Net Neutrality, and it forms the underlying basis of a free and open Internet. The concept of Net Neutrality is deemed by many the epitome of democracy because it is so consistent with anti-discrimination laws. Internet providers may not speed up the net for one class of citizens nor slow it for another. Content cannot be discriminated against based on who owns it, who sends it or who receives it.
Most importantly, under the current structure, it is the consumer who is in complete control. It is the consumer that decides what content they are interested in and what applications they wish to use. Because of the free and open Internet, it is the consumer that decides the merit of a web site or a service, not some corporation.
At OpenEducation.net, we believe the current openness of the Internet is precisely why consumers find an explosion of applications and content. While some fear the clogging of the physical Internet arteries, the continued development of the Internet appears to point to the free digital commons as providing greater good for more people.
Larry Lessig and the Creative Commons
The work of Larry Lessig, author of “Free Culture” and founder of the Creative Commons, seeks not only to continue the push for such openness but to break down the very barriers that keep the current innovation commons from growing even further. In particular, Lessig has begun a push to rethink copyright laws as they exist today.
Lessig notes, “Free content is crucial to building and supporting new content. The free content among the ‘wired’ is just a particular example of a more general point. Commons may be rare. They may evoke tragedies. But commons also produce something of value. They are a resource for decentralized innovation. They create the opportunity for individuals to draw upon resources without connections, permission, or access granted by others.”
Lessig insists the current controversy surrounding copyright is not about artistic freedom and protection. It is instead about control. Lessig wants to move to a world where content authors have the ability to choose how their work is to be used. Detractors insist the current copyright law prevents piracy of an individual’s work.
The Tragedy of the Digital Commons
For Lessig, this viewpoint is entirely contradictory to the views of the telecoms, McFadden, and the legal teams representing the corporate music giants. For Lessig, the true tragedy of the digital commons would be any move to stifle or to legislate.
In “The Future of Ideas,” Lessig refers to the creation of the web thus: “If the Web was to be a universal resource, it had to be able to grow in an unlimited way. Technically, if there was any centralized point of control, it would rapidly become a bottleneck that restricted the Web’s growth, and the Web would never scale up. Its being “out of control” was very important.”
Of the Web developers, Lessig states: “They were extremely talented; no one was more expert. But with talent comes humility. And the original network architects knew more than anything that they didn’t know what this network would be used for.”
In addition, the last thing Lessig wants to hear is the notion of legislating simply because some are uncertain where the future of the Internet will take us. The Stanford professor insists on just the opposite.
“In particular, when the future is uncertain—or more precisely, when future uses of a technology cannot be predicted—then leaving the technology uncontrolled is a better way of helping it find the right sort of innovation. Plasticity—the ability of a system to evolve easily in a number of ways—is optimal in a world of uncertainty.”
For Lessig there is no doubt that the open digital commons is the right way to proceed. “This strategy is an attitude. It says to the world, I don’t know what functions this system, or network, will perform. It is based in the idea of uncertainty. When we don’t know which way a system will develop, we build the system to allow the broadest range of development. This was a key motivation of the original Internet architects.”
We agree with Lessig’s optimism and see the digital commons as an intellectual commons, not a physical one. Keeping access open means that all of the great minds, those deemed great by society as well as those without the credentials, can tackle these issues in an intellectual manner.
And as for the potential tragedy that others insist is awaiting the out-of-control Internet, Lessig says simply:
“There is a tragedy of the commons that we will identify here; it is the tragedy of losing the innovation commons that the Internet is, through the changes that are being rendered on it.”
We could not agree more.
Next up we take the time to interview Dr. Ahrash Bissell of the Creative Commons.
February 21, 2008
In a recent video, Brewster Kahle highlighted the importance of open and redundant access to information.
Project Gutenberg makes over 20,000 full books accessible online. Additional public domain books are available online via sites like Google Book Search, Creative Commons, and the Open Content Alliance.
Many published books created after 1922 are still under copyright and are inaccessible other than through samples provided by services such as Amazon’s Search Inside the Book and Google’s Book Search. Many more books will soon come online in an ad-supported format. John Wiley & Sons recently made a vast array of content from Frommer’s, For Dummies, and CliffsNotes available online in an ad-supported format. The WSJ reported that the sites make about $5 million each.
Making content open and accessible is one of the quickest ways to gain relevancy and steal market share from older players, many of whom will start moving their catalogs online in an ad-supported format in an attempt to stay relevant. As more content becomes freely available online, the value of creating more content drops unless it garners significant attention or is associated with a trusted brand.
July 11, 2007
Cory Doctorow recently spoke at Google about the corrosive effects of IP protection, international trade agreements, and copyright laws that treat society as a group of criminals until proven otherwise. He also discussed how lawsuits against consumers do not provide any income to the artists the lawsuits allegedly help to defend.
June 1, 2007