Sometimes I feel like I should just quote all of Wealth of Networks, beginning to end (which, thanks to the open license, I could), but here’s one bit in particular that resonates:
The emerging businesses of the networked information economy are focusing on serving the demand of active users for platforms and tools that are much more loosely designed, late-binding–that is, optimized only at the moment of use and not in advance–variable in their uses, and oriented toward providing users with new and flexible platforms for relationships.
I suppose you could find support for any view on OER in this, but let me give you my take: Educators adept in operating within the emerging information environment on the web are not likely to want to turn existing OER into finished products, in part because teaching doesn’t lend itself well to rigid advance preparation (a humanities view I’m sure, but more on this in a minute) and because students in the new information environment aren’t going to want a finished, polished, circumscribed product either.
From the educator perspective, teaching is often a process that is “loosely designed” and “late-binding.” Within many domains, there are gross predictions about student needs that will hold up–what language they’ll learn best in, whether (especially in formal circumstances) they have taken prerequisites. But even at that level, fluency and retention of prerequisite material are highly variable, and educators need late-binding materials to account for these variabilities. This suggests that advance localization beyond the highest levels may miss the mark. This is probably less true in domains such as math, physics and music, which largely employ universal notation and are less ambiguous in their meaning structures.
Even in those domains, though, students used to accessing just-in-time information on the web are likely to feel too constrained by overly produced materials. Rather than presenting the completed picture, educators may find more success in connecting the dots as best they can while still pointing to the source of the dots in their original context. This allows learners to follow those connections to the other situations in which the content was used, and to seek out the approaches that make the most sense to them.
A rough stretch for both me and the blog. I’ve been catching up on a few things, and an upgrade at my hosting company took down WordPress for a while, but things are more or less back in order now.
Here’s the latest on CC license use distribution. As with previous data they’ve released, NC licenses still account for a majority of those out there. There seems to have been a shift of five or six percent in the last year and a half or so, though this could be measurement error. More impressively, they’ve jumped from 45 million linkbacks to 140 million linkbacks (again, possibly measurement error). Great to see the pool of openly licensed materials expanding regardless of flavor.
I doubt there will be a more eloquent expression of the economic and information-related aspects of producer culture than the second chapter (PDF) of Yochai Benkler’s Wealth of Networks. If you want to read how someone who really knows what they are talking about says it, read this. The whole book, by the way, is available on his site under a CC BY-NC-SA 2.5 license.
It’s interesting to read Bhagwati and Benkler back to back. Bhagwati argues that the market forces behind globalization are generally benign, with a few unsavory side effects. He points out that NGOs are a very effective means of controlling these side effects, more effective in most cases than trade sanctions. In doing so, NGOs participate in interesting ways in the non-market production of information goods, the rise of which Benkler traces.
I’ll keep you posted as to whether I come up with anything truly profound in following this angle, but in the meantime, it seems to suggest that open educational resources might be part of a rising tide of non-market information practices that, like the NGOs Bhagwati describes, help control some of the ill effects of globalization, as I have described before.
One of the most rewarding aspects of my job is the daily contact with people from all over the world who are passionate about doing good things. I can count on probably two fingers the number of people I’ve met in the States and abroad through the project who aren’t just absolutely hell-bent on positive change. Amid the chaos of violence and hatred that seems to dominate the news, this passion continually refreshes my hope that the world may yet be a better place for my children. Yes, there are debates about the best way to effect change, but these debates all occur within a shared framework of passion for a better world, and I see them all as steps in the right direction.
Which is a long-winded way of saying I appreciate Joseph Wang’s passion for open sharing, but I’m frankly not sure I can keep up with him and keep my day job! See his posts here, here, here and here. Anyway, I’m happy to keep up a dialogue, but unless we can bring a little more focus to it, I’m not sure I can respond effectively. I’d like to hear carefully reasoned support for the repeated assertion that the NC clause kills mass collaboration, as I’m still not convinced on that point. Why, for instance, would Wikipedia have been less successful had it included an NC clause? Are there comparisons of NC vs. non-NC projects out there that demonstrate this effect?
Here’s the one piece of evidence I’ve heard on the subject, from David Wiley writing on the IIEP forum list:
It should be intuitively obvious that the more restriction-options included in a license, the fewer freedoms the license presents the end user. What may not be immediately obvious, however, is that license selection behavior by users increases as the number of license restrictions increases. In other words, almost no people want to reserve no rights, and almost all people want to reserve all rights. This hypothesis is empirically verifiable and, at a high level, the behavior is very stable:
So, providing CC-By as a default license does create the greatest freedom for users downstream, but very few people are apt to adopt this license. Providing By-NC-SA provides slightly fewer freedoms to users downstream, but many more people are apt to adopt this license. In a recent survey of Flickr discussed in the paper above, there were 1,212,885 photos licensed By-NC-SA and only 338,543 licensed By. A correlation of photos licensed with all licenses between October 2004 and August 2005 shows that selection behavior is *highly* stable (r = 0.997) even though the total number of photos licensed grew from 81,090 to 3,466,052 during this period.
The result? If we’re going to choose one license and force participants to use it…we appear to have a choice between a very few, very free contributions and a larger number of less free contributions.
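To unpack what “highly stable (r = 0.997)” means in Wiley’s numbers: the correlation compares the counts of photos under each license type at two snapshots in time, and a value near 1 means the shape of the license-selection distribution barely moved even as the total pool exploded. A minimal Python sketch, using made-up per-license counts (not Flickr’s actual per-license figures, which the quote doesn’t break out in full):

```python
# Pearson correlation between per-license photo counts at two snapshots.
# The counts below are hypothetical illustrations, not real Flickr data.

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length count lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical counts per license (BY, BY-SA, BY-NC, BY-NC-SA, BY-ND)
# at two dates; the second snapshot is much larger but similarly shaped.
oct_2004 = [8_000, 12_000, 20_000, 30_000, 11_000]
aug_2005 = [340_000, 510_000, 860_000, 1_210_000, 470_000]

r = pearson_r(oct_2004, aug_2005)
print(round(r, 3))  # close to 1.0: the distribution's shape held steady
```

The point of the exercise: a high r here says nothing about growth in absolute numbers, only that the relative popularity of each license stayed put while the pool grew some forty-fold.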
Passion is an essential ingredient for open sharing, yes, and I agree with a great many of the points made by Joseph. But I do think there’s a need for careful reasoning and examination of evidence in all of this, as I’m just not seeing the damage that’s being described. If anything, the above suggests that Wikipedia would be even more successful with an NC clause. In the end, this may simply be a question of whether you value contributors’ rights over the rights of others to use the contribution. I’m perfectly willing to accept that this may be a matter of perspective.
A weekend for posts, I guess. I’ve just finished In Defense of Globalization, and can now feel free to turn to Wealth of Networks, but before I do, one last quote on the dangers of interdependence from this otherwise strong advocate of globalization:
…a “selfish hegemon” such as the United States, reflecting its own lobbies’ agendas, pushed for a common, coordinated policy of excessive intellectual property protection at the WTO. In short, a socially harmful policy may be imposed, under the pretext of coordination…, by powerful nations in an interdependent world. It is useful to remember that interdependence is a normatively attractive, soothing word, but when nations are unequal, it also leads to dependence and hence to the possibilities of perverse policy interventions and aggressively imposed coordination of policies with outcomes that harm the social good and the welfare of the dependent nations while advancing the interests of the powerful nations.
A rare weekend post to note that tOFP turned one year old yesterday! The site welcomed approximately 20,000 visits last year. Good stuff.
To return to the first aspect described from my notebook page, one reason the remix view gets so much traction is that it resonates conceptually with two very powerful emerging models for content creation: Apple’s rip-mix-burn philosophy and wikis–especially the oft-cited model of Wikipedia. I’m a big fan of both, generally, and see them as keystones of the emerging producer culture. In application to OER, though, I do think these conceptual models present complications, both because they are seductively powerful and because (as with any application of one field’s conceptual model to another) they are somewhat imperfect. The Lego model for learning objects is another of these less-than-perfect conceptual models.
I’ve written before about the problems in applying the rip-mix-burn metaphor to the creation of learning materials, but I suppose I should do so in a less impassioned and more reasoned way. Of course it is possible to use sampling and remix to create a new artistic whole, and possible to provide really effective critique and commentary while doing so. But course materials typically operate more like an expository essay, where support is woven into an extended logical argument. The process behind this involves careful, systematic thinking through the argument, and the insertion and critique of evidence at appropriate places.
Educators (at least most of the ones I know, who are almost all in humanities fields) must think through their materials at a very detailed level, and almost always on paper. This virtually necessitates rewriting, in one’s own words, the arguments and evidence used by others, as a process for understanding the materials. This is not to say that there’s no room for rip-mix-burn in the process of creating educational materials. Of course there are things that can be usefully picked up and dropped into a new set of course materials. It’s just not the beginning and end of the process.
Wikis also provide a seductive and imperfect model for the creation of course materials. Especially with the success of Wikipedia, there is tremendous interest in the creation of collaboratively developed course materials, and I don’t doubt that there will be such materials created. Careful thought about the differences between Wikipedia and course materials, though, illustrates the limits of this model in its application to OER. Wikipedia is a tool for collaboratively coming to consensus about meaning. The project rests on the tension between competing views of topics, and that tension drives contributors to advocate for their view of the “real truth.” Pages either reach equilibrium as competing factions settle their differences, or pages get locked down while meaning is debated.
The key here is that the Wikipedia model is great for achieving consensus, but less useful for structuring and supporting a multiplicity of divergent views in a coherent fashion. Education depends importantly on the ability to support iconoclastic thinking, and what is iconoclastic in one era becomes canonical in the next. The Wikipedia model risks shutting the most innovative voices out of the creation of OER.
Another more practical reason is that, as Eric von Hippel has described in Democratizing Innovation, the wider an audience a product is designed for, the less satisfied–on average–any one person in the audience will be. Educational materials are designed to support very specific interactions of educators, students and curricular structures, each of which impose their own idiosyncratic needs on the materials. Coming to broad consensus around educational materials is likely to dilute their effectiveness in any one specific set of circumstances.
So, what else?
These two models, imperfect as they are, obviously provide insight into some important ways that OER use will develop. Nearly half of the educators coming to the MIT site reuse materials, and two thirds of them combine those materials with materials from other sources–a critically important use. I’m less familiar with good examples of collaborative courseware development, but I’m sure they’re out there. But these two conceptual models, which tie OER use to emerging web technologies and practices, capture people’s imaginations far more than does the rather pedestrian idea of “reference,” which by comparison sounds like a dotty old librarian blowing dust off a book.
So what I’d like to do is propose a third forward-looking model, also imperfect, that provides a way of thinking about OER reference use in the context of emerging web tools and practices: OER as the “educational blogosphere.” This is not a model proposed to replace either rip-mix-burn or wikis as ways of thinking about OER any more than blogs themselves replace wikis or remix tools in the Web 2.0 world. The idea rather is to provide an additional way of thinking about OER that can help people envision reference as the valuable practice it appears to be from what I’ve seen.
The educational blogosphere
The idea is that, in some circumstances, it makes more sense for an educator to point at a resource (or provide a paper version of a resource without modification) than to grab it and digitally or physically edit it. For all the traction the idea of localization gets, it seems pretty obvious that if a resource is close enough to an educator’s needs, she doesn’t actually have to change it to use it; she just provides students with a little on-the-fly context and off they go. This is analogous to the practices that have grown up around the use of blogs. I don’t edit the cool thing out there on the web I’ve found, I just point to it in a post that also provides my take or the reason why it’s relevant to what else I have to say.
Instead of depending on a few highly editable resources (as does the rip-mix-burn model), this relies on a wide range of resources that don’t necessarily need to be completely flexible from an editing perspective. If every school in the world is publishing its materials in a really simple format, chances are I’ll be able to find a near approximation to my needs that doesn’t require too much contextualization. Not only that, but in pointing to the resource in situ, I am allowing the learner to see the argument for which the resource was originally created, plus potentially (extending the model to include trackbacks) links to a whole range of ways in which the resource has been used. In other words, a model that supports a multiplicity of divergent views.
So I’ll shut up now
I’ll re-make the point for emphasis that I don’t think the rip-mix-burn metaphor or the wiki metaphor is wrong, or any less perfect than the blogosphere model, for that matter. But I do think that all three have equal validity, and that each can lead projects to very different (and successful) decisions about a range of issues such as licensing and publication format. But those are topics for about a hundred other posts.