For Halloween this year, my five-year-old daughter Olivia chose to be Wilbur the pig from her favorite book right now, Charlotte’s Web. We’ve discovered that they simply do not make pig costumes for five-year-old girls, so Lori pulled together pink leotards and tights and outbid another desperate parent for the only two sets of pig ears and tails on eBay (unfortunately winning both, but war is war), and I fashioned a pig nose out of a toilet paper roll and the elastic from a party hat. Still, we’re thrilled her costume is from literature rather than a cartoon.
Prior to reading the book to Olivia, I’d only ever seen the cartoon version, and it’s been a real joy getting to read it with her. E.B. White has a tremendous ear and it’s a book that was simply made to read aloud. A sample:
The barn was large. It was very old. It smelled of hay and it smelled of manure. It smelled of the perspiration of tired horses and the wonderful sweet breath of patient cows. It often had a peaceful sort of smell–as though nothing bad could happen ever again in the world. It smelled of grain and of harness dressing and of axle grease and of rubber boots and of new rope. And whenever the cat was given a fish head to eat, the barn would smell of fish. But mostly it smelled of hay, for there was always hay in the great loft up overhead. And there was always hay being pitched down to the cows and the horses and the sheep.
The barn was pleasantly warm in winter when the animals spent most of their time indoors, and it was pleasantly cool in the summer when the big doors stood wide open to the breeze. The barn had stalls on the main floor for the work horses, tie-ups on the main floor for the cows, a sheepfold down below for the sheep, a pigpen down below for Wilbur, and it was full of all sorts of things that you find in barns: ladders, grindstones, pitch forks, monkey wrenches, scythes, lawn mowers, snow shovels, ax handles, milk pails, water buckets, empty grain sacks, and rusty rat traps. It was the kind of barn that swallows like to build their nests in. It was the kind of barn that children like to play in. And the whole thing was owned by Fern’s uncle, Mr. Homer L. Zuckerman.
It’s a book I enjoy reading as often as Olivia enjoys hearing it, and it’s far better than the stilted prose of most children’s fiction.
So here’s the end product of a process that has inspired quite a number of posts. A while back, we undertook a really exhaustive review of possible publication formats for MIT OCW. We’ve received some criticism for our use of PDF as our basic publication format. Especially among educational technologists, PDF is viewed as relatively inflexible compared to XML-based approaches. And it is. But given our constraints (publishing existing materials authored in a variety of formats, and publishing all of MIT’s courses), it quickly becomes the logical choice.
The short version of why: with PDF, our level of effort is basically linear in the number of documents we publish. The conversion from other formats is reliable, requires little QA, and is a single step for the entire document. There are no reliable tools that convert to XML and also convert formulas (and most of our materials have formulas), so if you accept that all formulas will be images only, then using an XML converter means our level of effort becomes linear in the number of pages, since each page still requires proofing and formatting. If you want the formulas in XML as well (which is the really cool and useful part of XML), then the level of effort is linear in the number of formulas, because you have to hand-code each one.
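To make the scaling argument concrete, here is a toy back-of-the-envelope model. All the coefficients and course counts below are invented for illustration; they are not actual MIT OCW figures.

```python
# Toy model of publication effort under each format choice.
# Hours-per-unit coefficients are made-up illustrative numbers.

def pdf_effort(num_docs, hours_per_doc=0.5):
    # PDF conversion is one reliable step per document.
    return num_docs * hours_per_doc

def xml_images_effort(num_pages, hours_per_page=0.25):
    # With formulas left as images, each converted page
    # still needs proofing and formatting.
    return num_pages * hours_per_page

def xml_formulas_effort(num_formulas, hours_per_formula=0.5):
    # With formulas in XML, every formula is hand-coded.
    return num_formulas * hours_per_formula

# A hypothetical course: 40 documents, 400 pages, 1,000 formulas.
print(pdf_effort(40))             # 20.0 hours
print(xml_images_effort(400))     # 100.0 hours
print(xml_formulas_effort(1000))  # 500.0 hours
```

Whatever the real coefficients, the point is the variable that effort scales with: documents, pages, or formulas, in increasing order of pain.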
Of course, having materials authored in XML to begin with would solve the problem, but it’ll be a while until that happens at MIT. Anyway, here’s the report, in MS Word (incidentally the format most preferred by MIT OCW visitors). Please do let me know if you have comments.
Here’s an event not to be missed: Open University UK launched OpenLearn today. Fantastic content and really great tools to support independent learning.
The IIEP forum is back in action, this time with a discussion of the relationship between OER and FLOSS. This follows on a discussion over on the FLOSS forum about how lessons learned in the FLOSS world might be useful in the OER context. Claude Martin provides this summary of lessons learned on the FLOSS side:
During the recent FOSS Community discussion we considered what lessons the OER movement might be able to take from the FOSS movement. We would like to share the ideas that were discussed in the FOSS Community. The lessons learned are grouped into the following categories:
1) OER and FOSS are complementary
2) OER development can mirror and take advantage of the FOSS collaborative model
3) FOSS can promote creation of OER content in developing countries
4) OER developers should commit to open licenses
5) Managing OER content design and editing is easier than FOSS programming
6) More inclusive formats for document exchange should be used
7) FOSS can support better searching of OER
8) FOSS can ease concerns over perceived technical demands of OER development
9) There are differences between OER content and FOSS software
There were a number of follow-on comments discussing differences between FLOSS and OER (quality control was raised early), so I wanted to throw in my two cents (OK, a good nickel’s worth at least, but I am wordy by nature):
One basic difference [between FLOSS and OER] I think might be helpful to point out early on is an observation Yochai Benkler made about MIT OCW in The Wealth of Networks:
“As an intervention in the ecology of free knowledge and information and an act of leadership among universities, the MIT initiative was…a major event. As a model for organizational innovation in the domain of information production generally and the creation of educational resources in particular, it was less significant.”
In OER more significantly than in FLOSS, the production and distribution aspects of open sharing can be disaggregated. As Benkler correctly points out, MIT OCW represents the grafting of an open distribution mode onto a traditional production mode. In a FLOSS project, production and distribution are typically tightly intertwined: the open distribution is what supports iterations (and thus production) by a wide community. There are certainly great examples of this happening in OER as well–Connexions comes to mind–but open sharing and open production need not necessarily occur together in OER (nor, of course, in FLOSS–the IBM patent releases are one example). But because there is less economic incentive for faculty to retain copyright of educational materials than there is for traditional software producers to control ownership of their products, there is the possibility that open sharing of traditionally produced content might become the norm in academic practice, rather than the exception, as IBM’s case appears to be. The looser control of IP in OER as opposed to software (i.e., in the US ideas can’t be copyrighted, only expressions–more or less) also allows for looser connections between production and distribution. In the content realm, I can borrow from traditionally produced and fully copyright-protected works at the idea level, so long as I don’t borrow the expression.
As we discuss OER in terms of the FLOSS experience, I’d suggest it’s worth keeping this difference in mind. Educational content will likely always be produced in a wide range of ways, including the traditional single-faculty paper or digital document approach; collaborative approaches involving multiple faculty or faculty and instructional designers; collaborations between loose-knit groups of learners; and many others. The FLOSS experience certainly points to new and exciting production modes, and I think we all hope many will bear fruit. But it may require somewhat nuanced approaches to understanding production-related issues. In principle I think we all agree that the most flexible possible formats ought to be used to reduce the time required for repurposing content, but asking faculty at large to change the way they produce their content is asking them to assume a whole new level of production effort. In the US we are fortunate to have skilled and talented educators throughout our higher education system. At big research universities, state schools, small private colleges, and community colleges there are many, many talented faculty creating content appropriate to very different contexts. One question to consider is whether we want only those willing to invest the extra time and effort to learn to produce in new formats (or to collaborate with those who understand the formats) to share their content. Or do we, at this stage in the development of OER, just want to encourage as many people as possible to share as much educational content as possible, regardless of format?
I don’t want to get too deep into the format question, as I’m using it as an example of a complication raised by the looser connection between production and distribution in OER. Benkler’s observation is one that impacts OER on many other levels, so it’s one I believe deserves some thought as we begin the discussion.
For as long as there’s been an MIT OCW, we’ve been talking about the “unified process” of educational content creation and management. In its latest iteration, “educational content lifecycle,” it’s being discussed as the start-to-finish process through which educational materials are created (mostly by idiosyncratic faculty), used for instruction (often in an LMS), integrated with other materials (via library electronic holdings), ported to an open publication (that’s us), archived (DSpace, anyone?), and (hopefully) reused for the next generation of educational content.
There are clear reasons why OCW would be interested in conceptualizing and examining this lifecycle–if we can convince faculty to make OCW-friendly decisions upstream, then our publication process gets simpler. The developers of the Institute’s LMS (Stellar) have an interest in understanding how to drive adoption of the tool. Faculty (may) want to make content creation as easy as possible through reuse of existing content. But for the most part, the discussions of the ECL (god help me, another acronym) revolve around the notion of cost savings, with little mention of to what end.
The reason that gets no mention at all, but I think needs careful consideration, is that much of the process is designed to shield the institution from IP risk. In large part, LMS use for classroom-based courses is about having a safe space to provide IP-restricted course materials online. Authentication systems providing access to electronic library reserves are another layer of IP control in this chain. OCW publication processes are largely a game of weeding out IP-protected materials. In fact, when viewed as an end-to-end process, the ECL seems as much about IP as anything else. Maybe that’s just my perspective.
So, if part of the game is cost savings, then one question to ask is whether these technology tools are the appropriate way to manage IP risk–especially if (as I’ve heard rumored) they aren’t really able to do the job. Might there not be cultural or legal strategies that would be more cost-effective? Not sure I have anything to propose, but I can’t shake this line of thinking…
Once again, the folks at USU’s COSL group put on a fantastic conference last week, one that I will be digesting for some time. Mia Garlick, Creative Commons’ General Counsel, led things off with a great talk about the impact (or non-impact) of the non-commercial clause on open educational resources, making a number of interesting points. She pointed out that the NC clause really wasn’t the sticking point between CC-licensed materials and Wikipedia, one of the oft-cited examples; the licenses have other incompatibilities that prevent remixing.
I was most interested in her explanation of what non-commercial really means, which reinforces what I’ve understood previously: it essentially says that if you’re not sure your use is non-commercial, you really ought to ask. Many people object to this situation, saying it’s little better than full copyright if you are going to impose this level of transaction cost on any use that might potentially be considered commercial, but I don’t agree. I think over time, sites or groups like the Consortium can build lists of affirmative statements of acceptable use, clearing an increasing number of use cases from transaction costs while still retaining a measure of control over how materials get used.
For instance, on tOFP, I specifically note that use at non-profit educational institutions that charge tuition is acceptable. It’s in the interest of the materials owner (in this case me) to keep the transaction costs down on those uses I’m not concerned with. I don’t want every instructor looking to use the materials in a class to contact me—it is a drain on my time, since I’ll always say yes. But I do want someone working for a for-profit writing workshop to contact me before using the materials. I may approve the use, but I want to have a look at how it’s going to be used first. As I come across other clear areas where I can make a blanket statement on the site about acceptable use, I’ll continue to include them.
The most compelling argument I’ve heard in the whole discussion is that the NC clause limits the possibilities for redistributing the materials offline in developing regions. Companies looking to distribute educational materials on CD, for instance, need to at least recover production and operating costs, and the NC clause might prohibit this. But again, it comes down to simply asking. I haven’t met anyone producing OER who feels that if the materials were distributed in Sub-Saharan Africa on CD, the distributor shouldn’t be able to recover costs. I do know of some who want to review potential distributors and have safeguards in place to ensure the materials get to end users at cost, however, and there are, I’m sure, some distributors I wouldn’t be comfortable with.
Are the transaction costs too high in this scenario? Perhaps, but that may be where umbrella organizations such as the OCW Consortium can play a role. With one entity representing large chunks of content, the amount of communication and deliberation can be significantly reduced, and institutions sharing content can pool oversight resources. There are certainly challenges around the NC clause, but it doesn’t yet appear to be the poison in the well that some seem to suggest.
From David Wiley’s forthcoming review of the learning object literature, a description of perhaps the earliest formulation of a learning object system, suggested by Ted Nelson in the ’60s:
The Xanadu design, which describes Nelson’s ideal hypertext system, calls for all content to be archived in a fixed, uneditable manner. Whenever a user desires to make changes to a piece of content previously stored in the system, those changes are stored separately, and users have ongoing access to both versions of the document. (The modern Connexions system developed at Rice University – http://cnx.org/ uses a similar system.)
Because a specific version or historical view of a specific document is guaranteed to exist in a specific location in perpetuity, it is possible to reuse portions of documents in Xanadu by reference. For example, if an author wanted to quote a portion of an existing document in a new document, instead of cutting and pasting the text into the document the author would reference the specific starting and stopping locations in the existing document, and the content from that existing document will be rendered dynamically in the new document whenever the new document was rendered. (This functionality is currently available as the open source Xanadu Transquoter – http://transliterature.org/transquoter/.)
Issues of granularity and context that plague current designers and reusers of learning objects are completely and elegantly sidestepped. Rather than requiring authors to design and build content with future reuse in mind, breaking their content into chunks, etc., in the Xanadu approach authors simply create and publish their content as they see fit. Other authors who desire to reuse portions of the content later on simply indicate the section of the existing document they wish to reuse, and this section is rendered dynamically within the new document later. Also, issues of context of learning objects are also completely avoided, as readers of the new document can always navigate back to the original document from which the snippet came, in order to better understand the context of the learning object. (This functionality is currently approximated in the Purple system – http://www.eekim.com/software/purple/purple.html.)
What’s not mentioned here is the way this system would impact the thorny licensing issues that have emerged in the OER world. Since the reused content is being rendered unedited here, the reuse is a compilation work, not a derivative, and each reused bit can carry with it whatever license it was made available under.
And while I won’t claim to be nearly as bright as Ted Nelson, I can’t help but hear echoes of the linking-based or blogosphere model for OER reuse I’ve been noodling on. Once again, I find myself on the cutting edge of fifty-year-old thinking.
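For the programmatically inclined, the core of the Xanadu idea–immutable storage plus quoting by reference rather than by copying–fits in a few lines. This is only a sketch; the class and function names are invented for illustration, and this is not how Xanadu or the Transquoter is actually implemented.

```python
# Minimal sketch of Xanadu-style transclusion by reference.

class Archive:
    """Append-only store: documents are never edited in place,
    so a (doc_id, start, stop) reference is stable forever."""
    def __init__(self):
        self._docs = {}

    def publish(self, doc_id, text):
        if doc_id in self._docs:
            raise ValueError("documents are immutable; publish a new id")
        self._docs[doc_id] = text

    def excerpt(self, doc_id, start, stop):
        return self._docs[doc_id][start:stop]

def render(archive, parts):
    """A 'document' is a list of literal strings and (doc_id, start, stop)
    references; references are resolved dynamically at render time."""
    out = []
    for part in parts:
        if isinstance(part, tuple):
            doc_id, start, stop = part
            out.append(archive.excerpt(doc_id, start, stop))
        else:
            out.append(part)
    return "".join(out)

archive = Archive()
archive.publish("barn-v1", "The barn was large. It was very old.")

# A new document quotes the original by reference, not by copying.
new_doc = ["As White wrote: ", ("barn-v1", 0, 19)]
print(render(archive, new_doc))  # As White wrote: The barn was large.
```

Note how the licensing point falls out of the structure: the quoted span is never copied into the new document, so each referenced bit can keep whatever license its source carries.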
While I am clearly remiss in noting the event, Notre Dame recently launched its OCW site. Congratulations to Alex Hahn and Terri Bays there for what has to be a record implementation. If I remember correctly, work on the project didn’t actually start until after the new year.
Notre Dame’s OCW also marks a significant milestone for the Consortium, one of the first big implementations built from the start on “reused” technologies, most significantly USU’s eduCommons and to a lesser extent the workflow database we use at MIT. I hope this marks the start of an easier and more cost-effective technology path for other projects. Kudos to USU’s John Dehlin as well for helping the Notre Dame team get eduCommons up and running.