What we talk about when we talk about automated assessment
If there is one innovation driving the xMOOC* phenomenon, it’s the emergence of scalable automated assessments. The ability to provide feedback to thousands of students at once is a big part of what makes these courses scalable. A robust peer learning community is another aspect of this, but one for a later discussion. Anyway, to my non-technical self, there are three predominant flavors of automated assessment:
• “Check yourself” quiz questions, often randomized in some way to try to control for cheating. So far, these seem to be the only kind of assessment in the Udacity course I am taking.
• Simulations such as the circuitry sandbox used for 6.002x, which allow for open-ended manipulation of variables. While these kinds of assessments are more sophisticated, the underlying technologies seem to be more one-off and to require more development effort than the “check yourself” tools.
• True adaptive learning environments along the lines of those used by Carnegie Mellon’s Open Learning Initiative. I know OLI is not usually discussed in the xMOOC conversation, but everything I understand about the program indicates it should be. These seem to be another level of complexity beyond simulations.
That’s my assessment of the assessments. Would love to hear others’ takes.
* I am adopting Stephen Downes’ convention: xMOOC for the Coursera/Udacity/MITx variant, and cMOOC for the original connectivist model.
“the emergence of scalable automated assessments”
I’m more and more of the opinion that we should replace “emergence” with “expectation”.
Having learned a little more about the assessment systems for many of the major MOOC players, I am inclined to agree.
[…] generally rely on lectures and frequent assessments. While MOOCs do encourage communities of students to participate and work with each other to learn […]
[…] Carson, Steve. “What we talk about when we talk about automated assessment” 23 July […]