October 20, 2008
The SCORM 2.0 workshop in Pensacola
The SCORM Workshop held by LETSI (Learning Education Training Systems Interoperability) is over, and some clear direction emerged from the blizzard of whitepapers, informal submissions and comments over the last few months. I was very impressed by how fast they moved things forward in a few days.
The design process will be driven by use cases generated by the people who actually use SCORM applications in their work: Instructional designers, administrators, teachers, and other strategic adopters all over the world. This is significantly different from the way SCORM was originally designed, by a small community of LMS vendors and the U.S. Department of Defense, one of the BIG USERS of training and tracking.
There was a lot of acknowledgement that we don't just want to track or "interoperate" web-based interactions, but transactions that could occur just about anywhere: simulations, instructor-led or instructor-guided sessions, mobile, disconnected, etc. Fuzzy human-requiring (or at least AI-requiring) interactions should not be excluded. A key takeaway is that we can't limit functions to what currently exists; learners will be learning in ways we can't even imagine.
Of course backward compatibility is crucial: many of us who want more, more, more also have thousands of old-school SCORM courses in our libraries that we do not want to have to revamp for a new standard.
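To make the compatibility stakes concrete: those old-school courses are hard-wired to the SCORM 1.2 runtime API, a small set of JavaScript calls (LMSInitialize, LMSSetValue, LMSCommit, LMSFinish) that the content invokes against an API object the LMS exposes. The method names and data-model elements below are from the real SCORM 1.2 spec; the mock LMS class is my own illustration of what legacy content expects to find, not any actual implementation.

```typescript
// Minimal mock of the SCORM 1.2 runtime API surface that legacy SCOs call.
// Method names (LMSInitialize, LMSSetValue, etc.) are from the SCORM 1.2
// spec; this in-memory LMS is a sketch for illustration only.
class MockScorm12API {
  private data: Record<string, string> = {};
  private initialized = false;

  LMSInitialize(_arg: string): string {
    this.initialized = true;
    return "true";
  }

  LMSSetValue(element: string, value: string): string {
    if (!this.initialized) return "false"; // calls before init must fail
    this.data[element] = value;
    return "true";
  }

  LMSGetValue(element: string): string {
    return this.data[element] ?? "";
  }

  LMSCommit(_arg: string): string {
    return this.initialized ? "true" : "false";
  }

  LMSFinish(_arg: string): string {
    this.initialized = false;
    return "true";
  }
}

// The call pattern a typical old-school SCO makes on launch and exit:
const API = new MockScorm12API();
API.LMSInitialize("");
API.LMSSetValue("cmi.core.lesson_status", "completed");
API.LMSSetValue("cmi.core.score.raw", "92");
API.LMSCommit("");
console.log(API.LMSGetValue("cmi.core.lesson_status")); // "completed"
API.LMSFinish("");
```

Any SCORM 2.0 that breaks this handshake strands every course built against it, which is why backward compatibility kept coming up.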
The use of Web Services and a Service Oriented Architecture is likely in the new standard. This will (hopefully) facilitate interoperability and the ability to modularize applications. There was some question about whether this architectural approach has been a success in other areas, and a lot of discussion about what the business case for it might be, but those questions are in the process of being answered.
Also discussed was the fact that people are out there choosing to learn from many sources: social, non-authoritative, non-standard, web-based, informal, random. There is currently no way to track or analyze data about what they are doing or how they are doing it.
Every aspect of the current SCORM standard was examined closely and will continue to be. A useful refresher on the current basic assumptions of SCORM, and a suggestion for a new conceptualization of what SCORM is and should be, were given in a whitepaper by Allyn Radford.
Regarding SCORM 2.0, Radford suggested an approach where SCORM would support three separate domains which would remain agnostic to each other: Content, Communications, and Learning, Education and Training (LET) Support.
From his white paper:
"SCORM can be conceptualized and described in many ways. After the last few weeks of papers and interaction and seemingly conflicting requirements in some areas I now find it useful to think of SCORM as having the potential to serve diverse community needs through a focus in three separate 'domains' under which most other requirements can be categorized."
"...the design of the infrastructure and applications within it are declared out of scope where SCORM is concerned but the communications between applications/systems for the purposes of meeting LET requirements are in scope. By way of example, SCORM should not be dictating how a repository should store and manage content but it should provide for interaction between a repository and a front-end application. It could be said that cross domain scripting became a problem because content got mixed up with communications..."
During the workshop, the "ility" Reusability was reexamined. What exactly do people mean by it? Do we still want it? At what level should content be reusable? The individual asset? The whole SCO? What constitutes a SCO anyway?
The working group on Sequencing organized the submissions they had received into 3 general conceptual groups:
- Sequencing functions should be moved to the content developer's domain, within the SCO or within the manifest.
- There is still value to be had with sequencing being handled by the LMS, but the current spec is bad and should be replaced. The goal would be a rules-based sequencing engine controlled by the LMS, which would allow content developers to author sequencing rules using a finite, defined set of primitives.
- A big change in architecture needs to be made to make sequencing workable. Papers suggested a new, layered approach in which the higher levels let instructional designers work directly with sequencing, with a set of reusable object-oriented components handling the higher-order sequencing functions.
The group considered a possible dual solution: extend the current data model to fully accommodate giving control over to the content developer, while also creating a simple rules-based engine to be used by the LMS for those who prefer that type of workflow. I could see very different types of tools being developed to take advantage of these options.
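To give a feel for the rules-based option, here is a sketch of what an LMS-side engine working from a finite set of condition primitives might look like. Everything in it is hypothetical: the primitive names ("completed", "passed", "attempted"), the rule shape, and the function are invented for illustration; no such set was defined at the workshop.

```typescript
// Hypothetical rules-based sequencing engine: the LMS evaluates a small,
// finite set of condition primitives against tracked activity state to
// decide which activity to deliver next. All names here are illustrative.
type Primitive = "completed" | "passed" | "attempted";

interface Rule {
  ifActivity: string;   // activity whose tracked state is tested
  condition: Primitive; // one of the finite primitives
  thenNext: string;     // activity to deliver when the condition holds
}

type ActivityState = Partial<Record<Primitive, boolean>>;

function nextActivity(
  rules: Rule[],
  state: Record<string, ActivityState>,
  fallback: string
): string {
  // First matching rule wins; rule order is the author's priority order.
  for (const rule of rules) {
    if (state[rule.ifActivity]?.[rule.condition]) {
      return rule.thenNext;
    }
  }
  return fallback; // no rule fired: deliver the default activity
}

// Example: the learner passed the pretest, so the engine skips the lesson.
const rules: Rule[] = [
  { ifActivity: "pretest", condition: "passed", thenNext: "assessment" },
  { ifActivity: "pretest", condition: "completed", thenNext: "lesson-1" },
];
const state = { pretest: { completed: true, passed: true } };
console.log(nextActivity(rules, state, "pretest")); // "assessment"
```

The appeal of this shape is that an authoring tool only has to emit a flat list of rules over a closed vocabulary, rather than the current spec's deeply nested activity trees, which is exactly the kind of divergent tooling I could imagine growing up around the two options.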
How can you take part in shaping the new SCORM?
Look for LETSI at:
Your input is being requested on use cases, functionality, prioritization, etc.
So, if you use SCORM, or think your organization may use SCORM in the future, stand up and be counted!