A Methodology
Broadly speaking, my approach to learning assessment systems design and development is iterative, feedback-driven, and never “done”. I begin by building an understanding of the intended audience(s) of learners, teachers, and assessors, so that I have a clearer sense of the shape of the system and how it might change over time. I try not to assume that a particular set of methodologies or theories of learning will work simply because they have worked before. Some call this a kind of “eclecticism” in learning and assessment systems design.
Much of what we consider learning is inherently social, so it is often helpful to frame the arena of intended teaching and learning as taking place within any number of communities of practice, or whatever terminology we prefer for a particular set of people with particular roles in a particular functional group. Just as important is understanding how those people need to change the way they work (their practice) in order to demonstrate the desired outcomes of growth or improvement.
This social approach to communities (of practice) also serves as a foundation for maintaining an inclusive and empathetic approach to design for learning, with a focus on identity, participation, non-participation, legitimate peripheral participation, modes of belonging (engagement, alignment, imagination), negotiation, and boundary brokerage as fundamental parts of interpersonal communication and interaction for individual and collective improvement (through practice).
Put simply, all learning (and teaching) is purposeful communication, both interpersonal and intrapersonal.
Furthermore, my standing position across all teaching and learning methodologies, regardless of learner age, ability, or context, is that learning and assessment are one and the same. One cannot learn without assessment in some form.
A Detailed Approach to Learning Assessment Systems Design
Over twenty years of practice, I have refined a workable model for learning assessment systems design (LASD). It begins with a negotiation of learning intentions: goals, objectives, and outcomes. Outcomes are what we expect people to do to show they are meeting objectives, which are in turn aligned with goals. These shift as we learn more about what we really want to teach. This is where something like Bloom’s revised taxonomy helps: it lets us write objectives and outcomes as active, verb-based descriptions that learners can demonstrate partially or fully, with some level of repetition as practice over time if necessary.
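To make that concrete, here is a minimal sketch in Python of how such a goal-objective-outcome hierarchy might be represented. The domain, names, and fields are illustrative assumptions, not a fixed schema I prescribe.

```python
from dataclasses import dataclass, field

@dataclass
class Outcome:
    """A single observable, verb-based learning outcome."""
    verb: str          # active verb, e.g. drawn from Bloom's revised taxonomy
    description: str   # what the learner actually does to demonstrate it

@dataclass
class Objective:
    statement: str
    outcomes: list[Outcome] = field(default_factory=list)

@dataclass
class Goal:
    statement: str
    objectives: list[Objective] = field(default_factory=list)

# Hypothetical example: one goal, one objective, two demonstrable outcomes
goal = Goal(
    statement="Learners communicate technical findings clearly.",
    objectives=[
        Objective(
            statement="Summarize experimental results for a non-specialist audience.",
            outcomes=[
                Outcome(verb="explain", description="explain a result in plain language"),
                Outcome(verb="construct", description="construct a one-page summary with a chart"),
            ],
        )
    ],
)
```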
As we consider the definitions of objectives and outcomes, it is helpful to keep the evaluation-assessment-measurement (EAM) chain front of mind. We cannot measure learning itself. We can, sometimes, make good assessment decisions about learning activities. We can evaluate whether or not we think learning has occurred against our defined goals. To evaluate successfully, we must make good assessment decisions. To make valid assessment decisions, we must use reliable measurement instrumentation that gives us useful data when we need it. This approach applies whether humans alone are making decisions or many of these “lower order” assessment decisions are delegated to any number of algorithms.
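One way to picture the EAM chain is as a pipeline of small functions: measurement turns observations into data, assessment makes a decision from that data, and evaluation judges the overall result against goals. The thresholds and data shapes below are illustrative assumptions only.

```python
from statistics import mean

def measure(observations: list[float]) -> dict:
    """Measurement: reliable instrumentation reduces raw observations to data."""
    return {"n": len(observations), "mean_score": mean(observations)}

def assess(measurement: dict, threshold: float = 0.8) -> bool:
    """Assessment: a decision about a learning activity, based on measurement data."""
    return measurement["n"] >= 3 and measurement["mean_score"] >= threshold

def evaluate(assessments: list[bool]) -> str:
    """Evaluation: a judgment about whether learning (against goals) has occurred."""
    if not assessments:
        return "no assessment decisions yet"
    return "goal likely met" if sum(assessments) / len(assessments) >= 0.75 else "goal not yet met"
```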
So, the trick is: what’s the best way to define those objectives and outcomes so they are observable and measurable—so we’ll know what it really means when a learner conducts activities (responses, solutions) that demonstrate partial or total completion? Furthermore, what are the best sets of experiences we can design to give our learners the opportunity to demonstrate all our intended outcomes over time—perhaps in the order we’ve decided they should be demonstrated?
Regardless of the technologies used to deliver these experiences, we can conceptualize each experience using a “four space” problem-tool-solution-response (PTSR) model for how learners show evidence of progress toward outcomes and objectives. We can align these four spaces with the real-world modes learners will be in while they are expected to perceive and practice the knowledge, skills, and abilities associated with the objectives and outcomes we’ve defined.
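As a minimal sketch, a single experience described in the four PTSR spaces might look like the following; the scenario and field contents are hypothetical and are only meant to show the shape of the model.

```python
from dataclasses import dataclass

@dataclass
class ExperienceSpaces:
    """One learning experience described in the four PTSR spaces (illustrative)."""
    problem: str    # the situation the learner is asked to address
    tool: str       # what the learner works with (software, instrument, medium)
    solution: str   # the expected or acceptable end state
    response: str   # what the learner actually produces or does

experience = ExperienceSpaces(
    problem="Diagnose why a simulated patient monitor is mis-reporting heart rate",
    tool="Web-based monitor simulator",
    solution="Correct sensor placement identified and documented",
    response="The learner's submitted diagnosis and written rationale",
)
```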
I prefer to use the Structure of Observed Learning Outcomes (SOLO) taxonomy to work with subject matter experts to create agreed-upon structures, or hierarchical progressions, of observable learning outcomes. This helps us all define what improvement actually looks like. The process can start early, since it is often useful to begin ranking the difficulty or sophistication of outcomes while we are still refining the definitions of what they really represent.
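The SOLO levels themselves (prestructural through extended abstract) give us a ready-made ordering, so ranking outcomes by sophistication can be as simple as tagging and sorting. The outcomes below are hypothetical examples.

```python
# SOLO levels in ascending order of structural complexity (Biggs & Collis)
SOLO_LEVELS = [
    "prestructural",
    "unistructural",
    "multistructural",
    "relational",
    "extended abstract",
]

def solo_rank(level: str) -> int:
    """Return the position of a SOLO level in the progression (0 = lowest)."""
    return SOLO_LEVELS.index(level)

# Hypothetical outcomes tagged with agreed-upon SOLO levels, then ordered
outcomes = [
    ("compare two feedback strategies and justify a choice", "relational"),
    ("list the parts of an xAPI statement", "unistructural"),
    ("describe several feedback strategies", "multistructural"),
]
for description, level in sorted(outcomes, key=lambda o: solo_rank(o[1])):
    print(f"{level:>18}: {description}")
```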
If the intended learning experience delivery platform allows for the necessary data collection, it is also quite helpful to design learning assessment measurement data structures using xAPI (the Experience API, also known as Tin Can), the successor to SCORM. This actor-verb-object (or “person did thing”) approach to data structures is beneficial in many ways, especially because it is standardized and both human- and machine-readable. Again, working with subject matter experts, I like to create thousands of xAPI statements: micro and macro “events” occurring throughout any individual or collaborative learning journey, all directly aligned with the established hierarchies of outcomes (and objectives and goals).
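For readers unfamiliar with the format, a single xAPI statement looks roughly like the following, shown here as a Python dictionary mirroring the JSON payload a Learning Record Store would receive. The learner, activity ID, and timestamp are made up; the verb IRI is one of the standard ADL verbs.

```python
import json

statement = {
    "actor": {
        "objectType": "Agent",
        "name": "Example Learner",
        "mbox": "mailto:learner@example.org",
    },
    "verb": {
        "id": "http://adlnet.gov/expapi/verbs/completed",
        "display": {"en-US": "completed"},
    },
    "object": {
        "objectType": "Activity",
        "id": "https://example.org/activities/monitor-diagnosis-scenario",
        "definition": {"name": {"en-US": "Monitor diagnosis scenario"}},
    },
    "timestamp": "2024-05-01T14:32:00Z",
}

print(json.dumps(statement, indent=2))
```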
These xAPI statements define the expected measurement data, which is the fuel of unobtrusive assessment, which in turn allows for individualized feedback to learners across multiple time scales, at a relevant frequency. In other words, we design a system intended to watch quietly, let someone learn through practice, and then intervene as necessary to help them understand how they are doing (and why).
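As one sketch of what “watch quietly, intervene as necessary” can mean in code, here is a simple rule that surfaces feedback only after repeated attempts without a completion. The specific verbs and threshold are illustrative assumptions, not a fixed part of my method.

```python
def needs_feedback(statements: list[dict], activity_id: str,
                   min_attempts: int = 3) -> bool:
    """Illustrative unobtrusive-assessment rule over a learner's xAPI statements.

    If the learner has attempted an activity several times without a
    'completed' statement, surface feedback; otherwise stay out of the way.
    """
    attempts = [s for s in statements
                if s["object"]["id"] == activity_id
                and s["verb"]["id"].endswith("/attempted")]
    completions = [s for s in statements
                   if s["object"]["id"] == activity_id
                   and s["verb"]["id"].endswith("/completed")]
    return len(completions) == 0 and len(attempts) >= min_attempts
```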
Once we understand the best way to use measurement and assessment to generate appropriate feedback (human and algorithmic) for individuals and groups of learners, we can determine the best-fit delivery modalities for curriculum, instruction, and learning tasks as experiences.
Then we align these experiences with the available delivery modalities, and consider expanding the modality options, especially if such expansion proves critical in the current phase of design, development, and implementation of system components.
We start building hybrid (networked and offline) experiences and media.
Then: we test and refine.
We finish the round of development and production, and we implement the agreed deliverables in version-controlled fashion.
We observe the performance of people, machines, and information as actors in our system.
We maintain and repair the system as necessary.
We improve the system regularly, at agreed-upon intervals.