One of the interesting items of feedback we received on the Academic Toolbox Renewal site went like this:
“The criteria are exceedingly thin on pedagogical guiding principles. No effort seems to be put into the adequate mapping of degree learning expectations to the various points where adoption of technology could have an impact (positive/negative).”
The feedback was anonymous (which is fine), so I am unable to thank the person for raising the concern, other than to respond via this blog post.
In fact, a great deal of conversation has been happening about the extent to which we should include pedagogical criteria in the list of Toolbox Common Criteria that has been developed.
Overwhelmingly, the opinion has been that pedagogical criteria are not within the scope of this exercise because they are highly contextual to the discipline/subject being taught and the instructional design being used to teach it. For example, as stated in the feedback, “mapping of degree learning expectations” is highly variable, and can take the form of competencies, graduate attributes, critical thinking skills, etc.
Ultimately, it is the academic freedom, and responsibility, of instructors/programs/departments to set those contextual pedagogical criteria and to decide whether a particular technology will have a positive (or negative) impact on teaching and learning.
The goal of the Academic Toolbox Renewal exercise is to develop an enterprise ecosystem that allows instructors/programs/departments to make those decisions and choices, and then get their choices deployed, so long as they meet the Common Criteria (such as protecting personal information, intellectual property, accessibility, etc.). A particular tool or technology might be good in one context but not in another, but we still need to make sure that every tool meets certain core criteria regardless.
However, notwithstanding the opinion that pedagogical criteria were not part of the scope of the renewal exercise, there was general consensus that we should have pedagogical “values” present in the Common Criteria, as much for our own community as for vendors. Thus, Criteria P came to be, which reads:
“Can the solution provider provide research into the pedagogical value of the solution?
The instructional decision and the assessment of pedagogical value related to the use of a particular solution is at the discretion of our instructors/departments, however, solution providers should be able to demonstrate independent research that shows that the intended use of a tool is grounded in education theory and pedagogical intentions, i.e., evidence-based pedagogical benefits. The educational value of a tool should be explicit and relate to the needs of users. It is acceptable to consider that not all tools will be appropriate in all contexts, nor for all users, nor for all learning objectives and outcomes.”
The intent of this Criteria is to ensure that instructors/programs/departments have the information necessary to make informed pedagogical decisions, based on their own teaching objectives, instructional design, and learning expectations.
Again, thanks to the person who provided the feedback. If you are willing, I’d be happy to chat more about it (or to chat with anyone else who is interested). Drop me a note via the feedback form, or by email.