Monthly Archives: September 2014

Customer relationship management (CRM) as a paradigm in distance education

I’ve been trying to gain more insight into the changes that are coming to Athabasca University with the new contact centre model that I discussed in an earlier post. After finding AU’s report, Evaluating the relative efficiencies and effectiveness of the contact centre and tutor models of learner support at Athabasca University, I think I have a better idea of what’s going on. AU is adopting a customer relationship management system, similar to those used to run the call centres of large businesses. These systems have been adapted for use in higher education over the past decade.

The report outlines AU’s system as follows:

Under the Contact Centre model, undergraduate student advisors, available six days a week, field initial queries via fax, telephone, or e-mail, and act as the first point of contact for accessing other advising services. Using flexible, shared, and secure contact databases, contact centre advisors handle issues for which they have established answers and refer course-related inquiries to the appropriate academic experts. … “Frequently-asked question” databases are also available to students and advisors to answer some academic queries. If applicable, students are referred to faculty and part-time academic experts for academic assistance.

The report refers to a keynote address by James Taylor of the University of Southern Queensland (USQ) at the 2001 ICDE World Conference on Open Learning and Distance Education. In the address, Taylor describes USQ’s e-University project, which seeks to automate the delivery of information to students. Taylor explains that tutors’ responses to students’ questions are added to a database. Subsequent students’ queries are first run through the database to see whether the question has already been answered. If so, the student is provided with that information. At the time of Taylor’s keynote address, tutors were involved in vetting the automated responses, but Taylor anticipated that this would soon become unnecessary. A tutor is required to interact with a student only when the answer does not exist in the database, and as he put it,

As the intelligent object databases become more comprehensive, enabling personalized, immediate responsiveness to an increasing number of student queries, the institutional variable costs for the provision of effective student support will tend towards zero.

By this he means that costs will remain essentially flat regardless of enrollment numbers, because students will draw answers from the database rather than requiring attention from tutors. The irony is that the more dedication and care tutors put into answering questions, the more they hasten their own obsolescence.
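The workflow Taylor describes (check the database first, involve a tutor only on a miss) and the cost claim that follows from it can be sketched in a few lines. This is my own toy illustration; the function names, parameters, and figures are hypothetical and do not come from the report or the keynote.

```python
# Toy sketch of the query-routing workflow Taylor describes: answered
# questions accumulate in a database, and a tutor is consulted only
# when no stored answer matches. (All names here are hypothetical.)

faq_db = {}  # question -> stored answer (the "intelligent object database")

def ask_tutor(question):
    # Stand-in for a human tutor composing a new answer.
    return f"Tutor's answer to: {question}"

def handle_query(question):
    if question in faq_db:
        return faq_db[question]   # automated response; no tutor involved
    answer = ask_tutor(question)  # tutor needed only on a database miss
    faq_db[question] = answer     # future students get the canned reply
    return answer

# The cost claim: total support cost is a fixed cost plus a per-query
# tutor cost for the fraction of queries the database cannot answer.
# As the hit rate approaches 1, cost no longer depends on enrollment.
def support_cost(n_students, queries_per_student, tutor_cost_per_query,
                 fixed_cost, hit_rate):
    misses = n_students * queries_per_student * (1 - hit_rate)
    return fixed_cost + misses * tutor_cost_per_query
```

With a perfect hit rate, `support_cost` returns the same figure whether enrollment is 1,000 or 50,000, which is what “variable costs … will tend towards zero” amounts to; and every call to `handle_query` that reaches a tutor makes the next identical query free, which is the obsolescence dynamic.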

The AU report emphasizes this outcome:

Most importantly, individually-tailored services can be provided to an increasing number of learners with the same economic resources by using knowledge management software to reduce the need for direct, human interaction in the teaching and learning process.

The AU report takes issue with the idea that student-teacher interactions are necessary. It points out that, according to Anderson’s (2003) equivalency theorem, only one of the following types of interaction is required: student-teacher, student-student, or student-content. As long as one of these is done well, the others can be eliminated entirely with no negative consequences for the student. I wonder at the wisdom of leaving a group of students to their own devices, sans teacher or content, as a method of education, but perhaps I’m missing some nuance of the scenario.

My question as I read through the report was, “Does this work?” The report was written to present the results of a survey of students who had taken part in the initial roll-out of the contact centre model at AU. The data presented are a list of attributes of the contact centre and tutor models, and students’ ratings of the importance of those attributes.  I’ve summarized the results below.  Keep in mind that although the report says the survey asked about the importance of each of these attributes, some of the attributes read as though students were asked to rate the outcome of their interaction with the model in question.  I’ve kept the original wording so you can decide for yourself what the survey items mean.

Study results

Even if it were clear whether the survey was evaluating the perceived importance of, or the perceived satisfaction with, various attributes, it still wouldn’t answer my questions.

One thing I would like to know is how students felt about having an advisor with no specific knowledge of the course material answering their academic (i.e., subject-matter-specific) questions by referring to a database. The survey reports (item 2) that 76% of the 300 students sampled rated “Importance of talking directly with an academic expert for academic support” as “Important” or “Very Important.” I wonder if that number would have been even higher if it had said “communicating directly” rather than “talking directly.” Very few of my students “talk” to me, because the vast majority communicate by email. (The prevalence of email could also have affected the outcome of item 3.) More importantly, this item doesn’t tell us whether students were happy with the amount of direct communication they had with an academic expert under one model or the other.

The report does not address how beneficial students felt either model to be in terms of their learning outcomes, and it does not provide any metrics such as differences in grades or retention. The closest it comes is addressing satisfaction with response times for academic assistance using each model (item 8). Read literally, the results are students’ rating of how important satisfaction is in this regard (i.e., how important it is that response times be satisfactory), but it is possible that students were actually asked how satisfied they were with response times.  Regardless, response time is not the same thing as help with learning.

Because this report did not tell me what I wanted to know, I spent the better part of a day searching for studies of similar systems and their outcomes for learners. Much to my chagrin, the only relevant thing I found was a paper by Coates et al. (2005) stating that there were no generalizable studies addressing this issue. The paper was very interesting nonetheless, and foreshadowed the present developments:

While ‘academic-free’ teaching may seem only a very distant prospect, major online delivery ventures already have business plans based on the employment of a limited number of academic staff who create content with the support of larger numbers of less expensive student support staff.

It also echoed my main concern: “What are the consequences of students increasingly seeking learning assistance from technology support staff rather than from teachers?”

The AU report concludes that there was no material difference in students’ satisfaction with response times between the two models, and that “[m]eans of first contact seems [sic] to be more effective under the Contact Centre model.” (If the last statement is based on item 4, it would appear the opposite is true.) And because the Contact Centre model saved over $60 per student, the report recommends:

Taken together, these results suggest that satisfactory educational experiences can be delivered under either model. Given this equivalency of outcomes, it is recommended that relative costs should primarily determine how student support is provided at Athabasca University.

After reading this report, my thoughts were (almost simultaneously) that response times are a dubious measure of how satisfactory an educational experience is, and that this point is likely moot for decision-making purposes at this stage. But maybe distance education is like the garment industry: at one time, it was a foregone conclusion that your clothes would be made to fit you. Now, most people aren’t particularly troubled by having to pick a ready-made garment from a clothing rack. It’s still a shirt, right? Why should getting answers from a database/advisor instead of from a teacher be any different?

In case you are wondering, yes, I do find the idea of purging humans from teaching to be disturbing. Aside from losing the human interactions and creative challenges that make teaching a meaningful undertaking, there are serious flaws in a system where students’ interactions with a course cannot be observed by someone who will ultimately be responsible for redesigning that course. In my present roles as tutor and course author, I have a very good idea of what is working and what isn’t, because of conversations I’ve had with my students. Sure, there are issues that commonly arise, and one could infer from the number of questions on a particular topic that something isn’t working. But if it were that easy to fix, I would have intuited the problems with my approach to begin with and avoided the issue altogether. It is only by communicating back and forth with students and asking specific questions that I can figure out exactly what’s going on. There is a diagnostic element in my relationship with students, and it is crucial both for assisting students on a one-to-one basis and for improving the course in general.

The contact centre model is about removing the human element as far as possible. (Wow! At one time that statement would have been entirely facetious!) If I were to participate in this system, I would have exactly one try to figure out what was going on before my canned response was distributed to all future students with a similar question. This model will result in lost access to valuable data, and these are data that can’t be recovered by a standard end-of-course evaluation by students.

There are some broader issues that I didn’t see addressed in this report, in any of the promotional materials I read about customer relationship management software in higher education, or in reports by administrative branches of different schools about the benefits of implementing these systems. What about confidentiality, for example? How are queries handled when they could identify a particular student? What happens if students share personal information in their questions? Does all of this become available for other students to access?

What about intellectual property rights? If students attach a file containing their own work when they query the system (which they can do), could their work be kept as an example when the question and response are added to the database? Who would use their work, and how?

What happens if there is an error in a canned response? What if I say “increase” when I meant “decrease,” or “north” when I meant “south”? If the advisor who is deciding which response the student should receive doesn’t have the background to know the difference, will the error ever be caught? Or will it rear its ugly head in perpetuity?

What happens if we learn something new that changes everything? Will this system trap the old understanding forever, like an insect in amber? That is perhaps the most chilling outcome of handing off the job of teaching to an entity unable to think critically about the information it dispenses.


See what the students think: Instructional Model Survey

Categories: Distance education and e-learning, The business of education

Syllabus strategy: The Quick Start Guide

The manual for my coffee maker starts on page 2 with the words “IMPORTANT SAFEGUARDS” in very large print. This section cautions me against touching the hot parts, or swimming with the coffee maker, and also advises “Do not place this appliance… in a heated oven.”

The first part of page 3 lists what not to do to the carafe (“Avoid sharp blows…”). The second half concerns the dangers of extension cords, and ends by offering the helpful advice that if the plug won’t fit in the electrical outlet, you should turn it over so the big prong goes in the big hole.

Page 4 is the Table of Contents, page 5 covers features (“Comfort Fit Carafe Handle,” “Lighted ON/OFF Indicator”), and finally on page 8 it gets around to the coffee-making process.

There are 17 pages, and the only part I ever paid much attention to is the cleaning instructions. I only looked those up because the coffee maker has a “Clean Cycle” button, suggesting that my usual non-automated procedure might not apply.

So, let’s review. I haven’t read the manual for my coffee maker cover-to-cover because:

  1. I deemed the first several pages not useful to me, and I concluded that the majority of the manual was likely to be the same.
  2. I know what I’m looking for, so I quickly scanned the manual to find those details, and filtered out everything else.
  3. It is 17 pages long.

I suspect that these points also sum up the reasons why my students won’t read the course syllabus. I haven’t electrocuted myself [with the coffee maker], so my assessment in point #1 was likely a reasonable one.  Not so for my students who don’t read the syllabus.

The ones I’m most concerned about are taking introductory physical and historical geology courses through the Centre for Continuing and Distance Education (CCDE) at the University of Saskatchewan. Their syllabi describe procedures that are unique to the distance education format, such as having to submit an application to write the final exam. More than one student has assumed that he or she could simply show up at the right time and place, and be permitted to write the exam, as with on-campus classes.

Syllabus for Geology 108/121

My syllabus design

For my face-to-face classes, I’ve designed a syllabus to address point #1 by putting the details that students are most likely to look for (e.g., textbook, grading scheme, contact information) as close to the beginning as possible. I’ve addressed point #2 by using sidebars with interesting images, facts, and quotations, to disrupt the scanning process. As for point #3, my syllabus is seven pages long, and that was a very tight squeeze.

For my CCDE courses, the CCDE puts together most of the syllabus following a modified version of the U of S syllabus template. They specify what information I am to supply, and indicate where I have the option to make additions or modifications. Whatever isn’t on the list stays as is. This arrangement allows me to add content, but it does not permit the kinds of modifications that I think are necessary to address points #1 and #2. The syllabi are 13 and 29 pages long, so there’s no help for point #3.

Quick start guide

The Quick Start Guide

These syllabi are not working, and I’m not allowed to fix them.  I fumed about this for a while, and then came up with an idea. Back when computer hardware still came with paper manuals, manufacturers often included a quick start guide. These were posters or pamphlets that showed simply and clearly the most basic steps needed to get up and running. I decided that my syllabus needed a quick start guide.

The quick start guide I came up with has some key features:

  • Fonts and layout that invite browsing, including images, plenty of white space, and text blocks of limited size
  • One place for key information (dates, contact information, assignment submission procedures) that is otherwise scattered throughout the syllabus
  • Details that are too important to leave to a chance encounter in the syllabus
  • Motivation to read the syllabus, including a “Top Ten Reasons to Read the Syllabus” list. The list combines humour with items in the syllabus that students usually ask about.

It is two pages long, so it prints on a single sheet of paper. It doesn’t look like any of the other course materials, and this is good, because curiosity motivates inquiry far better than obligation does. I’m trying it out for the first time this term, so we’ll see how it goes.


Categories: Challenges, Syllabus
