I’ve been trying to gain more insight into the changes that are coming to Athabasca University with the new contact centre model that I discussed in an earlier post. After finding AU’s report, Evaluating the relative efficiencies and effectiveness of the contact centre and tutor models of learner support at Athabasca University, I think I have a better idea of what’s going on. AU is adopting a customer relationship management system, similar to those that are used to run the call centres of large businesses. These systems have been adapted for use in higher education over the past decade.
The report outlines AU’s system as follows:
Under the Contact Centre model, undergraduate student advisors, available six days a week, field initial queries via fax, telephone, or e-mail, and act as the first point of contact for accessing other advising services. Using flexible, shared, and secure contact databases, contact centre advisors handle issues for which they have established answers and refer course-related inquiries to the appropriate academic experts. … “Frequently-asked question” databases are also available to students and advisors to answer some academic queries. If applicable, students are referred to faculty and part-time academic experts for academic assistance.
The report refers to a keynote address by James Taylor of the University of Southern Queensland at the 2001 ICDE World Conference on Open Learning and Distance Education. In the address, Taylor describes USQ’s e-University project, which seeks to automate the delivery of information to students. Taylor explains that tutors’ responses to students’ questions are added to a database. Subsequent students’ queries are first run through the database to see if the question has already been answered. If so, the student is provided with that information. At the time of Taylor’s keynote address, tutors were involved in vetting the automated responses, but Taylor anticipated that this would soon become unnecessary. A tutor is required to interact with a student only when the answer does not already exist in the database, and as he put it,
As the intelligent object databases become more comprehensive, enabling personalized, immediate responsiveness to an increasing number of student queries, the institutional variable costs for the provision of effective student support will tend towards zero.
By this he means that, regardless of enrollment numbers, costs will stay essentially flat, because additional students will draw answers from the database rather than needing attention from tutors. The irony is that the more dedication and care tutors put into answering questions, the more they hasten their own obsolescence.
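The workflow Taylor describes (check the database first, involve a tutor only on a miss, then capture the tutor’s answer for reuse) can be sketched in a few lines. This is purely my illustration: the class, the naive string matching, and all the names are hypothetical, not taken from USQ’s or AU’s actual systems.

```python
class FAQSupport:
    """Hypothetical sketch of an automated student-support loop:
    answered questions accumulate in a database, and a human tutor
    is consulted only when no stored answer matches."""

    def __init__(self, tutor):
        self.answers = {}      # normalized question -> stored answer
        self.tutor = tutor     # fallback: a callable that answers anew
        self.tutor_calls = 0   # how often a human was actually needed

    @staticmethod
    def _normalize(question):
        # Crude matching for illustration: case-insensitive,
        # punctuation stripped. A real system would be fuzzier.
        return "".join(
            ch for ch in question.lower() if ch.isalnum() or ch.isspace()
        ).strip()

    def ask(self, question):
        key = self._normalize(question)
        if key in self.answers:
            # Database hit: no tutor involved, marginal cost near zero.
            return self.answers[key]
        # Database miss: a tutor must respond, and that response
        # is captured so future askers never reach the tutor.
        self.tutor_calls += 1
        answer = self.tutor(question)
        self.answers[key] = answer
        return answer
```

Note that every answer the tutor gives shrinks the tutor’s future role, which is exactly the dynamic behind Taylor’s claim that variable costs “tend towards zero.”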
The AU report emphasizes this outcome:
Most importantly, individually-tailored services can be provided to an increasing number of learners with the same economic resources by using knowledge management software to reduce the need for direct, human interaction in the teaching and learning process.
The AU report takes issue with the idea that student-teacher interactions are necessary. They point out that, according to the equivalency theorem of Anderson (2003), only one of the following types of interaction is required: student-teacher, student-student, or student-content. As long as one of these is done well, the others can be eliminated entirely with no negative consequences for the student. I wonder at the wisdom of leaving a group of students to their own devices, sans teacher or content, as a method of education, but perhaps I’m missing some nuance of the scenario.
My question as I read through the report was, “Does this work?” The report was written to present the results of a survey of students who had taken part in the initial roll-out of the contact centre model at AU. The data presented are a list of attributes of the contact centre and tutor models, and students’ ratings of the importance of those attributes. I’ve summarized the results below. Keep in mind that although the report says the survey asked about the importance of each of these attributes, some of the attributes read as though students were asked to rate the outcome of their interaction with the model in question. I’ve kept the original wording so you can decide for yourself what the survey items mean.
Even if it were clear whether the survey was evaluating the perceived importance of, or the perceived satisfaction with, various attributes, it still wouldn’t answer my questions.
One thing I would like to know is how students felt about having an advisor with no specific knowledge of the course material answering their academic (i.e., course-specific) questions by referring to a database. The survey reports (item 2) that 76% of the 300 students sampled rated “Importance of talking directly with an academic expert for academic support” as “Important” or “Very Important.” I wonder if that number would have been even higher had the item said “communicating directly” rather than “talking directly.” Very few of my students “talk” to me; the vast majority communicate by email. (The prevalence of email could also have affected the outcome of item 3.) More importantly, this item doesn’t tell us whether students were happy with the amount of direct communication they had with an academic expert under one model or the other.
The report does not address how beneficial students felt either model to be in terms of their learning outcomes, and it does not provide any metrics such as differences in grades or retention. The closest it comes is in addressing satisfaction with response times for academic assistance under each model (item 8). Read literally, the results are students’ ratings of how important satisfaction is in this regard (i.e., how important it is that response times be satisfactory), but it is possible that students were actually asked how satisfied they were with response times. Regardless, response time is not the same thing as help with learning.
Because this report did not tell me what I wanted to know, I spent the better part of a day searching for studies of similar systems and their outcomes for learners. Much to my chagrin, the only relevant thing I found was a paper by Coates et al. (2005) stating that there were no generalizable studies addressing this issue. The paper was very interesting nonetheless, and it foreshadowed the present developments:
While ‘academic-free’ teaching may seem only a very distant prospect, major online delivery ventures already have business plans based on the employment of a limited number of academic staff who create content with the support of larger numbers of less expensive student support staff.
It also echoed my main concern: “What are the consequences of students increasingly seeking learning assistance from technology support staff rather than from teachers?”
The AU report concludes that there was no material difference in students’ satisfaction with response times between the two models, and that “[m]eans of first contact seems [sic] to be more effective under the Contact Centre model.” (If the last statement is based on item 4, it would appear the opposite is true.) And because there was a savings of over $60 per student under the Contact Centre model, the report concludes:
Taken together, these results suggest that satisfactory educational experiences can be delivered under either model. Given this equivalency of outcomes, it is recommended that relative costs should primarily determine how student support is provided at Athabasca University.
After reading this report, my thoughts were (almost simultaneously) that response times are a dubious measure of how satisfactory an educational experience is, and that this point is likely moot for decision-making purposes at this stage. But maybe distance education is like the garment industry: at one time, it was a foregone conclusion that your clothes would be made to fit you. Now, most people aren’t particularly troubled by having to pick a ready-made garment from a clothing rack. It’s still a shirt, right? Why should getting answers from a database/advisor instead of from a teacher be any different?
In case you are wondering, yes, I do find the idea of purging humans from teaching to be disturbing. Aside from losing the human interactions and creative challenges that make teaching a meaningful undertaking, there are serious flaws in a system where students’ interactions with a course cannot be observed by someone who will ultimately be responsible for redesigning that course.

In my present roles as tutor and course author, I have a very good idea of what is working and what isn’t, because of conversations I’ve had with my students. Sure, there are issues that commonly arise, and one could infer from the number of questions on a particular topic that something isn’t working. But if it were that easy to fix, I would have intuited the problems with my approach to begin with and avoided the issue altogether. It is only by communicating back and forth with students and asking specific questions that I can figure out exactly what’s going on. There is a diagnostic element in my relationship with students, and it is a crucial element for assisting students on a one-to-one basis, and for improving the course in general.

The contact centre model is about removing the human element as far as possible. (Wow! At one time that statement would have been entirely facetious!) If I were to participate in this system, I would have exactly one try to figure out what was going on before my canned response would be distributed to all future students with a similar question. This model will result in lost access to valuable data, and these are data that can’t be recovered by a standard end-of-course evaluation by students.
There are some broader issues that I didn’t see addressed in this report, or in any of the promotional materials I read about the use of customer relationship management software in higher education, or in reports by administrative branches of different schools about the benefits of implementing these systems. What about confidentiality, for example? How are queries handled if they could identify a particular student? What happens if students share personal information in their question? Does all of this become available for other students to access?
What about intellectual property rights? If students attach a file containing their own work when they query the system (which they can do), could their work be kept as an example when the question and response are added to the database? Who would use their work, and how?
What happens if there is an error in a canned response? What if I say “increase” when I meant “decrease,” or “north” when I meant “south”? If the advisor who is deciding on which response the student should receive doesn’t have the background to know the difference, will the error ever be caught? Or will it rear its ugly head in perpetuity?
What happens if we learn something new that changes everything? Will this system trap the old understanding forever, like an insect in amber? That is perhaps the most chilling outcome of handing off the job of teaching to an entity unable to think critically about the information it dispenses.
See what the students think: Instructional Model Survey