Plagiarism-proof assignments: The Up-Goer Five Challenge

Ok, so there’s probably no such thing as a plagiarism-proof assignment, but I think I’ve got a reasonable approximation thereof.

It originated with my frustration with the perpetual struggle to have students in my distance education classes answer questions in their own words. My students are using their textbooks to answer questions, and many seem to feel that a textbook is the exception to the rule when it comes to plagiarism. Some simply don’t understand that they’re doing anything wrong. From experience, I can tell you that many people who are not my students also see it that way, and complaining about it is a great way to be branded as unreasonable. The problem, as I’ve documented before, is that students who copy from their textbook also tend to fail the class. After last term, I’ve decided that it’s in my best interest to consume alcohol before grading assignments. I’m not allowed to ignore plagiarism, but what I don’t see…

Absent blissful ignorance, the only way to deal with plagiarism (without causing myself a variety of problems) is to change the assignments so that plagiarism isn’t possible. Now, if you’ve attempted to do this, you know it isn’t easy. A search online will give you tips like having students put themselves in the position of a person experiencing a historical event and explain their perspective on the matter. That’s something students (most likely) can’t copy from the internet. But suggestions like that are not especially helpful when the topic is how volcanoes work. (Although now that I think about it, “Imagine you are an olivine crystal in a magma chamber…”)

The solution came from my online source of comfort, xkcd. Randall Munroe, the creator of the webcomic, set himself the challenge of labeling a diagram of NASA’s Saturn V rocket (Up Goer Five) with only the 1000 most commonly used words in the English language. Soon after, members of the geoscience community took up the challenge of explaining their fields of research in the 1000 most commonly used words. Here are two examples from a blog post by hydrogeologist Anne Jefferson. Anne writes:

“So I decided to see if I could explain urban hydrology and why I study it using only the words in the list. Here’s what I came up with:

I study how water moves in cities and other places. Water is under the ground and on top of it, and when we build things we change where it can go and how fast it gets there. This can lead to problems like wet and broken roads and houses. Our roads, houses, and animals, can also add bad things to the water. My job is to figure out what we have done to the water and how to help make it better. I also help people learn how to care about water and land. This might seem like a sad job, because often the water is very bad and we are not going to make things perfect, but I like knowing that I’m helping make things better.

Science, teach, observe, measure, buildings, and any synonym for waste/feces were among the words I had to write my way around. If I hadn’t had access to “water”, I might have given up in despair.

But my challenge was nothing compared to that faced by Chris, as he explained paleomagnetism without the word magnet:

I study what rocks tell us about how the ground moves and changes over many, many (more than a hundred times a hundred times a hundred) years. I can do this because little bits hidden inside a rock can remember where they were when they formed, and can give us their memories if we ask them in the right way. From these memories we can tell how far and how fast the rocks have moved, and if they have been turned around, in the time since they were made. It is important to know the stories of the past that rocks tell, because it is only by understanding that story that we really understand the place where we live, how to find the things that we need to live there, and how it might change in the years to come. We also need to know these things so we can find the places where the ground can move or shake very fast, which can be very bad for us and our homes.”

Is that brilliant, or what?! To make it even better, Theo Sanderson developed a text editor to check whether only those words have been used. This is what happened when I typed part of the introduction to the chapter on volcanoes:

Up-Goer Five text editor

Yes, fortunately it has the word “rock.”
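The core of a checker like Sanderson’s is simple enough to sketch. Below is a minimal Python version; the file name, the tiny demonstration word set, and the word-splitting rule are my own assumptions, not details of his implementation:

```python
import re

def load_allowed(path="ten_hundred_words.txt"):
    """Read the allowed-word list, one word per line."""
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def check_text(text, allowed):
    """Return the words in `text` that are not on the allowed list."""
    words = re.findall(r"[a-z']+", text.lower())
    return sorted({w for w in words if w not in allowed})

# Demonstration with a tiny inline list instead of the real file:
allowed = {"i", "study", "how", "water", "moves", "rock"}
print(check_text("I study how magma moves.", allowed))  # -> ['magma']
```

A real editor would flag the offending words as you type, but the check itself is just this set-membership test.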

I decided to test-drive this with my class. I gave them the option of answering their assignment questions in this way. It’s difficult, so they got bonus points for doing it. A handful attempted it, and that was probably the most fun I’ve ever had grading assignments. If you’d like to give this kind of assignment a shot, there are a few things to keep in mind:

  • Students (and colleagues) may be skeptical. Explain that the exercise requires a solid knowledge of the subject matter (in contrast to paraphrasing the textbook) and is a very effective way for students to diagnose whether they know what they think they know. In my books, that gives it a high score in the learning per unit time category.
  • The text editor has some work-arounds, like putting single quotes around a word, or adding “Mr” or “Mrs” in front of a word (e.g., Mr Magma). Head those off at the pass, or you’ll get “But you didn’t say we couldn’t!”
  • You may wish to allow certain words for the assignment or for specific questions, depending on your goals. For example, if I were less diabolical, I might consider allowing the use of “lava.” The other reason for not allowing “lava” is that I want to be sure they know what it means. In contrast, I probably wouldn’t make them struggle with “North America.”
  • Make it clear that simple language does not mean simple answers. I found that students tended to give imprecise answers that didn’t address important details. I don’t think they were trying to cut corners- they just didn’t think it was necessary. If I were to do this again I would give them a rubric with examples of what is and isn’t adequate.
  • Recommend that they write out the key points of their answers in normal language first, in a separate document, and then attempt to translate them.
  • Suggest that they use analogies or comparisons if they are stuck. For example, Randall Munroe refers to hydrogen as “the kind of air that once burned a big sky bag.”
  • Make the assignment shorter than you might otherwise, and focus on key objectives. Doing an assignment this way is a lot of work, and time consuming.
  • And finally, (as with all assignments) try it yourself first.

In that spirit:

I like to make stories with numbers to learn what happens when things go into the air that make air hot. Very old rocks from deep under water say things that help make number stories. The number stories are not perfect but they still tell us important ideas about how our home works. Some day the number stories about how old air got hot might come true again, but maybe if people know the old number stories, they will stop hurting the air. If they don’t stop hurting the air, it will be sad for us because our home will change in bad ways.

Categories: Assessment, Challenges, Distance education and e-learning, Learning strategies, Learning technologies, Teaching strategies

Of Dogs and Collective Agreements

This post is a kind of public service announcement for sessional lecturers at the University of Saskatchewan, so if you aren’t especially interested in labour relations at the U of S, you might want to come back another time. On the other hand, if you prefer a data-based approach to cynicism, then read on…

Once upon a time there was a union newsletter that said the following:

“Members who have taught more than 10 x 3 credit units should be paid at level 2 rates; those who have taught more than 20 x 3 credit units should be paid at level 3 rates. Adjustments should be made automatically by the employer.

Courses taught while on regular faculty appointments or while on an ASPA contract, including as facilitator for an online course, should count in your progression through the levels, but it may be necessary to inform Human Resources of this part of your teaching experience.” [emphasis original]   

“Hey!” the sessional lecturer said, “the majority of my work is through ASPA as a facilitator, and I must have accumulated enough credit units to get past level 1 by now. I’d better check.”

So the sessional lecturer added up her credit units and found that she had surpassed the requirement for level 3 pay rates. She double-checked her employment records, and confirmed that she was actually paid at level 1 rates.

“I’d better look into this,” she said. “It must be an oversight by HR.” And so the emailing began.

The sessional lecturer contacted HR only to find that they weren’t sure whether the ASPA work counted, and she began to doubt her understanding of the newsletter. They said they would check and get back to her. Two months later she got the news: she would be changed to level 3 as of the new year.

“That’s great!” she said. “But that means some of my earlier work should have been paid at level 2 or level 3. Will I be compensated for that?”

“Of course!” said HR. “It’s in the collective agreement, and we value our employees, so we will take care of that right away!”

No, HR didn’t say that. If they did, this wouldn’t be much of a story.

What they actually said was, “Well no, we don’t do that. And besides- we don’t actually check ASPA records unless someone asks. You didn’t ask us soon enough to check that our records are in order, so we don’t have to pay you. It’s in your collective agreement. You should have read it.”

The sessional lecturer was speechless. She thought to herself, “The agreement says they have to count ASPA work, but they choose not to check on it unless someone raises the issue… that’s not at all what I understood from the newsletter. I’d better read the collective agreement to see if it actually says that’s ok.”

So she made a cup of tea, and curled up with two dogs and her computer, and prepared to slog through pages and pages of legalese. To her surprise, the agreement wasn’t difficult to read at all. She hit paydirt right in the Definitions section:

“SERVICE POINTS provide a measure of the teaching performed as an employee at the University of Saskatchewan and are used to determine the appropriate basic stipend. Each service point represents six credit units of teaching as the principal instructor of a credit course or courses and may include, but is not limited to, teaching as:

1) a sessional lecturer,

2) an applied music instructor (See Article 16.04),

3) a member of faculty in a term position as set out in Article 14.01, or,

4) an administrative or professional staff member at the University of Saskatchewan.

Sessional lecturers who have accumulated up to five (but not equal to five) service points will be paid at a Level I rate; sessional lecturers who have accumulated five and up to ten (but not equal to ten) service points will be paid at a Level II rate; and sessional lecturers who have accumulated ten or more service points, and retired faculty members appointed as sessional lecturers, will be paid at a Level III rate.”

“It’s right there!” she said. “Number 4 on the list refers to ASPA work. I wonder why it took so long for them to decide that it counted?”

Then she thought, “I wonder if HR was right about not having to pay me.” She read the collective agreement, read it again, and then put down the computer. She turned to her dog and said, “I just don’t see it. I don’t see anything anywhere.” Her dog said, “That’s odd. Scratch my ears?”

In a feat of remarkable dexterity, she patted one dog’s head, rubbed the other’s tummy, and shook her head all at the same time. “I can’t believe it,” she said. “Maybe something elsewhere says otherwise, but everything I can find suggests that USask is UScrewing me.”

Suddenly she stopped rubbing and patting- an appalling thought had occurred to her: “If sessional lecturers think HR is keeping track of their ASPA work, but HR has made a point of not doing it unless they are asked to… If HR doesn’t have to pay anyone if they avoid checking for long enough… that’s a system designed for UScrewing!”

Brought back to reality by prods from two cold noses, the sessional lecturer resumed her patting and rubbing. She sorted through her options, and concluded that if the University were not troubled by the ethics of its system, it was a hopeless cause. She thought back to a blog post she had read about a self-respect threshold, and then got up to make another cup of tea.

After evicting a dog from her spot on the couch, she settled in to read again, this time with her copy of Trading for Canadians for Dummies. She smiled.


Epilogue

You may wonder if the sessional lecturer ever contacted her union. That’s what her dogs recommended. In fact, she did, but she got the impression that they would prefer she went away quietly.

When she explained this to her dogs, one put down the tennis ball she was chewing and said, “So the words you were reading before don’t actually mean what they say? People words are confusing.” Her other dog began to wonder whether people words like “sit” and “stay” were also open to interpretation.

Sensing the potential for chaos, the sessional lecturer answered, “It depends on who the people are and why they say the words.” The tennis ball connoisseur put down her ball again. “That makes no sense at all. But then again, I’ve never had a collective agreement.” A pensive look came across the sessional lecturer’s face. “Maybe I haven’t either.”


Categories: The business of education

Time: The final frontier

Timefleet Academy logo: a winged hourglass made of ammonites

A logo begging for a t-shirt

Here it is: the final incarnation of my design project for Design and Development of Educational Technology– the Timefleet Academy. It’s a tool to assist undergraduate students of historical geology with remembering events in Earth history, and how those events fit into the Geological Time Scale. Much of their work consists of memorizing a long list of complicated happenings. While memorizing is not exactly at the top of Bloom’s Taxonomy (it’s exactly at the bottom, in fact), it is necessary. One could approach this task by reading the textbook over and over, and hoping something will stick, but I think there’s a better way.

I envision a tool with three key features:

  • A timeline that incorporates the Geological Time Scale, and “zooms” to show events that occur over widely varying timescales
  • The ability to add events from a pre-existing library onto a custom timeline
  • Assessments to help students focus their efforts effectively

Here’s an introduction to the problem, and a sketch of my solution. If your sensors start to detect something familiar about this enterprise then you’re as much of a nerd as I am.

Timefleet Academy is based on the constructionist idea that building is good for learning. Making a representation of something (in this case, Earth history) is a way of distilling its essential features. That means analyzing what those features are, how they are related, and expressing them explicitly. Ultimately this translates to the intuitive notion that it is best to approach a complex topic by breaking it into small digestible pieces.

Geological Time Scale

This is what you get to memorize.

As challenging as the Geological Time Scale is to memorize, it does lend itself to “chunking” because the Time Scale comes already subdivided. Even better, those subdivisions are designed to reflect meaningful stages (and therefore meaningful groupings of events) in Earth history.

There is an official convention regarding the colours in the Geological Time Scale (so no, it wasn’t my choice to put red, fuchsia, and salmon next to each other), and I’ve used it on the interface for two reasons. One is that it’s employed on diagrams and geological maps, so students might as well become familiar with it. The other is that students can take advantage of colour association as a memory tool.

Assessments

Assessments are a key difference between Timefleet Academy and other “zoomable” timelines that already exist. The assessments would come in two forms.

1. Self assessment checklists

These allow users to document their progress through the list of resources attached to individual events. This might seem like a trivial housekeeping matter, but mentally constructing a map of what resources have been used costs cognitive capital. Answering the question “Have I been here already?” has a non-zero cognitive load, and one that doesn’t move the user toward the goal of learning historical geology.

2. Drag-and-drop drills

The second kind of assessment involves drill-type exercises where users drag and drop objects representing events, geological time periods, and dates, to place them in the right order. The algorithm governing how drills are set would take into account the following:

  • The user’s previous errors: It would allow for more practice in those areas.
  • Changes in the user’s skill level: It would adjust by making tasks more or less challenging. For example, the difficulty level could be increased by going from arranging events in chronological order to arranging them chronologically and situating them in the correct spots on the Geological Time Scale. Difficulty could also be increased by placing time limits on the exercise, requiring that the user apply acquired knowledge rather than looking up the information.
  • The context of events: If drills tend to focus on the same group of events, the result could be overly contextualized knowledge. In other words, if the student were repeatedly drilled on the order of events A, B, and C separately from the order of events D, E, and F, and were then asked to put A, B, and E in the right order, there could be a problem.
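The selection logic sketched in these bullets amounts to a weighted sample: recently missed items are drawn more often, while the pool stays mixed so knowledge doesn’t become tied to one familiar cluster. A minimal illustration in Python, with the event names, error counts, and weighting rule all invented for the example:

```python
import random

def pick_drill_events(events, error_counts, n=4, seed=None):
    """Choose n events for a drill, weighting past errors more heavily.

    events:       list of event names
    error_counts: dict mapping event name -> number of past mistakes
    """
    rng = random.Random(seed)
    pool = list(events)
    # Weight 1 for a never-missed event, +1 per past error.
    weights = [1 + error_counts.get(e, 0) for e in pool]
    chosen = []
    for _ in range(min(n, len(pool))):
        # Sample one index, then remove it so events aren't repeated.
        pick = rng.choices(range(len(pool)), weights=weights, k=1)[0]
        chosen.append(pool.pop(pick))
        weights.pop(pick)
    return chosen

events = ["Great Oxidation", "Cambrian explosion", "Permian extinction",
          "Chicxulub impact", "Pleistocene glaciation"]
errors = {"Permian extinction": 3, "Great Oxidation": 1}
print(pick_drill_events(events, errors, n=3, seed=42))
```

A production version would also decay old errors and track per-interval skill, but the weighted draw is the heart of it.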

The feedback from drills would consist of correct answers and errors being indicated at the end of each exercise, and a marker placed on the timeline to indicate where (when) errors have occurred. Students would earn points toward a promotion within Timefleet Academy for completing drills, and for correct answers.

Who wouldn’t want a cool new uniform?

How do you know if it works?

1. Did learning outcomes improve?

This could be tested by comparing the performance of a group of students who used the tool to that of a control group who didn’t. Performance measures could be results from a multiple choice exam. They could also be scores derived from an interview with each student, where he or she is asked questions to gauge not only how well events are recalled, but also whether he or she can explain the larger context of an event, including causal relationships. It would be interesting to compare exam and interview scores for students within each group to see how closely the results of a recall test track the results of a test focused on understanding.

For the group of students who have access to the tool, it would be important to have a measure of how they used it, and how often. For example, did they use it once and lose interest? Did they use it for organizing events but not do drills? Or did they work at it regularly, adding events and testing themselves throughout? Without this information, it would be difficult to know how to interpret differences (or a lack of differences) in performance between the two groups.

2. Do they want to use it?

This is an important indicator of whether students perceive that the tool is helpful, but also of their experience interacting with it. Students could be surveyed about which parts of the tool were useful and which weren’t, and asked for feedback about what changes would make it better. (The option to print out parts of the timeline, maybe?) They could be asked specific questions about aspects of the interface, such as whether their drill results were displayed effectively, whether the controls were easy to use, etc. It might be useful to ask them if they would use the tool again, either in its current form, or if it were redesigned to take into account their feedback.

Timefleet in the bigger picture

Writing a test

All set to pass the test of time

Timefleet Academy is ostensibly a tool to aid in memorizing the details of Earth history, but it actually does something more than that. It introduces students to a systematic way of learning- by identifying key features within an ocean of details, organizing those features, and then testing their knowledge.

The point system rewards students for testing their knowledge regardless of whether they get all of the answers right. The message is twofold: testing one’s knowledge is valuable because it provides information about what to do next; and testing one’s knowledge counts as progress toward a goal even if you don’t get the right answers every time. Maybe it’s threefold: if you do enough tests, eventually you get a cape, and a shirt with stars on it.

Categories: Assessment, Learning strategies, Learning technologies

Building assessments into a timeline tool for historical geology

In my last post I wrote about the challenges faced by undergraduate students in introductory historical geology. They are required to know an overwhelming breadth and depth of information about the history of the Earth, from 4.5 billion years ago to present. They must learn not only what events occurred, but also the name of the interval of the Geological Time Scale in which they occurred. This is a very difficult task! The Geological Time Scale itself is a challenge to memorize, and the events that fit on it often involve processes, locations, and organisms that students have never heard of. If you want to see a case of cognitive overload, just talk to a historical geology student.

My proposed solution was a scalable timeline. A regular old timeline is helpful for organizing events in chronological order, and it could be modified to include the divisions of the Geological Time Scale. However, a regular old timeline is simply not up to the task of displaying the relevant timescales of geological events, which vary over at least six orders of magnitude. It is also not up to the job of displaying the sheer number of events that students must know about. A scalable timeline would solve those problems by allowing students to zoom in and out to view different timescales, and by changing which events are shown depending on the scale. It would work just like Google Maps, where the type and amount of geographic information that is displayed depends on the map scale.

Doesn’t that exist already?

My first round of Google searches didn’t turn anything up, but more recently round two hit paydirt… sort of. Timeglider is a tool for making “zoomable” timelines, and allows the user to embed media. It also has the catchphrase “It’s like Google Maps but for time,” which made me wonder if my last post was re-inventing the wheel.

ChronoZoom was designed with Big History in mind, which is consistent with the range of timescales that I would need. I experimented with this tool a little, and discovered that users can build timelines by adding exhibits, which appear as nodes on the timeline. Users can zoom in on an exhibit and access images, videos, etc.

If I had to choose, I’d use ChronoZoom because it’s free, and because students could create their own timelines and incorporate timelines or exhibits that I’ve made. Both Timeglider and ChronoZoom would help students organize information, and ChronoZoom already has a Geological Time Scale, but there are still features missing. One of those features is adaptive formative assessments that are responsive to students’ choices about what is important to learn.

Learning goals

There is a larger narrative in geological history, involving intricate feedbacks and cause-and-effect relationships, but very little of that richness is apparent until students have done a lot of memorization. My timeline tool would assist students in the following learning goals:

  • Memorize the Geological Time Scale and the dates of key event boundaries.
  • Memorize key events in Earth history.
  • Place individual geological events in the larger context of Earth history.

These learning goals fit right at the bottom of Bloom’s Taxonomy, but that doesn’t mean they aren’t important to accomplish. Students can’t move on to understanding why things happened without first having a good feel for the events that took place. It’s like taking a photo with the lens cap on- you just don’t get the picture.

And why assessments?

This tool is intended to help students organize and visualize the information they must remember, but they still have to practice remembering it in order for it to stick. Formative assessments would give students that practice, and students could use the feedback from those assessments to gauge their knowledge and direct their study to the greatest advantage.

How it would work

The assessments would address events on a timeline that the students construct for themselves (My Timeline) by selecting from many hundreds of events on a Master Timeline. The figure below is a mock-up of what My Timeline would look like when the scale is limited to a relatively narrow 140 million year window. When students select events, related resources (videos, images, etc.) would also become accessible through My Timeline.

Timeline interface

A mock-up of My Timeline. A and B are pop-up windows designed to show students which resources they have used. C is access to practice exercises, and D is how the tool would show students where they need more work.

Students would benefit from two kinds of assessments:

Completion checklists and charts

The problem with having abundant resources is keeping track of which ones you’ve already looked at. Checklists and charts would show students which resources they have used. A mouse-over of a particular event would pop up a small window (A in the image above) with the date (or range of dates) of the event and a pie chart with sections representing the number of resources that are available for that event. A mouse-over on the pie chart would pop up a hyperlinked list of those resources (B). Students would choose whether to check off a particular resource once they are satisfied that they have what they need from it, or perhaps flag it if they find it especially helpful. If a resource is relevant for more than one event, and shows up on multiple checklists, then checks and flags would appear for all instances.
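The “checks and flags appear for all instances” behaviour falls out naturally if resource state is stored once, keyed by resource, with events holding only references to it. A minimal sketch (the class, event names, and resource names are all made up for illustration):

```python
class ResourceTracker:
    """Track used/flagged state per resource, shared across all events."""

    def __init__(self):
        self.state = {}            # resource id -> {"used": ..., "flagged": ...}
        self.event_resources = {}  # event name -> list of resource ids

    def attach(self, event, resource):
        """Associate a resource with an event's checklist."""
        self.event_resources.setdefault(event, []).append(resource)
        self.state.setdefault(resource, {"used": False, "flagged": False})

    def mark_used(self, resource, flagged=False):
        """Check off a resource (optionally flagging it as helpful)."""
        self.state[resource]["used"] = True
        self.state[resource]["flagged"] = flagged

    def checklist(self, event):
        """The pop-up list for one event: (resource, used, flagged) tuples."""
        return [(r, self.state[r]["used"], self.state[r]["flagged"])
                for r in self.event_resources.get(event, [])]

tracker = ResourceTracker()
# One video relevant to two different events:
tracker.attach("Permian extinction", "video: Siberian Traps")
tracker.attach("Siberian Traps eruption", "video: Siberian Traps")
tracker.mark_used("video: Siberian Traps", flagged=True)
# The check and flag now show up on both events' checklists.
print(tracker.checklist("Permian extinction"))
```

Because the state lives in one place, checking a resource from either event’s pop-up updates every checklist it appears on, with no synchronization step.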

Drag-and-drop exercises

Some of my students construct elaborate sets of flashcards so they can arrange events or geological time intervals spatially. Why not save them the trouble of making flashcards?

Students could opt to practice remembering by visiting the Timefleet Academy (C). They would do exercises such as:

  • Dragging coloured blocks labeled with Geological Time Scale divisions to put them in the right order
  • Dragging events to either put them in the correct chronological order (lower difficulty) or to position them in the correct location on the timeline (higher difficulty)
  • Dragging dates from a bank of options onto the Geological Time Scale or onto specific events (very difficult)

Upon completion of each drag-and-drop exercise, students would see which parts of their responses were correct. Problem areas (for example, a geological time period in the wrong order) would be marked on My Timeline with a white outline (D) so students could review those events in the appropriate context. White outlines could be cleared directly by the student, or by successfully completing Timefleet Academy exercises with those components.

Drag-and-drop exercises would include some randomly selected content, as well as items that the student has had difficulty with in the past. The difficulty of the exercises could be scaled to respond to increasing skill, either by varying the type of drag-and-drop task, or by placing time limits on the exercise. Because a student could become very familiar with one stretch of geologic time without knowing others very well, the tool would have to detect a change in skill level and respond accordingly.

A bit of motivation

Students would earn points for doing Timefleet Academy exercises. To reward persistence, they would earn points for completing the exercises, in addition to points for correct responses. Points would accumulate toward a progression through Timefleet Academy ranks, beginning with Time Cadet, and culminating in Time Overlord (and who wouldn’t want to be a Time Overlord?). Progressive ranks could be illustrated with an avatar that changes appearance, or a badging system. As much as I’d like to show you some avatars and badges, I am flat out of creativity, so I will leave it to your imagination for now.

Categories: Assessment, Learning strategies, Learning technologies

How to make sense of historical geology

Imagine that someone changed the clock on you, breaking the day into irregular blocks, and giving the blocks names and symbols in no systematic way. Now imagine that you are given a list of events to memorize- activities of people you don’t know at places with which you are unfamiliar:

Fantasy clock with radiolarians

No, they’re not aliens. They’re radiolarians. And aren’t you glad you don’t use this to tell time?

During the Early Fizz, Pierre Bezukhov and Cthulhu squared off on Callisto. By Middle to Late Fizz, Pierre Bezukhov had the advantage, so Cthulhu migrated to Kore. At the Fizz-Zoot boundary, Dmitry Dokhturov and a shoggoth appeared on Europa, but both went extinct by the end of the Zoot, likely due to a lack of habitat. Beginning in the Flap, Callisto, Europa, and Taygete began a collision that culminated in their merger by mid-Flap. Land bridges that formed allowed the migration of Anna Mikhaylovna Drubetskaya from her original habitat on Taygete, leaving a niche open, and allowing Nyarlathotep to diversify.

Now stuff that in your head so I can ask you about it on an exam, in no particular order.

If you are familiar with the moons of Jupiter, the characters of War and Peace, or the fiction of H. P. Lovecraft, then you might have a chance at remembering some of the names, but their relationships would probably be new to you. This is the scenario faced by students taking introductory historical geology.

The clock they have to work with is the Geological Time Scale– a way geologists have of carving up Earth’s 4.5 billion years of history into chunks that reflect key events or phases. The chunks are not the same size, and there are chunks within chunks. The exact dates when each chunk (or sub-chunk) starts and ends shift every few years as geologists get better information about the timing of key events that define the boundaries. There is no system- you just have to memorize it, and you’d better do it in a hurry, because everything you will learn about the Earth’s history will be described in terms of the Geological Time Scale.

Aside from learning this new clock, students must also learn the names of extinct and extant organisms, the names and histories of various continents and oceans, extant or otherwise, and the geological processes that have influenced those organisms, continents, and oceans. Did I mention that students usually have to learn a range of dates, because we can’t be sure of the actual date, and/or because the event happened over millions or hundreds of millions of years? Oh, and one more thing- the dates of different events will overlap to varying degrees, and the story lines will be almost impossible to disentangle from each other. But don’t worry- the exam is multiple choice.

The obvious way to organize all of this information is a timeline, and most textbooks have a version of the Geological Time Scale with some dates and key events marked on it. The problem is that to construct a timeline with all of the information that students need, you would have to devote a book to that alone, so most of these are just Geological Time Scales with some pretty pictures attached. The Geologic Time Spiral (below), showing Earth history spiraling away from the beginning of time, is a classic, and fascinating to look at, but of limited use to my students.  The durations of the events pictured are gross approximations, there is no description of those events, and there is no sense of the spatial changes that occurred. The timeline also glosses over the multiple story lines in Earth history, and the complex interconnections between story lines.

A spiral diagram illustrating the evolution of life on Earth through geological time

Geological Time Spiral: The names of units within the Geological Time Scale are written along the edges.

How to fix it

Make it adapt to scaling

What’s needed is a timeline in electronic format, but not just any timeline- it should be a scalable timeline. Users must be able to zoom out to see big-picture, long-term history, or zoom in to see the finer details. It would be the temporal analog to Google Maps, where the details which appear, including the divisions of the Geological Time Scale itself, depend on the scale. This would solve the problem of the necessarily limited amount of information in current timelines, but it would also do something more important. Users would be able to easily go back and forth between scales to understand how events are situated in a broader context. This is what you do every time you are planning a route to a new address- look at the larger map of the city to see the main thoroughfares, then zoom in to the streets within a particular neighbourhood, then zoom out again to remind yourself where the neighbourhood is relative to the freeway. Then you might zoom in again to the exact address, and depending on the tool you are using, you might look at a picture of the building that you are headed to.
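The scale-dependent behaviour described above can be sketched in a few lines. This is a hypothetical data model of my own devising, not an existing tool: each event carries a display rank, and zooming in lowers the rank threshold so that finer detail appears, Google-Maps style. The events and ranks below are illustrative.

```python
# Hypothetical data model: each event has an age (millions of years ago)
# and a display rank (1 = headline event, 3 = fine detail).
events = [
    {"name": "Formation of Earth", "age_ma": 4540, "rank": 1},
    {"name": "Cambrian explosion", "age_ma": 540, "rank": 1},
    {"name": "End-Permian extinction", "age_ma": 252, "rank": 2},
    {"name": "First flowering plants", "age_ma": 130, "rank": 3},
]

def visible(events, span_ma):
    """Return the events to draw for a viewport spanning span_ma million
    years: the wider the view, the higher the bar for inclusion."""
    if span_ma > 1000:
        max_rank = 1          # zoomed way out: only the headline events
    elif span_ma > 100:
        max_rank = 2
    else:
        max_rank = 3          # zoomed in: show the fine detail
    return [e for e in events if e["rank"] <= max_rank]

# Viewing all of Earth history shows only the headline events;
# zooming in to a 50-million-year window reveals everything.
print([e["name"] for e in visible(events, span_ma=4540)])
print([e["name"] for e in visible(events, span_ma=50)])
```

A real implementation would also filter by whether an event falls inside the visible time window, but the principle is the same: what you see depends on the scale.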

Show cause and effect relationships

In Google Maps, you can see how streets are connected to each other. In Earth history, individual story lines are interconnected in the same way, and the complexity of city streets is probably not a bad analogy for the complexity of these interconnections. The scalable timeline would also show branches that link one story line to other stories, so a user could follow a single timeline, or choose to follow a branch and see how another series of events was impacted by the first story line. Because of how complex the interactions are, these branches would also have to appear or disappear depending on the scale, and depending on which timeline is being viewed.

Add multimedia

Like Google Maps, where resources like photographs, or information like phone numbers are linked to particular points in space, the timeline would have resources linked to a particular point in time, to a broader range of events, or to branches that connect related events. There could be pictures of the organisms that existed, or videos to explain a concept or expand on the details of an event. This would replace the limited images in the timelines that exist at present.

Add a responsive map

The scalable timeline should have an easy way to view the geographic location of a particular event, if it happens to occur in a specific place. This would require an omnipresent world map that lights up in the right spots to correspond to a particular event, but which also changes to reflect the shifting positions of the continents. The map would show where an event happened, but also where climate zones are, where glaciers are present, and where other key contemporaneous events occurred.

Get hypothetical

Hypothetical timelines could be introduced to consider alternative histories. For example, what would have happened if Earth had never been hit by an extraterrestrial object 65 million years ago? Would we even exist if mammals hadn’t been able to take over niches left open by the extinction of the dinosaurs? Or would the dinosaurs have gone extinct anyway for some other reason? Hypothetical timelines could be places to host discussions.

No more Brontosauruses

Brontosaurus illustration from 1896

Brontosaurus, redlined. Skeleton illustration appeared in “The Dinosaurs of North America” by O. C. Marsh (1896)

A timeline of this nature would be much easier to update as new data become available, or as the thinking about Earth history changes. In the popular Golden Guide to Fossils, which some of my students use, there still exists an entry for Brontosaurus. Brontosauruses were invented by mistake in 1879 because Othniel Charles Marsh was in a rush to publish and didn’t realize that his new dinosaur find was just an adult version of a juvenile dinosaur he had already documented, called Apatosaurus. The iconic dinosaur that came to be known as Brontosaurus was actually Apatosaurus with the wrong head attached. The Brontosaurus story could be corrected with a few keystrokes and turned into a teachable moment about the challenges of interpreting paleontological data.

Why would it work?

It would work because narratives are better than lists. The standard timeline offers a way to summarize some of the events in Earth’s history, and to express temporal relationships as spatial ones, but it doesn’t go far enough to make the events into a meaningful whole. A list of seemingly isolated events is just that- a list. It takes context and meaning to make it a story, and stories are things we can remember and understand. There’s a reason you need to write down your grocery list to remember it, but you don’t need notes to be able to relate a relatively trivial story about what your dog did the other day. Whether you get all of the groceries you need or not will likely have a bigger impact on your life than if you can remember your dog story, but if you remember the dog story, it’s because it means something to you. A scalable timeline is a dog story rather than a grocery list because it will make it easy to examine the relationships between events in Earth history, and to synthesize essential details into a meaningful whole.


Blinkie and the Valley of Confusion

One of my projects these days is MOOCery. MOOCs are Massive Open Online Courses- courses offered for free online, and open to everyone. The formats these courses take vary, but they often include video lectures, discussion forums, and assignments. I’m working on two courses right now, and it was very tempting to sign up for more.

One of the courses is Design and Development of Educational Technology, offered by MIT. I’ve been curious about Ed Tech since taking Introduction to Learning Technologies at the University of Saskatchewan, and the course seemed a good opportunity for further exploration. Part of the course involves reflecting on both old and new educational technologies, and I have a bit of homework in that regard: comparing and contrasting a new and an old technology.

The New: Demystifying the Valley of Confusion

A simplified geological map

Geological map of the Valley of Confusion

First-year geology students are asked to do a very complex task: view a two-dimensional representation of the intersection of a complex geometric surface with three-dimensional subsurface structures (also called a geological map), and understand what the heck they are looking at.

Here’s an example of a simplified geological map. In this image, the coloured patches represent different rock layers that you would see if you could strip away all of the soil and expose the rocks beneath. If I were to ask you how those rock layers were arranged within the Earth, you might say that they were folded. It certainly looks that way. In fact, they are not folded at all. They are in flat layers, all tilting to the east at an angle of 30 degrees. Students are expected to arrive at that interpretation by looking at maps like this one.

The map in the image above is actually a birds-eye view of this:

A three-dimensional block model of the Valley of Confusion

Valley of Confusion in three dimensions: not as confusing

Here you can see on the side of the block that the rocks are arranged in flat, tilted layers, not curved and folded ones. The reason they look folded on the map is that the surface is actually a valley. The numbered black and grey lines in the first image represent the elevations at different points in the valley.

This is why Visible Geology is so useful. It is an online tool that allows users to construct and view three-dimensional models of geological structures. Users start with a blank cube, then add layers to represent different rocks. They can manipulate the layers by tilting them, folding them, or faulting them. The cubes can be rotated to allow a view of all of the sides. Users can also print their models, cut them out, and fold them up into cubes. Visible Geology is also a good example of what Seymour Papert referred to as a low floor: it is very easy to get started with, and users get results immediately. I created both of the images above in under 5 minutes using Visible Geology.

Visible Geology is particularly interesting because it began as a project by a student who was learning about geological maps and geological structures. He happened to have programming skills in MATLAB which allowed him to build visualizations to help himself and his peers. From there, he developed Visible Geology into an online tool.

The goal of Visible Geology is to make it easy to visualize the three-dimensional structures formed by rocks. The lament I hear most often from my first-years is, “I just can’t see it!” Visible Geology solves that problem by allowing students to explore different configurations and scenarios. It is engaging because it has a user-friendly interface: it is colourful, its functions are intuitive, and it is not at all intimidating (unless you find large buttons with pastel-coloured illustrations intimidating). Students can learn from Visible Geology by experimenting, but they would also benefit from attempting to reproduce the geological maps and structures in their assignments.

The Old: Blinkie Computes

owl calculator

Blinkie (The National Semiconductor Quiz Kid)

At some point in the early 1980s, I received a National Semiconductor Quiz Kid as a gift. This toy (henceforth referred to as Blinkie) was a calculator that looked like an owl. Blinkie didn’t work like a regular calculator, though. When you entered a mathematical operation (“4 + 3 =”) he calculated the answer, but he wouldn’t tell you what it was. You had to supply the answer yourself. If your answer was correct, he blinked a green LED eye at you. If it was wrong, he winked a red LED eye. Blinkie came with a book of math questions, and was intended as a drill tool for children learning their pluses, take-aways, timeses and divide-bys.
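Blinkie’s entire interaction model fits in a few lines. A playful sketch (not the actual firmware, obviously):

```python
# Blinkie's whole "API": compute the answer, but only report whether
# the child's answer matches. eval() is fine here because the problem
# strings come from Blinkie's own question book, not untrusted input.
def blinkie(problem, your_answer):
    correct = eval(problem)          # e.g. "4 + 3" -> 7
    if your_answer == correct:
        return "green blink"         # the approving green LED eye
    return "red wink"                # the reproachful red LED eye

print(blinkie("4 + 3", 7))   # green blink
print(blinkie("6 * 9", 42))  # red wink
```

The key design choice, as the next paragraph suggests, is what Blinkie withholds: he never reveals the answer, so checking your homework against him never feels like cheating.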

I’m sure Blinkie was effective as a math teaching tool (I can add, after all), but that isn’t my main recollection of Blinkie. I liked Blinkie because the keys made a satisfying click when you pressed them. I liked Blinkie because if you turned out the lights and hid under the covers, then covered his eyes with your thumbs, the red eye would glow through your thumb, but the green one wouldn’t. Most of all, I liked Blinkie because I could use him to check my math homework, and not feel that I was cheating. So, although Blinkie was intended to teach me math (and perhaps save my mom some time making flash cards), his most substantial benefit was to reduce my anxiety. Once you do that, the math comes a lot easier anyway.

Blinkie Versus the Internet (or, Bringing An Owl To A Gunfight)

Comparing Blinkie to Visible Geology is not like comparing apples to oranges. Comparing apples to oranges is much easier than finding characteristics that Blinkie and Visible Geology share. They are very different tools.

For one thing, their approaches are very different. Blinkie was a tool for practicing math skills. He told you whether you got the answer right, or whether you got it wrong. Visible Geology is about exploring. It allows the user to be creative, and to experiment risk-free. It is about “what if?”

The motivation for creating these tools was also very different. I suspect that at least some of the motivation for building Blinkie was that new microprocessors had been developed, and that development had to be funded commercially. National Semiconductor had a hammer, and was looking for a nail. In contrast, Visible Geology was created by someone who experienced a need for a visualization tool, and built what he needed.

The most obvious contrast is the difference in technology, but it is also the least relevant. Blinkie is old technology, but back in the early 1980s, he was pretty cool… heck, anything with lights and buttons was cool back then. The point is, he did his job, and I didn’t feel that I was missing out on anything. Today, as amazing as Visible Geology might be to a Blinkie-era person, it is nothing special technologically to the eighteen- to twenty-year-olds that I usually deal with. It does its job.

I was excited to discover both of these technologies, but for different reasons. Those reasons are related to context. I faced Blinkie as a learner. Blinkie was novel, and so was math. He and math were intertwined in a new tactile and visual experience. As far as I was concerned, Blinkie wasn’t for teaching me math, he was entertainment, and I just happened to be learning math at the same time. My perspective on Visible Geology is as a teacher. It is a tool that I’m excited about because it fills a definite need that my students have to see the three-dimensional structures they are working with. When I play with it, the purpose is to create a teachable object. There isn’t the same element of novelty and discovery as there was with Blinkie, back when math was something new.

I think that for my students, Visible Geology will be a Blinkie experience. They are discovering geology, and Visible Geology will be entertainment inseparable from learning. They can use it as a way to “check their homework” by comparing their expectations with the results of combining geological structures with different surfaces. It may even lessen the anxiety they feel when geological maps just aren’t making sense. Nevertheless, there is one thing they will be missing: Visible Geology will never make their thumbs glow.


Is plagiarism funny?

Generally I would say no, but I’ve tried to make an exception with a new video project.

A recent onslaught of assignments highlighted the futility (yet again) of what amounts to grading the textbook. My brain started churning out cartoons about the ridiculous ways students attempt to skirt the requirement of having to answer in their own words. Jeff Foxworthy’s “You might be a redneck if…” came to mind, and my productivity screeched to a halt: “It might not be in your own words if…” (Does that make Jeff Foxworthy my muse?)

The point was to get students thinking about plagiarism without taking a “thou shalt not” approach. I plan to build additional resources, including a video and/or handout with tips on how to answer in one’s own words. I like to point out that the textbook is one way to say something, but not the only way, and not necessarily the best way. And it isn’t about some pedantic exercise in avoiding a specific set of words- it’s about turning words on a page into knowledge… and that doesn’t happen unless you think about what those words mean.

This project is shorter than my last project, which could make the difference between students watching it and not. Another difference is that it consists of text, music, and my own drawings… so no fifteen takes required to get a voice-over without stumbling or stuttering. The drawings were the fun part. While I have at some point generated drawings and paintings that look like actual objects and people in the real world, doing so quickly and consistently is another matter. I came up with scribble people after searching for examples of line figures that others have drawn, and then doing my best to create something else. At one time I would have opted for stick figures, but after discovering Randall Munroe’s brilliant webcomic, xkcd… well, you wouldn’t try to out-drip Jackson Pollock, now would you?

In the process of making this video, I learned some things that might come in handy for anyone trying a similar project.

Timing

If you’ve made the slides, then you know way more about them than a first-time viewer will, so you’re probably not the best judge of how fast the slides should move along. What worked great was having someone else advance through the slides using “Rehearse Timings” under the “Slide Show” tab in PowerPoint. This records how long each slide is viewed. Not only did I get an idea of how much time viewers might need, it also became very clear which slides would benefit from a redesign. Set the recorded durations as the slide transition timings, run the show with “Use Timings” selected, and then it is a simple matter of starting and ending a screen recording.

Music

I am not musically astute. If you ask me about Country and Western music released between 1950 and 1969, or Tom Waits, or Leonard Cohen, I might be able to help you. Lyrics to “The Battle of New Orleans?” Got you covered. Otherwise, you’d best ask my husband, who has a much larger musical vocabulary, and likes to ask me “Who sings this?” when he knows full well I can’t answer. But I needed music, so what to do?

Where to get it

I learned that there is a lot of royalty-free music online, and a subset of that is free royalty-free music. There is the Free Music Archive, which, amongst other things, has recordings from Edison cylinders! How cool is that?! I also found Kevin MacLeod’s website, where he offers his music under a Creative Commons license. His music is searchable by genre as well as by “feel” (bright, bouncy, driving, mysterious, etc.). Each song comes with an excellent description, suitable for the musically challenged, which makes it clear what an appropriate context would be for that song.

How to use it

Odds are, your song and your video won’t be the same length. If the song is longer than your video, it is easy enough to fade out the volume at a convenient spot. If the song is shorter, it’s more difficult to maintain continuity. Some songs come with versions that are suitable for looping, or in versions of different lengths. You can buy a little extra time by delaying the start of the song slightly, and fading in the volume, and then fading out the volume at the end. You could perform audio surgery and create a Frankensong… but if amputating musical body parts and stitching them back together again isn’t for you, then throwing continuity out the window might be the better choice. I didn’t know any of these things when I started, and it took a lot of experimenting to get something I could live with. Hopefully villagers with pitchforks and torches won’t be a problem.
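For the curious, the fade arithmetic itself is simple. Here is a sketch operating on raw audio samples (floats between -1 and 1); working with actual music files would require an audio library, but the volume ramps are the same idea:

```python
# Fade a track in and out by scaling sample amplitudes with linear ramps.
# `rate` is the sample rate (samples per second of audio).
def fade(samples, rate, fade_in_s=1.0, fade_out_s=2.0):
    n_in = int(rate * fade_in_s)     # samples in the fade-in ramp
    n_out = int(rate * fade_out_s)   # samples in the fade-out ramp
    out = list(samples)
    for i in range(min(n_in, len(out))):
        out[i] *= i / n_in           # ramp volume up from silence
    for i in range(min(n_out, len(out))):
        out[-1 - i] *= i / n_out     # ramp volume down to silence
    return out

# A toy "track": 10 full-volume samples at a (silly) 2 Hz sample rate.
faded = fade([1.0] * 10, rate=2)
print(faded)  # starts and ends at 0.0, full volume in the middle
```

Delaying the song start and trimming the tail before applying these ramps buys you the extra seconds mentioned above.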


Customer relationship management (CRM) as a paradigm in distance education

I’ve been trying to gain more insight into the changes that are coming to Athabasca University with the new contact centre model that I discussed in an earlier post. After finding AU’s report, Evaluating the relative efficiencies and effectiveness of the contact centre and tutor models of learner support at Athabasca University, I think I have a better idea of what’s going on.  AU is adopting a customer relationship management system, similar to those that are used to run the call centres of large businesses.  These systems have been adapted for use in higher education over the past decade.

The report outlines AU’s system as follows:

Under the Contact Centre model, undergraduate student advisors, available six days a week, field initial queries via fax, telephone, or e-mail, and act as the first point of contact for accessing other advising services. Using flexible, shared, and secure contact databases, contact centre advisors handle issues for which they have established answers and refer course-related inquiries to the appropriate academic experts. … “Frequently-asked question” databases are also available to students and advisors to answer some academic queries. If applicable, students are referred to faculty and part-time academic experts for academic assistance.

The report refers to a keynote address by James Taylor of the University of Southern Queensland at the 2001 ICDE World Conference on Open Learning and Distance Education. In the address, Taylor describes USQ’s e-University project which seeks to automate the delivery of information to students. Taylor explains that tutors’ responses to students’ questions are added to a database. Subsequent students’ queries are first run through the database to see if the question has already been answered. If so, the student is provided with that information. At the time of Taylor’s keynote address, tutors were involved in vetting the automated responses, but Taylor anticipated that this would soon become unnecessary. It is only when the answer does not exist in the database that a tutor is required to interact with a student, and as he put it,

As the intelligent object databases become more comprehensive, enabling personalized, immediate responsiveness to an increasing number of student queries, the institutional variable costs for the provision of effective student support will tend towards zero.

By this he means that regardless of enrollment numbers, the costs will remain the same because students will access the database rather than needing attention from tutors. The irony is, the more dedication and care that tutors put into answering questions, the more they hasten their own obsolescence.
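The flow Taylor describes can be caricatured in a few lines of Python. Everything here (the sample question, the canned answer, and the 0.6 similarity threshold) is my own illustration, not drawn from the report:

```python
import difflib

# Tutor-written answers accumulate here; over time this database grows
# and fewer queries ever reach a human.
answered = {
    "how do i apply to write the final exam":
        "Submit the exam application form to the exam unit well in advance.",
}

def handle_query(question, threshold=0.6):
    """Route a student query: answer from the database if a previously
    answered question is similar enough, otherwise send it to a tutor."""
    q = question.lower().strip()
    match = difflib.get_close_matches(q, list(answered), n=1, cutoff=threshold)
    if match:
        return ("database", answered[match[0]])
    return ("tutor", None)   # only now does a human get involved

print(handle_query("How do I apply to write the final exam?")[0])  # database
print(handle_query("what is olivine")[0])                          # tutor
```

The second branch is the one that shrinks over time: each tutor answer, once filed, means one fewer future occasion for human contact, which is exactly the dynamic behind the "variable costs tend towards zero" claim.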

The AU report emphasizes this outcome:

Most importantly, individually-tailored services can be provided to an increasing number of learners with the same economic resources by using knowledge management software to reduce the need for direct, human interaction in the teaching and learning process.

The AU report takes issue with the idea that student-teacher interactions are necessary. They point out that, according to the equivalency theorem of Anderson (2003), only one of the following types of interaction is required: student-teacher, student-student, or student-content. As long as one of these is done well, the others can be eliminated entirely with no negative consequences for the student. I wonder at the wisdom of leaving a group of students to their own devices, sans teacher or content, as a method of education, but perhaps I’m missing some nuance of the scenario.

My question as I read through the report was, “Does this work?” The report was written to present the results of a survey of students who had taken part in the initial roll-out of the contact centre model at AU. The data presented are a list of attributes of the contact centre and tutor models, and students’ ratings of the importance of those attributes.  I’ve summarized the results below.  Keep in mind that although the report says the survey asked about the importance of each of these attributes, some of the attributes read as though students were asked to rate the outcome of their interaction with the model in question.  I’ve kept the original wording so you can decide for yourself what the survey items mean.

Study results

Even if it were clear whether the survey was evaluating the perceived importance of, or the perceived satisfaction with, various attributes, it still wouldn’t answer my questions.

One thing I would like to know is how students felt about having an advisor with no specific knowledge of the course material answering their academic (i.e., course-matter specific) questions by referring to a database. The survey reports (item 2) that 76% of the 300 students sampled rated “Importance of talking directly with an academic expert for academic support” as “Important” or “Very Important.” I wonder if that number would have been even higher if it had said “communicating directly” rather than “talking directly.” Very few of my students “talk” to me because the vast majority communicate by email.  (The prevalence of email could also have affected the outcome of item 3.) More importantly, this item doesn’t answer whether or not students were happy with the amount of direct communication they had with an academic expert under one model or the other.

The report does not address how beneficial students felt either model to be in terms of their learning outcomes, and it does not provide any metrics such as differences in grades or retention. The closest it comes is addressing satisfaction with response times for academic assistance using each model (item 8). Read literally, the results are students’ rating of how important satisfaction is in this regard (i.e., how important it is that response times be satisfactory), but it is possible that students were actually asked how satisfied they were with response times.  Regardless, response time is not the same thing as help with learning.

Because this report did not tell me what I wanted to know, I spent the better part of a day searching for studies of similar systems, and the outcomes for learners.  Much to my chagrin, the only relevant thing I found was a paper by Coates et al. (2005) stating that there were no generalizable studies addressing this issue. The paper was very interesting nonetheless, and foreshadowed the present developments:

While ‘academic-free’ teaching may seem only a very distant prospect, major online delivery ventures already have business plans based on the employment of a limited number of academic staff who create content with the support of larger numbers of less expensive student support staff.

It also echoed my main concern: “What are the consequences of students increasingly seeking learning assistance from technology support staff rather than from teachers?”

The AU report concludes that there was no material difference in students’ satisfaction with response times between the two models, and that “[m]eans of first contact seems [sic] to be more effective under the Contact Centre model.” (If the last statement is based on item 4, it would appear the opposite is true.) Because there was a savings of over $60 for each student with the Contact Centre model,

Taken together, these results suggest that satisfactory educational experiences can be delivered under either model. Given this equivalency of outcomes, it is recommended that relative costs should primarily determine how student support is provided at Athabasca University.

After reading this report, my thoughts were (almost simultaneously) that response times are a dubious measure of how satisfactory an educational experience is, and that this point is likely moot for decision-making purposes at this stage. But maybe distance education is like the garment industry: at one time, it was a foregone conclusion that your clothes would be made to fit you. Now, most people aren’t particularly troubled by having to pick a ready-made garment from a clothing rack. It’s still a shirt, right? Why should getting answers from a database/advisor instead of from a teacher be any different?

In case you are wondering, yes, I do find the idea of purging humans from teaching to be disturbing. Aside from losing the human interactions and creative challenges that make teaching a meaningful undertaking, there are serious flaws in a system where students’ interactions with a course cannot be observed by someone who will ultimately be responsible for redesigning that course. In my present roles as tutor and course author, I have a very good idea of what is working and what isn’t, because of conversations I’ve had with my students. Sure, there are issues that commonly arise, and one could infer from the number of questions on a particular topic that something isn’t working. But if it were that easy to fix, I would have intuited the problems with my approach to begin with and avoided the issue altogether. It is only by communicating back and forth with students and asking specific questions that I can figure out exactly what’s going on. There is a diagnostic element in my relationship with students, and it is a crucial element for assisting students on a one-to-one basis, and for improving the course in general. The contact centre model is about removing the human element as far as possible.  (Wow!  At one time that statement would have been entirely facetious!)  If I were to participate in this system, I would have exactly one try to figure out what was going on, before my canned response would be distributed to all future students with a similar question. This model will result in lost access to valuable data, and these are data that can’t be recovered by a standard end-of-course evaluation by students.

There are some broader issues that I didn’t see addressed in this report, or in any of the promotional materials I read about customer relationship management software use in higher education, or in reports by administrative branches of different schools about the benefits of implementing these systems. What about confidentiality, for example? How are queries dealt with that could identify a particular student? What happens if students share personal information in their question? Does all of this become available for other students to access?

What about intellectual property rights? If students attach a file containing their own work when they query the system (which they can do), could their work be kept as an example when the question and response are added to the database? Who would use their work, and how?

What happens if there is an error in a canned response? What if I say “increase” when I meant “decrease,” or “north” when I meant “south?” If the advisor who is deciding on which response the student should receive doesn’t have the background to know the difference, will the error ever be caught? Or will it rear its ugly head in perpetuity?

What happens if we learn something new that changes everything? Will this system trap the old understanding forever, like an insect in amber? That is perhaps the most chilling outcome of handing off the job of teaching to an entity unable to think critically about the information it dispenses.


See what the students think: Instructional Model Survey


Syllabus strategy: The Quick Start Guide

The manual for my coffee maker starts on page two with the words “IMPORTANT SAFEGUARDS” in very large print. This section cautions me against touching the hot parts, or swimming with the coffee maker, and also advises “Do not place this appliance… in a heated oven.”

The first part of page three lists what not to do to the carafe (“Avoid sharp blows…”). The second half concerns the dangers of extension cords, and ends by offering the helpful advice that if the plug won’t fit in the electrical outlet, you should turn it over so the big prong goes in the big hole.

Page 4 is the Table of Contents, page 5 covers features (“Comfort Fit Carafe Handle,” “Lighted ON/OFF Indicator”), and finally on page 8 it gets around to the coffee-making process.

There are 17 pages, and the only part I ever paid much attention to was the instructions for cleaning the coffee maker. I only looked those up because the coffee maker has a “Clean Cycle” button, suggesting that my usual non-automated procedure might not apply.

So, let’s review. I haven’t read the manual for my coffee maker cover-to-cover because:

  1. I deemed the first several pages not useful to me, and I concluded that the majority of the manual was likely to be that way.
  2. I know what I’m looking for, so I quickly scanned the manual to find those details, and filtered out everything else.
  3. It is 17 pages long.

I suspect that these points also sum up the reasons why my students won’t read the course syllabus. I haven’t electrocuted myself [with the coffee maker], so my assessment in point #1 was likely a reasonable one.  Not so for my students who don’t read the syllabus.

The ones I’m most concerned about are taking introductory physical and historical geology courses through the Centre for Continuing and Distance Education (CCDE) at the University of Saskatchewan. Their syllabi describe procedures that are unique to the distance education format, such as having to submit an application to write the final exam. More than one student has assumed that he or she could simply show up at the right time and place, and be permitted to write the exam, as with on-campus classes.

Syllabus for Geology 108/121

My syllabus design

For my face-to-face classes, I’ve designed a syllabus to address point #1 by putting the details that students are most likely to look for (e.g., textbook, grading scheme, contact information) as close to the beginning as possible. I’ve addressed point #2 by using sidebars with interesting images, facts, and quotations, to disrupt the scanning process. As for point #3, my syllabus is seven pages long, and that was a very tight squeeze.

For my CCDE courses, the CCDE puts together most of the syllabus following a modified version of the U of S syllabus template. They specify what information I am to supply, and indicate where I have the option to make additions or modifications. Whatever isn’t on the list stays as is. This arrangement allows me to add content, but it does not permit the kinds of modifications that I think are necessary to address points #1 and #2. The syllabi are 13 and 29 pages long, so there’s no help for point #3.

The Quick Start Guide

These syllabi are not working, and I’m not allowed to fix them.  I fumed about this for a while, and then came up with an idea. Back when computer hardware still came with paper manuals, manufacturers often included a quick start guide. These were posters or pamphlets that showed simply and clearly the most basic steps needed to get up and running. I decided that my syllabus needed a quick start guide.

The quick start guide I came up with has some key features:

  • Fonts and layout that invite browsing, including images, plenty of white space, and text blocks of limited size
  • A single place for key information (dates, contact information, assignment submission procedures) that is otherwise scattered throughout the syllabus
  • Details that are too important to leave to a chance encounter in the syllabus
  • Motivation to read the syllabus, including a “Top Ten Reasons to Read the Syllabus” list. The list combines humour with items in the syllabus that students usually ask about.

It is two pages long, so it prints on a single sheet of paper. It doesn’t look like any of the other course materials, and this is good, because curiosity motivates inquiry far better than obligation does. I’m trying it out for the first time this term, so we’ll see how it goes.


Categories: Challenges, Syllabus

The self-respect threshold

It turns out that Microsoft Excel can be more efficient than soul searching. A few days ago I received a general email sent to Athabasca University staff by the interim president, Peter MacKinnon. It was a progress report on the new Student Success Centre model for changing how students access services (including tutors like me) at AU. This is not a call centre. Everyone will tell you that, and it is quite a touchy subject. It may be the place you have to call (or email) to be directed to the service you need, including your tutor, and there might be tracking numbers involved (I’m still sketchy on the details), and your call (or email) might be logged, and then referred to the appropriate person or department… but it is not a call centre because telephones will not be the primary communication technology. So there.

I have a few problems with this. First of all, calling it the Student Success Centre sounds like a cynical branding exercise, even if it isn’t. Second, as far as I can tell (again, details are sketchy), my students will no longer be able to contact me directly. They will contact the [not a call] centre, where someone will decide if it is really me they need to talk to, or if someone else will do. If it is determined that the student does, indeed, need to contact me, I will be notified by the system to contact that student. The system will track these transactions and apparently generate some statistics, so that appropriate oversight can be exercised and I can be presented with numbers to motivate my performance.

So why is this change happening? Because it will cost less. Cutting costs is done with fanatical zeal these days, and like fanatical zeal, it often does not involve consideration of the big picture. Some years hence, people will look at our gutted institutions and say, “Oops. I guess we needed that after all.” It seems that those responsible for the financial upkeep of public institutions like universities have forgotten the reasons for creating those institutions in the first place.

I am getting to Excel and soul searching. As you may have guessed, this [not a call] centre will cost less mostly because tutors will be paid less. At present, AU tutors are paid in two ways. One is a flat rate based on the number of students a tutor is assigned. This is called block pay. The second way is based on the number of assignments and exams that a tutor has graded. There are also allowances for computer use, phone, and internet expenses.  The block pay is meant to cover the time I spend communicating with students, and the related work. If I were to have a month where I graded no assignments or exams, then my wages would consist of the block pay, plus allowances. Under the new system, block pay will be eliminated. I’ve read that tutors will be paid for each interaction with a student, but to my knowledge, AU has not officially commented on exactly how this will work… likely an indication that they don’t expect it to win over any tutors.
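The pay structure described above (block pay per assigned student, per-item grading fees, plus fixed allowances) amounts to a simple calculation. Here is a minimal sketch of it; every function name and dollar amount is invented for illustration, since the post does not give the actual AU rates.

```python
# Hypothetical sketch of the current tutor pay model described above.
# All rates are invented for illustration; actual AU rates differ.

def monthly_pay(students, assignments_graded, exams_graded,
                block_rate=20.0, assignment_fee=15.0, exam_fee=25.0,
                allowances=50.0):
    """Total monthly pay: block pay (per assigned student), per-item
    grading fees, and fixed allowances for computer/phone/internet."""
    block_pay = students * block_rate
    grading_pay = assignments_graded * assignment_fee + exams_graded * exam_fee
    return block_pay + grading_pay + allowances

# A month with no grading still yields block pay plus allowances:
slow_month = monthly_pay(students=30, assignments_graded=0, exams_graded=0)
```

Eliminating block pay, as the new system proposes, zeroes out the first term, which is exactly the term that covers student communication in a slow month.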

Update (13 June 2016): Here is a list of items that Academic Experts will be paid to do.

Ok, almost to Excel and soul searching. The email from Peter MacKinnon raised my ire because it reflected many of the attitudes toward tutors that I’ve heard expressed elsewhere. I’m hesitant to post an internal email, but if you read through the comments on this post about the [not a call] centre by Tony Bates, you’ll get the idea. I would draw your attention in particular to the comments of Professor Rory McGreal of Athabasca University (who is apparently not averse to the term “call centre”):

In the call centre, they will reach a professional immediately. This professional, unlike the tutor, will have training in the most common questions, queries, concerns that student have regarding administration, schedules, programme requirements, etc.

This quote is helpful because now you won’t mistake a tutor for a professional.

Let’s try another:

The call centre model is especially designed to provide students with the response they need as soon as possible. The previous tutor model allowed for a reasonable call back time of 48hrs. This is no longer acceptable. Students demand the response they need when they need it.

The 48-hour response time exists because tutors are not paid enough to have tutoring as their only employment. It allows for the flexibility needed to manage two jobs. I’m not sure if this means the 48 hours will be changed to “immediately, dammit!” or if it is meant to imply that being told your tutor will contact you counts as a response. Either way, this strikes me as extremely out of touch with the reality tutors face… and it makes students sound like brats.

I crunched some numbers to see what a worst-case scenario might look like, such as a very slow month for grading. You might think this scenario would translate to a month of free time, but it doesn’t. There are a number of activities I engage in to assist my students that aren’t represented in the pay scheme. Also, I have to keep an eye on email and make sure I respond within the required time frame (either 48 hours or immediately, dammit, I’m not sure which). Finally, I have to keep my schedule sufficiently open, and not stray too far from home, so that I can take care of any tasks that might arise. This last point in particular amounts to a very definite opportunity cost.

The result of these calculations was laughable. I could make more money dog-walking (I like dogs… that wouldn’t be too bad), or sewing sock monkeys, or selling pressed flowers on Etsy. So, while I might otherwise have done some soul searching about my place in an organization where the people calling the shots clearly view me with contempt, Excel made it pretty easy to see the point at which it just wouldn’t be worth it to stay. We’ll call that the self-respect threshold, and I’ll be keeping a very close eye on it.
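The worst-case arithmetic above boils down to an effective hourly rate compared against an alternative job. This is a hypothetical reconstruction, not the actual spreadsheet: every figure (pay, hours, the dog-walking rate) is invented for illustration.

```python
# Hypothetical sketch of the "self-respect threshold" comparison.
# All figures are invented; the real spreadsheet used actual rates and hours.

def effective_hourly_rate(pay, hours_worked, hours_on_call):
    """Pay divided by all committed time, counting unpaid availability
    (monitoring email, staying near home) as committed hours."""
    return pay / (hours_worked + hours_on_call)

# A slow grading month: little paid work, but availability still costs time.
slow_month_rate = effective_hourly_rate(pay=200.0, hours_worked=5.0,
                                        hours_on_call=40.0)

# The threshold: the point where an alternative job simply pays more.
dog_walking_rate = 15.0  # hypothetical hourly rate
below_threshold = slow_month_rate < dog_walking_rate
```

The point of counting on-call hours in the denominator is that availability has an opportunity cost even when no grading arrives, which is what makes the slow-month rate collapse.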


For an analysis of the rationale behind the call centre model: Customer relationship management (CRM) as a paradigm in distance education

Categories: Distance education and e-learning, The business of education | 2 Comments
