Assessment

Student-Curated Video Collection: An Activity

AEG/Telefunken television from 1937. This was newfangled back when I started screening videos for this course. Eckhard Etzold, CC BY-SA 2.0


I’ve been working on revisions to a distance-education physical geology course, and attempting to make it more interactive by offering videos. Have you ever tried to source relevant and accurate videos for multiple topics across multiple course modules? It involves going through hours and hours of videos, and rarely finding one that is directly on point or without problematic inaccuracies. My search technique has evolved to skipping anything longer than 5 minutes that doesn’t come with a transcript or clear description, and then screening the video at 1.5x speed.

So what to do about getting reliable videos without spending most of your adult life in the attempt… well, one school of thought would say let the students do it. I experimented with this kind of activity a few years back, but didn’t have an opportunity to deploy it full-scale. Here are the instructions I provided, with annotations. If you try it, let me know how it goes!

Curating Videos for Historical Geology

In this assignment you will assemble a collection of videos and complementary resources for historical geology students. You will work from the TED Ed* Lessons Worth Sharing video collection, Awesome Nature. This collection can be found at http://ed.ted.com/series/awesome-nature.

*I chose TED Ed because the videos are short. The student who did this moved on to TED Talks, which are longer. I’d advise limiting the length of videos if you don’t want to spend hours watching them in order to grade the results. If I were doing this today, I’d also recommend the fabulous video collection at MinuteEarth.

Your work will form the basis of a collection of resources to be made available to future students in Geology 109. If you wish, you will be acknowledged as the curator of the resources when they are posted, although I reserve the right to make any modifications that might be necessary to optimize the effectiveness of the collection.

Rationale

In the Independent Studies version of Geology 109, students do not have access to video lectures. Sometimes the textbook is unclear or written in too technical a fashion for students new to the topic to immediately understand what is being said. Videos designed by someone with a different perspective on the topic can be very helpful for reinforcing concepts, or clarifying points of confusion.

The problem is that not all videos are created equal. Some have factual errors, or even seek to mislead viewers. Some could benefit from clarifications. The task of looking for and vetting videos requires an understanding of the objectives a video should satisfy, and an assessment of how well the video accomplishes those goals. It also requires that viewers understand why they are watching the video and what they should get out of it. When an instructor looks for videos, he or she has an idea of what students find difficult, but it is really the students themselves who can most accurately identify where they need help, and what helps the most.

Your task

  1. Identify a video that satisfies one or more of the learning objectives for Geology 109. Provide the name of the video, and the link.
  2. Write an overview of the video. This should not simply restate the title of the video, but should summarize its contents in three or four sentences.
  3. List the learning objectives from the Geology 109 Course Guide that the video covers, and indicate which chapter they are from.
  4. Identify three key questions that the video answers. The questions should not be a restatement of the learning objectives, and should make it clear to other students why they would find the video useful. The questions will take the following form:
    1. Have you ever wondered …?
    2. Would you like to know how [something works or happens/ happened]?
    3. Have you ever been confused by …?
  5. Identify five terms that are technical in nature, and that are key to understanding the topic of the video. Define those terms in simple language, using your own words.
  6. Identify three “loose ends,” and explain the loose ends so that others watching the video will not be confused by them. The “loose ends” could be:
    1. Points that could be expanded upon
    2. Points that might leave some confusion in the minds of students watching the video
    3. Factual errors (hopefully there won’t be any of those)
    4. Points that are inconsistent with something in the course materials (e.g., competing hypotheses, more recent information, etc.)
  7. Write ten multiple choice questions so students can test their knowledge after watching the video. Supply the correct answers. The questions should cover key points. A good set of multiple choice questions will have the following characteristics:
    1. Four answer options (a through d)
    2. Little to no use of answer options like “all of the above” or “none of the above.”
    3. It should not be obvious to someone with no prior knowledge of the topic which is the correct answer. (Over-simplified questions are not helpful when trying to understand a topic.)
    4. Questions should be relevant to the topic of the video and to the learning objectives.
    5. After doing the questions, it should be clear to students what key points they have not understood.

Deliverables

You will write up each video following the layout supplied at the end of this document. This layout is designed to be compatible with the Blackboard system. The specific software you use to create the write-up is not important, nor is the font. (Blackboard has some formatting limitations, and formatting must be done within the Blackboard text editor, so this is something I will have to do afterward.)

Grading

Each write-up is worth up to 10 points. Those points will be calculated as follows:

  • Is the video relevant to Geology 109, and is the relevance clearly explained? (2.5 points)
  • Are all of the elements in points 1 through 7 above provided (e.g., the learning objectives, multiple choice questions, etc., are present)? (2.5 points)
  • Is the write-up scientifically accurate (e.g., definitions are correct, multiple choice answers are correct, etc.)? (5 points)

You may curate as many videos as you like*; however, the maximum possible score for the assignment portion of the class will be 100%.

*This assignment was designed for a specific student. You may wish to rethink the “as many as you like” policy, or turn it into a group project to reduce the workload.

Format for submission

Square brackets indicate text that you will insert. Text in italics is my notes, and doesn’t need to be included in your write-up.

[Video title]

[url]

 

Summary

[Three to four sentence summary of the video topic]

 

Why watch this video?

  • Have you ever wondered […]?
  • Would you like to know how [something works or happens/ happened]?
  • Have you ever been confused by […]?

 

This video addresses the following learning objectives for Geology 109:

  • [Learning objective], Chapter [chapter number]
  • [Learning objective], Chapter [chapter number]
  • [as many additional points as necessary]

 

Some key terms used in this video are:

[term 1]: [definition]

[term 2]: [definition]

[term 3]: [definition]

[term 4]: [definition]

[term 5]: [definition]

 

Special notes

  • [Loose end 1, explanation]
  • [Loose end 2, explanation]
  • [Loose end 3, explanation]

 

Note: these could take the form of, “In the video, [topic] is mentioned, but [concept] isn’t explained. Here is what it means,” or “The video says [this] about [topic], but in the textbook it says [that]. The difference is [reason].”

 

Self-test

[Questions 1 through 10]

 

[Solutions (e.g., 1a, 2b, 3d, …)]

 

Deadline

All write-ups must be submitted on or before Monday, March 30th 2015.

 


The Case for Being a Nitpicky Grader


I’ve always had a sense of the educator I didn’t want to be. To this day I remember the prof who became annoyed with endless questions and finally huffed, “My five-year-old could get this!” Student me, though stumped, decided to try to work it out on my own. If I couldn’t get it, it was a relatively small thing on which to take a hit when it came time for exams.

These days when I come across a topic that seems ridiculously simple, but students aren’t getting it, I try to get their input on what the topic looks like to someone new to the subject. I use that input to come up with a more effective strategy to tackle it. I’m not that prof.

I used to not be the nitpicky-grader prof either. You know the one: points off for the tiniest infraction, and you could never get it exactly right. I’ve had a change of heart on that one, though.

When a student made a small error on an assignment, I used to point out the error and explain the problem, but not take off points. It’s a minor error, right? They’ll do better next time. But regardless of how carefully I explained, the same errors would show up in the student’s work over and over again. Then I started taking off a half point for those kinds of errors. Guess what? Suddenly students decided those small details mattered.

I was somewhat taken aback that the only way to convince them to do it right was to make it costly to do it wrong. Suddenly the student who was a chronic non-labeller of graph axes was producing clear labels with proper units. The student who hadn’t bothered to spell technical terms correctly (I mean, c’mon, you have spellcheck for your homework, for dog’s sake!) suddenly learned the spelling. Importantly, those errors also disappeared from exams.

The distinction between formative and summative assessment is that formative assessment is meant to be low stakes or no stakes, helping students analyze their work so they can improve. Summative assessment is the higher-stakes measurement of whether they’ve met course objectives. Formative assessment involves helpful hints; summative assessment involves correct or incorrect.

But as it turns out, unless there is something at stake to distinguish “important” from “whatever,” formative assessment is “I’ll take it under advisement” assessment, and summative assessment is “it seems you neglected to do so when the stakes were much higher” assessment.

I wasn’t doing any favours by letting things slide in the hope students got the message, so now I’m that prof.


Clear As Fine-Grained Sediment Mixed With Water: A Discussion Forum

This week I’m presenting a poster at the Earth Educators’ Rendezvous. The poster is about a discussion forum activity that I do with my introductory physical geology students at St. Peter’s College. I’ve turned my poster into a blog post just in case anyone is thinking about trying a similar activity and would like to refer back to it. Alternatively, folks may simply want to confirm that some nut at an academic meeting designed a poster consisting largely of cartoons. Either way, here it is.

Intro

Why

How

You can download a copy of the handout for this activity, including the rubric, here.

Examples

Strategies

This is a great resource from the University of Wisconsin-Stout for explaining online etiquette to students.

Summary


Online Courses and The Problem That No-One Is Talking About

There are two kinds of online courses: those which are taught, and those which are facilitated. The distinction has nothing to do with the job of interacting with students- I’ve been both “teacher” and “facilitator,” and it’s exactly the same work from that perspective. The difference is one of autonomy, and it is a big difference.

The Gwenna Moss Centre is about to run another offering of their Introduction to Teaching Online course. Although I am a co-facilitator for this course, I would describe it as a course which is taught rather than facilitated. My co-co-facilitator and I discuss the course as it is running, and make adjustments on the fly when necessary. We take note of what worked and what didn’t, look at participants’ evaluations, and then modify the course as necessary for the next offering. Not only do we have the autonomy to make the necessary changes, it is expected that we will.

In Intro to Teaching Online, we assume that the participants will also be able to teach their online courses- that they will make pedagogical and logistical choices to respond to their students’ needs, and to make the course run as smoothly as possible. Also, that they will have the ability to revise as necessary and try new things. That’s how you teach an online course.

When you facilitate an online course, while you might take on the task of assisting students and grading their papers, what you can do beyond that is tightly restricted by a delivery model over which you have very little control. How little control will vary, but most likely it will be difficult or impossible to make substantive changes to what is taught, or how it is taught. Even if you designed the course in the first place, that “you” and facilitator you are completely different people as far as control over the course goes, and designer you lost any input as soon as the design contract was up.

If you are lucky enough to be able to request changes, the process is rather like having completed a painting, then being told you aren’t allowed to touch it anymore. If you want something to change, you must fill out a form describing in detail where the paint should go and how to move the brush. Someone more qualified than you will make the change. They might send a note back to you saying that they plan to improve your painting of a cow by adding spots. You must then explain at length that it is in fact a dog, and should not have spots. When the painting is finally modified, the dog is the wrong shade of brown. You decide it is best to not request modifications to your paintings in future.

Why does this matter? I don’t care how good you are- you never get a course exactly right the first time. If there aren’t any outright problems, then it soon becomes apparent where improvements can be made. Facilitator you gets to see the problems or areas for improvement, but must be content with grading papers and answering questions. If facilitator you is like facilitator me, this will drive you nuts. If facilitator you is subject to the same kinds of course evaluations as someone who can teach their course, and make it the best it can be, then this is not only unfair, but professionally dangerous.

While course quality is affected by this- especially if no-one sees a need to consult with facilitator you about how the course is going, or there are no mechanisms for facilitator you to communicate issues and be taken seriously- there is a bigger problem: the very integrity of the course.

At one time distance education was mostly intended to serve those who could not go to a brick-and-mortar institution for one reason or another. Maybe they had a family or a full-time job and couldn’t leave to go to school. Maybe they just couldn’t afford to move. Now things are different. While I don’t have any hard numbers, from what I can tell, at least 70% of my students are already taking classes at a brick-and-mortar school. They take an online class because they can fit it into their schedule better than one on campus, or it isn’t offered on campus at a time they need it, or they’re trying to get ahead/ complete their degrees over the summer.

What this means for the big picture is that students are far more likely to communicate with each other about the course than in the past. It might be two students who take the course together, or it could be someone who took it previously sharing information with someone currently enrolled. In the case that is causing me problems right now, a substantial number of students from one department at one school take the online course to fill a requirement. This is a facilitated course, so perhaps you can guess where this is going.

The students talk to each other. Some of it might be innocent enough, but some of it involves passing on assignments that I’ve graded to the next group of students who take the course. The course has not been updated substantively in some time, so the same assignments and exams still apply.

The problem has become ridiculous of late, with students submitting near-perfect assignments, all exactly alike plus or minus a few careless errors, and within record time. They get things right that no-one ever gets right. Clearly they are working together, but they are also referring to older assignments. I know this for certain for a few reasons: First, the correct answer will frequently appear after incomplete or even nonsensical work. They submit solutions with the answer that would have resulted if a typo, long since removed, was still in the question. They also plagiarize my comments from old assignments, sometimes reproducing them verbatim.

This course has a must-pass stipulation on the final exam. Normally that would be some comfort, because students who haven’t learned anything on the assignments would fail the exams. I’ve seen students with 95%, 99%, and 100% on assignments unable to break 20% on the final. (The exam isn’t that hard.) But over the past few months it has become apparent that the content of the exam has been shared. If not an actual copy, then a very good description of what it contains is in circulation. Exam grades have gone up, and students are regularly answering questions correctly which were rarely answered correctly in the past.

Ideally, if so many students who know each other are taking the course, the assignments should change frequently. In our hyper-connected world, it is almost certain that this kind of communication between students will happen. I even know of a homework-sharing website that has some of the solutions posted. The problem is that in order to change this, someone has to keep on top of the course full-time, and have the autonomy to make the necessary changes. The main consideration should not be the logistics of altering course materials. There’s no excuse for that when the relevant materials are or can be delivered online, and everyone and their dog knows how to upload a file to an LMS.

Nevertheless, the issue is that facilitators cannot be empowered in this way without disrupting the underlying structure of course delivery. Even more problematic is a culture amongst those who do run things- those who are not subject-matter experts but who handle the day-to-day operations- which views facilitators as incompetent, and unable to handle this responsibility. Not long ago I was handed an in-house guide to designing distance education courses. It warned readers at the outset that most faculty would be uncooperative and not understand how a distance education course should run. I felt ill, the way you would feel if you overheard your co-workers complaining about how useless you were. As I recycle that book I will contemplate with irony the damage this attitude has caused to distance education, and wonder if maybe I should take a chance and start the dog-washing business I’ve been thinking about.

There are many reasons to disempower facilitators, not the least of which is the cost savings from having them as casual workers instead of full-time ones. So here’s where I’m going to get in trouble for this post (if I haven’t already): if your concern is the bottom line, what happens when the ease with which students can cheat in your course makes other schools, employers, professional certification organizations, etc., decide that credit for your course is no longer meaningful? Even if cheating is less of a risk, what if word gets around that the course is hopelessly outdated or has problems? You don’t get enrollment, that’s what. And the people who communicate this aren’t going to be disgruntled facilitators. I’m the least of your worries. You need to worry about the students themselves who joke openly about cheating, and how little can be done about it, or who are discovered to lack skills or to have learning that is outdated.

There is a fundamental disconnect between what schools view as the appropriate way to structure a distance education program, and what actually works on the ground, when you’re expecting learning to happen. One involves online teaching and the other does not. There is a cultural gulf between those who have the power to do something about it, and those who can only look on in frustration. There are a lot of dogs to wash, but with most of them you have to spell out B-A-T-H rather than say the word, or they run off. A waterproof apron is useful, but not foolproof. You’ll need lots of towels.


The Levitating Wiener Standard of Formative Assessment

Formative assessment, or informative assessment, as I like to call it, is the kind of evaluation you use when it’s more important to provide someone with information on how to improve than it is to put a number next to a name. Formative assessment might or might not include a grade, but it will include thoughtful and actionable feedback. Formative assessment of teachers is no less important than formative assessment of learners- both are needed for the magic to happen.

I struggle with how to get truly useful formative feedback from my students. There are different instruments for evaluating teaching, including SEEQ (the Students’ Evaluation of Educational Quality), but the problem with the instruments I’ve used is that they don’t provide specific enough information. Sure, there is a place where students can write comments to supplement the boxes they’ve checked off elsewhere on the form, but those spaces are often left blank, and when they’re not blank, they don’t necessarily say anything actionable.

I’ve concluded that I need to design my own questionnaires. But when I get down to the business of writing questions, it feels like an impossible task to design a survey that will get at exactly what I want to know. I do have a pretty high standard, however: the levitating wiener.

The mentalist and magician Jose Ahonen performs a magic trick where he presents a levitating wiener to dogs. You can watch the videos How Dogs React to Levitating Wiener (parts 1 and 2) below. These are fascinating videos… have a look.

The dogs in the videos have one of three reactions:

  1. It’s a wiener! Gimme that wiener! These dogs react as one might expect, focusing on the existence of the wiener rather than on the fact that it is levitating.
  2. How the heck are you doing that? These dogs ignore the wiener and focus on the palms of Jose’s hands instead. It’s as though they’ve decided that it doesn’t make sense for a wiener to be levitating, and he must be doing it by holding strings. In other words, these dogs are trying to figure out how he’s doing the trick, and they all seem to have the same hypothesis. (Incidentally, it’s probably the first hypothesis most humans would come up with.)
  3. This is wrong… it’s just so wrong. These dogs watch for a moment and then get the heck out of there. Like the dogs in group 2 they also don’t think wieners should levitate, but they are too appalled by the violation of normality to formulate a hypothesis and investigate.

To my mind, most of the teaching assessment instruments are more like having the dogs fill out the questionnaire below than watching them interact with a levitating wiener.

Formative assessment for levitators of wieners (loosely based on the SEEQ questionnaire)

If the participants checked “agree” or “strongly agree” for “Wieners should not levitate,” it could mean something different for each dog. A dog from group 1 might object to having to snatch the wiener out of the air as opposed to having it handed to him. A dog from group 2 might think the question is asking about whether wieners are subject to gravity. A dog from group 3 might be expressing a grave concern about witchcraft. If the dogs wrote comments (we’re assuming literate doggies here), their comments might clarify the reasons behind their responses. Or they might just say there should be more wieners next time.

Now contrast the questionnaire with the experiment shown in the videos. Because of the experimental design, I learned things that I wouldn’t even have thought to ask about- I just assumed all dogs would react like group 1. I learned things the dogs themselves might never have written in their questionnaires. A dog from group 2 might not have noted his interest in the engineering problems surrounding hovering hot dogs in the “Additional comments” section. It might not have occurred to a dog from group 3 to mention that he was frightened by floating frankfurters. Maybe neither dog knew these things about himself until he encountered a levitating wiener for the first time.

A formative assessment tool that is up to the levitating wiener standard would tell me things I didn’t even consider asking about. It would tell me things that students might not even realize about their experience until they were asked.  Aside from hiring a magician, any suggestions?


Help for students, part 1: Breaking the curse of the unknown unknowns

Students often ask whether I can offer any tips on preparing for and writing exams. Sometimes they are new students who haven’t developed study strategies yet, and sometimes they have just become frustrated with strategies that don’t seem to be working for them. Sometimes they are panicked and desperate, and end their emails with “HELP” followed by several exclamation points. (Never a good sign.) So I thought it might be time to jot these things down in one place, rather than writing them over and over again in emails to unhappy students who waited to ask for help until it was too late.

If there is one thing that causes more problems for students preparing for exams than any other, it would be the unknown unknowns:

“…as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns — the ones we don’t know we don’t know.”  Donald Rumsfeld, US Secretary of Defense, 12 Feb 2002

When studying, known knowns are the topics you are confident about, and which you are right to be confident about. Known unknowns are the deficits in your knowledge that you are aware of, and which you therefore have a chance to fix. Where you get in trouble, however, are the unknown unknowns- the deficits in your knowledge that you don’t realize exist. You can’t fix those because you don’t know they’re there. At least, you don’t know they’re there until you hit an exam question you didn’t realize you were unprepared for. Then they become known unknowns, but it’s too late to do anything about them.

Here are two examples of what a run-in with unknown unknowns can sound like. Unfortunately, I receive emails like this on a regular basis:

Sally:

“I realize I am not going to pass this course even with the 20+ hours I studied over the last week. I have trouble putting the definitions on paper, I remember reading them and seeing them but can’t find the definition…”

 Bert:

“I felt as though I at least I completed the test and did not leave it blank, and felt confident that half my responses where right, but must have gotten confused…”

Note: “Bert” and “Sally” are not the real names of these students, and may or may not reflect their gender(s).

Sally’s unknown unknowns turned into known unknowns during the exam. In contrast, Bert emailed me because he was shocked that his exam grade was so low- Bert’s unknown unknowns were so sneaky that he got right through the exam without even noticing them.

Both Sally and Bert blamed the exam format for their problems. Their exam was short answer, and they felt that if they had clues in the form of multiple choice questions, then things would have gone better. As Bert put it,

“… there is no way someone first year can be capable to do this, let alone without instruction, or scientific key terms without getting terms mixed up, since there is no multiple questions [for] deductive logical reasoning…”

I think that part of Sally’s and Bert’s problem was that they underestimated how much understanding they would need to be successful on the exam. Ultimately, though, exam format should not be an issue.  If you know the answers, it shouldn’t matter whether the question format is short answer, multiple choice, essay, or interpretive dance. If you know it, you know it, and if you don’t, it makes just as much sense to blame your pencil.

The main problem that Bert and Sally had is that brains can be deceiving. In Sally’s case, after more than 20 hours of studying, everything looked familiar to her brain, and she believed it when it told her that she was ready for the exam. Unfortunately for Sally, the appearance of the page was what was familiar, not the information on it.

For both Sally and Bert it would have been a simple matter to set a trap for the unknown unknowns: if Sally and Bert had put their notes away every few minutes and tried to explain verbally or in writing what they had just read, they would have found very quickly that they couldn’t do it. Then they could have fixed the problem. Unfortunately, this is very hard work and should not be done for more than 45 minutes or so without taking a break. In Sally’s case, after many sustained hours of studying, her brain would have been too tired to manage it. She probably continued reading and not absorbing partly because she was too tired to do anything else.

Some of the sneakiest unknown unknowns hide so well that you might need someone else’s help to find them. Those are the kind where you remember information, but don’t realize that you have some part of it incorrect. The best way to trap these is to work with someone who might be able to pick up errors in your understanding as you explain the course material to them. This could be someone else in the class, or just a friend who asks you questions by referring to the textbook. Here are a few strategies that I’ve found helpful for turning unknown unknowns into known unknowns:

  • Scare them out into the open: Imagine that your instructor were to call you out of the blue to ask you questions about the course. What would you not want him or her to ask you about? Along the same lines, what would you not want to be asked about on the exam?
  • Treat learning objectives as questions and attempt to answer them without looking at your notes.
  • Reorganize information into diagrams and tables. For example, if you made a table to compare and contrast Neanderthals and Cro-Magnons, you might find that you can say something about Neanderthal body size, but you don’t remember how that compares to Cro-Magnon body size. Diagrams and tables have the added benefit of being much easier to remember than lists of facts.
  • Study by explaining topics out loud to yourself or a friend. There is a difference between reading facts and trying to mentally organize them so you can say them out loud, and that difference can be enough to throw you off balance and expose unknown unknowns.

Why I don’t give extra credit assignments

I view extra credit assignments as problematic because they can be unfair to other students in the course, they don’t necessarily solve the problem of missed learning outcomes, and they’re a hassle for me.

Let’s say I’m teaching a carpentry class called Potting Sheds 101. Students sign up to learn how to build potting sheds. Their final exam is building a potting shed. They may or may not go into the potting-shed building industry after graduation. On the last day of class the final projects are evaluated. Bob’s potting shed is out of square, and collapses when the door is opened. Bob fails. Later I receive the following email from Bob:

“Hey, how are u? I’m Bob in Potting Shed 101. I failed my final project. It’s been a really hard month for me. I was sick for the last two weeks, plus I didn’t have money to buy the textbook or a hammer. I found the final project did not suit my learning style, and was shocked at how difficult it was. Talk about being expected to run before even learning to walk! I will definitely be commenting about this in the course evaluation. Plus I was delayed getting started because I had to borrow a hammer from the library, and it was recalled and still hasn’t been returned yet. Potting Shed 101 is the last class I need for my degree, and I don’t plan to build potting sheds for a living, but I really need to pass the class to graduate. Is there some extra credit work I could do to pass the course with a high enough grade to get my degree? I feel I already learned a lot, and I would need at least 65 to graduate.”

 So what should I do with Bob? Here are some considerations:

  • Bob knew he would need a hammer to build potting sheds. Other students made sure they had the supplies necessary before signing up. It is unfortunate that Bob doesn’t have a hammer, but does this justify extra credit work?
  • Bob says he was sick, but I can’t verify that independently. Previously, Bob didn’t say anything about being sick, but if he had I would probably have granted him an extension to complete his potting shed.
  • Bob should have expected that building a potting shed would be part of Potting Sheds 101, so I don’t accept his argument that the final evaluation was unreasonable.
  • Bob is suggesting that the class doesn’t mean anything to him, but is just a course that his program required for some reason, and that he won’t use the skills (although he still claims to have learned something).
  • Bob expects that whatever he will do for extra credit will get him at least 65% in the course, and can be done in time so that he will graduate as expected.
  • If I give Bob the opportunity for extra credit, are the other students any less deserving? Should they not be allowed extra credit projects too?

What if I cave in to Bob’s request? Bob suggests that he make ten bird houses for extra credit. Bird houses are not potting sheds, so he would be getting credit for doing a task that is much easier than the original task. Bob assumes that demonstrating a willingness to work hard is equivalent to demonstrating competency in potting-shed building. While a good work ethic is admirable, it is not the same as being able to build a potting shed. If Bob changes his mind about working in the potting shed industry, he will use the grade I gave him to convince an employer that he can build potting sheds. If Bob shows his grade in potting-shed building to prospective employers who don’t deal in potting sheds, they may take it as a sign that he is somewhat handy, has reasonable hand-eye coordination, and can handle complex tasks that require precision and attention to detail.

Let’s go one step further and assume I let Bob hand in his 10 bird houses. They are consistent with his skill at potting-shed building. Am I required to give him extra credit even though his work is substandard? If I don’t, must I allow him to do extra extra credit work?

What if the day after Bob hands in his 10 bird houses, Carrie sends me an email:

 “I heard you let Bob build bird houses for extra credit. Can I build bird houses for extra credit, too? I’d really like to improve my grade because I want to get into the Advanced Potting Sheds program.”  

This is a very competitive program, and if I let Carrie do the extra credit work, her grade would not reflect her skill at potting-shed building, but it would give her an advantage compared to other students who apply to the program.  Is that fair?

Then I hear from Marty:

“I heard you let Bob hand in bird houses for extra credit. I made some when I was in grade four. Can I hand those in for extra credit?”

If Marty has demonstrated the skill, does that not count? If he had brought a completed potting shed to class on the first day, should he have received credit for the course? Some would say yes.

Beatrice:

“I heard you were taking bird houses for extra credit. My neighbours have some. Can I get credit for those?”

I would have to explain to Beatrice that she must make the bird houses herself. She would then request step-by-step instructions on how to build a bird house, and ask if she could come to my office hours to get help.

On a box delivered to my front door, containing 20 bird houses with the “Made in China” stickers still attached:

“Here are my bird houses for extra credit. Thx. Pete”

In an email from the department head:

“WHY are you letting students build bird houses for credit in Potting Sheds 101? They’re supposed to be building POTTING SHEDS!”

You see, it’s just way too complicated.


Plagiarism-proof assignments: The Up-Goer Five Challenge

Up Goer Five

Ok, so there’s probably no such thing as a plagiarism-proof assignment, but I think I’ve got a reasonable approximation thereof.

It originated with my frustration with the perpetual struggle to have students in my distance education classes answer questions in their own words. My students are using their textbooks to answer questions, and many seem to feel that a textbook is the exception to the rule when it comes to plagiarism. Some simply don’t understand that they’re doing anything wrong. From experience, I can tell you that many people who are not my students also see it that way, and complaining about it is a great way to be branded as unreasonable. The problem, as I’ve documented before, is that students who copy from their textbook also tend to fail the class. After last term, I’ve decided that it’s in my best interest to consume alcohol before grading assignments. I’m not allowed to ignore plagiarism, but what I don’t see…

Absent blissful ignorance, the only way to deal with plagiarism (without causing myself a variety of problems) is to change the assignments so that plagiarism isn’t possible. Now, if you’ve attempted to do this, you know it isn’t easy. A search online will give you tips like having students put themselves in the position of a person experiencing a historical event, and explaining their perspective on the matter. That’s something students (most likely) can’t copy from the internet. But suggestions like that are not especially helpful when the topic is how volcanoes work. (Although now that I think about it, “Imagine you are an olivine crystal in a magma chamber…”)

The solution came from my online source of comfort, xkcd. Randall Munroe, the creator of the webcomic, set himself the challenge of labeling a diagram of NASA’s Saturn V rocket (Up Goer Five) with only the 1000 most commonly used words in the English language. Soon after, members of the geoscience community took up the challenge of explaining their fields of research in the 1000 most commonly used words. Here are two examples from a blog post by hydrogeologist Anne Jefferson. Anne writes:

” So I decided to see if I could explain urban hydrology and why I study it using only the words in the list. Here’s what I came up with:

I study how water moves in cities and other places. Water is under the ground and on top of it, and when we build things we change where it can go and how fast it gets there. This can lead to problems like wet and broken roads and houses. Our roads, houses, and animals, can also add bad things to the water. My job is to figure out what we have done to the water and how to help make it better. I also help people learn how to care about water and land. This might seem like a sad job, because often the water is very bad and we are not going to make things perfect, but I like knowing that I’m helping make things better.

Science, teach, observe, measure, buildings, and any synonym for waste/feces were among the words I had to write my way around. If I hadn’t had access to “water”, I might have given up in despair.

But my challenge was nothing compared to that faced by Chris, as he explained paleomagnetism without the word magnet:

I study what rocks tell us about how the ground moves and changes over many, many (more than a hundred times a hundred times a hundred) years. I can do this because little bits hidden inside a rock can remember where they were when they formed, and can give us their memories if we ask them in the right way. From these memories we can tell how far and how fast the rocks have moved, and if they have been turned around, in the time since they were made. It is important to know the stories of the past that rocks tell, because it is only by understanding that story that we really understand the place where we live, how to find the things that we need to live there, and how it might change in the years to come. We also need to know these things so we can find the places where the ground can move or shake very fast, which can be very bad for us and our homes.”

Is that brilliant, or what?! To make it even better, Theo Sanderson developed a text editor to check whether only those words have been used. This is what happened when I typed part of the introduction to the chapter on volcanoes:

Up-Goer Five text editor

Yes, fortunately it has the word “rock.”
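
If you’d like a rough do-it-yourself version of the check, the core of it is only a few lines of Python. Here’s a minimal sketch under my own assumptions: a file called top1000.txt holding the permitted words (one per line), and no handling of plurals or other word forms, which Sanderson’s editor is smarter about.

```python
import re

def load_allowed(path="top1000.txt"):
    """Read the permitted words, one per line, into a lowercase set."""
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def disallowed_words(text, allowed):
    """Return the unique words in `text` that aren't on the allowed list."""
    words = re.findall(r"[a-z']+", text.lower())
    return sorted({w.strip("'") for w in words if w.strip("'")} - allowed)

if __name__ == "__main__":
    allowed = load_allowed()
    sample = ("Melted rock from deep inside our home can move up and out "
              "through breaks in the ground, sometimes very fast.")
    # Anything printed here is a word the student has to write around.
    print(disallowed_words(sample, allowed))
```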

I decided to test-drive this with my class. I gave them the option of answering their assignment questions in this way. It’s difficult, so they got bonus points for doing it. A handful attempted it, and that was probably the most fun I’ve ever had grading assignments. If you’d like to give this kind of assignment a shot, there are a few things to keep in mind:

  • Students (and colleagues) may be skeptical. Explain that the exercise requires a solid knowledge of the subject matter (in contrast to paraphrasing the textbook) and is a very effective way for students to diagnose whether they know what they think they know. In my books, that gives it a high score in the learning per unit time category.
  • The text editor has some work-arounds, like putting single quotes around a word, or adding “Mr” or “Mrs” in front of a word (e.g., Mr Magma). Head those off at the pass, or you’ll get “But you didn’t say we couldn’t!”
  • You may wish to allow certain words for the assignment or for specific questions, depending on your goals. For example, if I were less diabolical, I might consider allowing the use of “lava.” The other reason for not allowing “lava” is that I want to be sure they know what it means. In contrast, I probably wouldn’t make them struggle with “North America.”
  • Make it clear that simple language does not mean simple answers. I found that students tended to give imprecise answers that didn’t address important details. I don’t think they were trying to cut corners- they just didn’t think it was necessary. If I were to do this again I would give them a rubric with examples of what is and isn’t adequate.
  • Recommend that they write out the key points of their answers in normal language first, in a separate document, and then attempt to translate them.
  • Suggest that they use analogies or comparisons if they are stuck. For example, Randall Munroe refers to hydrogen as “the kind of air that once burned a big sky bag.”
  • Make the assignment shorter than you might otherwise, and focus on key objectives. Doing an assignment this way is a lot of work, and time consuming.
  • And finally, (as with all assignments) try it yourself first.

In that spirit:

I like to make stories with numbers to learn what happens when things go into the air that make air hot. Very old rocks from deep under water say things that help make number stories. The number stories are not perfect but they still tell us important ideas about how our home works. Some day the number stories about how old air got hot might come true again, but maybe if people know the old number stories, they will stop hurting the air. If they don’t stop hurting the air, it will be sad for us because our home will change in bad ways.


Time: The final frontier

Timefleet Academy logo: a winged hourglass made of ammonites

A logo begging for a t-shirt

Here it is: the final incarnation of my design project for Design and Development of Educational Technology, the Timefleet Academy. It’s a tool to assist undergraduate students of historical geology with remembering events in Earth history, and how those events fit into the Geological Time Scale. Much of their work consists of memorizing a long list of complicated happenings. While memorizing is not exactly at the top of Bloom’s Taxonomy (it’s exactly at the bottom, in fact), it is necessary. One could approach this task by reading the textbook over and over, and hoping something will stick, but I think there’s a better way.

I envision a tool with three key features:

  • A timeline that incorporates the Geological Time Scale, and “zooms” to show events that occur over widely varying timescales
  • The ability to add events from a pre-existing library onto a custom timeline
  • Assessments to help students focus their efforts effectively

Here’s an introduction to the problem, and a sketch of my solution. If your sensors start to detect something familiar about this enterprise then you’re as much of a nerd as I am.

Timefleet Academy is based on the constructionist idea that building is good for learning. Making a representation of something (in this case, Earth history) is a way of distilling its essential features. That means analyzing what those features are, how they are related, and expressing them explicitly. Ultimately this translates to the intuitive notion that it is best to approach a complex topic by breaking it into small digestible pieces.

Geological Time Scale

This is what you get to memorize.

As challenging as the Geological Time Scale is to memorize, it does lend itself to “chunking” because the Time Scale comes already subdivided. Even better, those subdivisions are designed to reflect meaningful stages (and therefore meaningful groupings of events) in Earth history.

There is an official convention regarding the colours in the Geological Time Scale (so no, it wasn’t my choice to put red, fuchsia, and salmon next to each other), and I’ve used it on the interface for two reasons. One is that it’s employed on diagrams and geological maps, so students might as well become familiar with it. The other is that students can take advantage of colour association as a memory tool.

Assessments

Assessments are a key difference between Timefleet Academy and other “zoomable” timelines that already exist. The assessments would come in two forms.

1. Self assessment checklists

These allow users to document their progress through the list of resources attached to individual events. This might seem like a trivial housekeeping matter, but mentally constructing a map of what resources have been used costs cognitive capital. Answering the question “Have I been here already?” has a non-zero cognitive load, and one that doesn’t move the user toward the goal of learning historical geology.

2. Drag-and-drop drills

The second kind of assessment involves drill-type exercises where users drag and drop objects representing events, geological time periods, and dates, to place them in the right order. The algorithm governing how drills are set would take into account the following:

  • The user’s previous errors: It would allow for more practice in those areas.
  • Changes in the user’s skill level: It would adjust by making tasks more or less challenging. For example, the difficulty level could be increased by going from arranging events in chronological order to arranging them chronologically and situating them in the correct spots on the Geological Time Scale. Difficulty could also be increased by placing time limits on the exercise, requiring that the user apply acquired knowledge rather than looking up the information.
  • The context of events: If drills tend to focus on the same group of events, the result could be overly contextualized knowledge. In other words, if the student were repeatedly drilled on the order of events A, B, and C separately from the order of events D, E, and F, and were then asked to put A, B, and E in the right order, there could be a problem.
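
Here’s a minimal sketch, in Python, of what that selection step might look like. Every name in it (pick_drill_items, error_counts, the half-review split) is my own invention rather than an existing implementation- the point is just to show how a drill could be biased toward past trouble spots while still mixing in random events for context.

```python
import random

def pick_drill_items(events, error_counts, n=6, review_share=0.5):
    """Pick events for one drag-and-drop drill.

    About half the items come from events the user has previously gotten
    wrong (more errors = more likely to reappear); the rest are drawn at
    random from the whole library, so ordering knowledge doesn't become
    tied to one over-familiar group of events.
    """
    troubled = [e for e in events if error_counts.get(e, 0) > 0]
    chosen = set()
    if troubled:
        weights = [error_counts[e] for e in troubled]
        # Weighted sampling with replacement, so duplicates collapse in the set.
        for e in random.choices(troubled, weights=weights, k=int(n * review_share)):
            chosen.add(e)
    # Top up with random events from the rest of the library.
    pool = [e for e in events if e not in chosen]
    random.shuffle(pool)
    chosen.update(pool[: n - len(chosen)])
    return sorted(chosen)

events = ["Great Oxidation Event", "Cambrian explosion", "Permian extinction",
          "Chicxulub impact", "Snowball Earth", "first tetrapods",
          "breakup of Pangaea", "Grenville orogeny"]
errors = {"Snowball Earth": 3, "Grenville orogeny": 2}
print(pick_drill_items(events, errors))
```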

The feedback from drills would consist of correct answers and errors being indicated at the end of each exercise, and a marker placed on the timeline to indicate where (when) errors have occurred. Students would earn points toward a promotion within Timefleet Academy for completing drills, and for correct answers.

Who wouldn’t want a cool new uniform?

How do you know if it works?

1. Did learning outcomes improve?

This could be tested by comparing the performance of a group of students who used the tool to that of a control group who didn’t. Performance measures could be results from a multiple choice exam. They could also be scores derived from an interview with each student, where he or she is asked questions to gauge not only how well events are recalled, but also whether he or she can explain the larger context of an event, including causal relationships. It would be interesting to compare exam and interview scores for students within each group to see how closely the results of a recall test track the results of a test focused on understanding.
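
For what it’s worth, both comparisons boil down to standard statistics. Here’s a minimal Python sketch- every score in it is invented, purely for illustration:

```python
from scipy import stats

# Invented exam scores out of 100, for illustration only
tool_group    = [78, 85, 92, 70, 88, 81, 95, 74]
control_group = [65, 72, 80, 58, 77, 69, 83, 71]

# Did the group using the tool outperform the control group?
t, p = stats.ttest_ind(tool_group, control_group)
print(f"exam comparison: t = {t:.2f}, p = {p:.3f}")

# Within the tool group: does recall (exam) track understanding (interview)?
interviews = [72, 80, 95, 65, 85, 78, 90, 70]
r, p = stats.pearsonr(tool_group, interviews)
print(f"exam vs. interview: r = {r:.2f}, p = {p:.3f}")
```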

For the group of students who have access to the tool, it would be important to have a measure of how they used it, and how often. For example, did they use it once and lose interest? Did they use it for organizing events but not do drills? Or did they work at it regularly, adding events and testing themselves throughout? Without this information, it would be difficult to know how to interpret differences (or a lack of differences) in performance between the two groups.

 2. Do they want to use it?

This is an important indicator of whether students perceive that the tool is helpful, but also of their experience interacting with it. Students could be surveyed about which parts of the tool were useful and which weren’t, and asked for feedback about what changes would make it better. (The option to print out parts of the timeline, maybe?) They could be asked specific questions about aspects of the interface, such as whether their drill results were displayed effectively, whether the controls were easy to use, etc. It might be useful to ask them if they would use the tool again, either in its current form, or if it were redesigned to take into account their feedback.

Timefleet in the bigger picture

Writing a test

All set to pass the test of time

Timefleet Academy is ostensibly a tool to aid in memorizing the details of Earth history, but it actually does something more than that. It introduces students to a systematic way of learning- by identifying key features within an ocean of details, organizing those features, and then testing their knowledge.

The point system rewards students for testing their knowledge regardless of whether they get all of the answers right. The message is twofold: testing one’s knowledge is valuable because it provides information about what to do next; and testing one’s knowledge counts as progress toward a goal even if you don’t get the right answers every time. Maybe it’s threefold: if you do enough tests, eventually you get a cape, and a shirt with stars on it.


Building assessments into a timeline tool for historical geology

In my last post I wrote about the challenges faced by undergraduate students in introductory historical geology. They are required to know an overwhelming breadth and depth of information about the history of the Earth, from 4.5 billion years ago to present. They must learn not only what events occurred, but also the name of the interval of the Geological Time Scale in which they occurred. This is a very difficult task! The Geological Time Scale itself is a challenge to memorize, and the events that fit on it often involve processes, locations, and organisms that students have never heard of. If you want to see a case of cognitive overload, just talk to a historical geology student.

My proposed solution was a scalable timeline. A regular old timeline is helpful for organizing events in chronological order, and it could be modified to include the divisions of the Geological Time Scale. However, a regular old timeline is simply not up to the task of displaying the relevant timescales of geological events, which vary over at least six orders of magnitude. It is also not up to the job of displaying the sheer number of events that students must know about. A scalable timeline would solve those problems by allowing students to zoom in and out to view different timescales, and by changing which events are shown depending on the scale. It would work just like Google Maps, where the type and amount of geographic information that is displayed depends on the map scale.

Doesn’t that exist already?

My first round of Google searches didn’t turn anything up, but more recently round two hit paydirt… sort of. Timeglider is a tool for making “zoomable” timelines, and allows the user to embed media. It also has the catch phrase “It’s like Google Maps but for time,” which made me wonder if my last post was re-inventing the wheel.

ChronoZoom was designed with Big History in mind, which is consistent with the range of timescales that I would need. I experimented with this tool a little, and discovered that users can build timelines by adding exhibits, which appear as nodes on the timeline. Users can zoom in on an exhibit and access images, videos, etc.

If I had to choose, I’d use ChronoZoom because it’s free, and because students could create their own timelines and incorporate timelines or exhibits that I’ve made. Both Timeglider and ChronoZoom would help students organize information, and ChronoZoom already has a Geological Time Scale, but there are still features missing. One of those features is adaptive formative assessments that are responsive to students’ choices about what is important to learn.

Learning goals

There is a larger narrative in geological history, involving intricate feedbacks and cause-and-effect relationships, but very little of that richness is apparent until students have done a lot of memorization. My timeline tool would assist students in the following learning goals:

  • Memorize the Geological Time Scale and the dates of key event boundaries.
  • Memorize key events in Earth history.
  • Place individual geological events in the larger context of Earth history.

These learning goals fit right at the bottom of Bloom’s Taxonomy, but that doesn’t mean they aren’t important to accomplish. Students can’t move on to understanding why things happened without first having a good feeling for the events that took place. It’s like taking a photo with the lens cap on- you just don’t get the picture.

And why assessments?

This tool is intended to help students organize and visualize the information they must remember, but they still have to practice remembering it in order for it to stick. Formative assessments would give students that practice, and students could use the feedback from those assessments to gauge their knowledge and direct their study to the greatest advantage.

How it would work

The assessments would address events on a timeline that the students construct for themselves (My Timeline) by selecting from many hundreds of events on a Master Timeline. The figure below is a mock-up of what My Timeline would look like when the scale is limited to a relatively narrow 140 million year window. When students select events, related resources (videos, images, etc.) would also become accessible through My Timeline.

Timeline interface

A mock-up of My Timeline. A and B are pop-up windows designed to show students which resources they have used. C is access to practice exercises, and D is how the tool would show students where they need more work.

Students would benefit from two kinds of assessments:

Completion checklists and charts

The problem with having abundant resources is keeping track of which ones you’ve already looked at. Checklists and charts would show students which resources they have used. A mouse-over of a particular event would pop up a small window (A in the image above) with the date (or range of dates) of the event and a pie chart with sections representing the number of resources that are available for that event. A mouse-over on the pie chart would pop up a hyperlinked list of those resources (B). Students would choose whether to check off a particular resource once they are satisfied that they have what they need from it, or perhaps flag it if they find it especially helpful. If a resource is relevant for more than one event, and shows up on multiple checklists, then checks and flags would appear for all instances.
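
The bookkeeping behind that last point is simple if each resource carries its own status and events merely hold references to it. Here’s a minimal Python sketch (the class and resource names are hypothetical) of how a check made under one event automatically shows up under every other event sharing the resource:

```python
class ResourceTracker:
    """Track checked/flagged status per resource, shared across events."""

    def __init__(self):
        self.by_event = {}    # event name -> set of resource ids
        self.checked = set()  # resources the student has checked off
        self.flagged = set()  # resources the student found especially helpful

    def link(self, event, resource):
        self.by_event.setdefault(event, set()).add(resource)

    def check(self, resource):
        self.checked.add(resource)  # applies everywhere the resource is linked

    def flag(self, resource):
        self.flagged.add(resource)

    def checklist(self, event):
        """What the pop-up checklist (B in the mock-up) would show for one event."""
        return {r: ("checked" if r in self.checked else "unread",
                    "flagged" if r in self.flagged else "")
                for r in sorted(self.by_event.get(event, set()))}

t = ResourceTracker()
t.link("K-Pg extinction", "impact-video")
t.link("Chicxulub impact", "impact-video")  # one resource, two events
t.check("impact-video")
print(t.checklist("Chicxulub impact"))  # the check shows up here too
```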

Drag-and-drop exercises

Some of my students construct elaborate sets of flashcards so they can arrange events or geological time intervals spatially. Why not save them the trouble of making flashcards?

Students could opt to practice remembering by visiting the Timefleet Academy (C). They would do exercises such as:

  • Dragging coloured blocks labeled with Geological Time Scale divisions to put them in the right order
  • Dragging events to either put them in the correct chronological order (lower difficulty) or to position them in the correct location on the timeline (higher difficulty)
  • Dragging dates from a bank of options onto the Geological Time Scale or onto specific events (very difficult)

Upon completion of each drag-and-drop exercise, students would see which parts of their responses were correct. Problem areas (for example, a geological time period in the wrong order) would be marked on My Timeline with a white outline (D) so students could review those events in the appropriate context. White outlines could be cleared directly by the student, or else by successfully completing Timefleet Academy exercises with those components.

Drag-and-drop exercises would include some randomly selected content, as well as items that the student has had difficulty with in the past. The difficulty of the exercises could be scaled to respond to increasing skill, either by varying the type of drag-and-drop task, or by placing time limits on the exercise. Because a student could become very familiar with one stretch of geologic time without knowing others very well, the tool would have to detect a change in skill level and respond accordingly.
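
Detecting that change in skill level could be as simple as keeping a rolling accuracy score separately for each stretch of geologic time. Here’s a sketch- the thresholds, window size, and task names are placeholders, not tested values:

```python
from collections import defaultdict, deque

class SkillTracker:
    """Rolling accuracy per era, so difficulty scales regionally."""

    def __init__(self, window=10):
        # Keep only the last `window` results for each era.
        self.results = defaultdict(lambda: deque(maxlen=window))

    def record(self, era, correct):
        self.results[era].append(1 if correct else 0)

    def next_task(self, era):
        attempts = self.results[era]
        if len(attempts) < 5:           # not enough data: start easy
            return "chronological order"
        accuracy = sum(attempts) / len(attempts)
        if accuracy > 0.9:
            return "timed placement on the time scale"  # hardest
        if accuracy > 0.7:
            return "placement on the time scale"
        return "chronological order"

tracker = SkillTracker()
for ok in (True, True, False, True, True, True):
    tracker.record("Mesozoic", ok)
print(tracker.next_task("Mesozoic"))  # placement on the time scale
```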

A bit of motivation

Students would earn points for doing Timefleet Academy exercises. To reward persistence, they would earn points for completing the exercises, in addition to points for correct responses. Points would accumulate toward a progression through Timefleet Academy ranks, beginning with Time Cadet, and culminating in Time Overlord (and who wouldn’t want to be a Time Overlord?). Progressive ranks could be illustrated with an avatar that changes appearance, or a badging system. As much as I’d like to show you some avatars and badges, I am flat out of creativity, so I will leave it to your imagination for now.
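
The rank progression itself is just a threshold lookup. Only Time Cadet and Time Overlord are “real”; the intermediate ranks and the point values below are invented:

```python
import bisect

RANK_POINTS = [0, 200, 500, 1000, 2000]
RANK_NAMES = ["Time Cadet", "Time Ensign", "Time Lieutenant",
              "Time Commander", "Time Overlord"]

def rank_for(points):
    """Return the highest rank whose point threshold has been reached."""
    return RANK_NAMES[bisect.bisect_right(RANK_POINTS, points) - 1]

print(rank_for(0))     # Time Cadet
print(rank_for(650))   # Time Lieutenant
print(rank_for(9999))  # Time Overlord
```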

