Posts Tagged With: formative assessment

Clear As Fine-Grained Sediment Mixed With Water: A Discussion Forum

This week I’m presenting a poster at the Earth Educators’ Rendezvous. The poster is about a discussion forum activity that I do with my introductory physical geology students at St. Peter’s College. I’ve turned my poster into a blog post just in case anyone is thinking about trying a similar activity and would like to refer back to it. Alternatively, folks may simply want to confirm that some nut at an academic meeting designed a poster consisting largely of cartoons. Either way, here it is.

Intro

Why

How

You can download a copy of the handout for this activity, including the rubric, here.

Examples

Strategies

This is a great resource from the University of Wisconsin-Stout for explaining online etiquette to students.

Summary


The Levitating Wiener Standard of Formative Assessment

Formative assessment, or informative assessment, as I like to call it, is the kind of evaluation you use when it’s more important to provide someone with information on how to improve than it is to put a number next to a name. Formative assessment might or might not include a grade, but it will include thoughtful and actionable feedback. Formative assessment of teachers is no less important than formative assessment of learners: both are needed for the magic to happen.

I struggle with how to get truly useful formative feedback from my students. There are different instruments for evaluating teaching, including SEEQ (the Students’ Evaluation of Educational Quality), but the problem with the instruments I’ve used is that they don’t provide specific enough information. Sure, there is a place where students can write comments to supplement the boxes they’ve checked off elsewhere on the form, but those spaces are often left blank, and when they’re not blank, they don’t necessarily say anything actionable.

I’ve concluded that I need to design my own questionnaires. But when I get down to the business of writing questions, it feels like an impossible task to design a survey that will get at exactly what I want to know. I do have a pretty high standard, however: the levitating wiener.

The mentalist and magician Jose Ahonen performs a magic trick where he presents a levitating wiener to dogs. You can watch the videos How Dogs React to Levitating Wiener (parts 1 and 2) below. These are fascinating videos… have a look.

The dogs in the videos have one of three reactions:

  1. It’s a wiener! Gimme that wiener! These dogs react as one might expect, focusing on the existence of the wiener rather than on the fact that it is levitating.
  2. How the heck are you doing that? These dogs ignore the wiener and focus on the palms of Jose’s hands instead. It’s as though they’ve decided that it doesn’t make sense for a wiener to be levitating, and he must be doing it by holding strings. In other words, these dogs are trying to figure out how he’s doing the trick, and they all seem to have the same hypothesis. (Incidentally, it’s probably the first hypothesis most humans would come up with.)
  3. This is wrong… it’s just so wrong. These dogs watch for a moment and then get the heck out of there. Like the dogs in group 2, they also don’t think wieners should levitate, but they are too appalled by the violation of normality to formulate a hypothesis and investigate.

To my mind, most teaching assessment instruments are more like having the dogs fill out the questionnaire below than like watching them interact with a levitating wiener.

Formative assessment for levitating wieners (loosely based on the SEEQ questionnaire)

Formative assessment for levitators of wieners

If the participants checked “agree” or “strongly agree” for “Wieners should not levitate,” it could mean something different for each dog. A dog from group 1 might object to having to snatch the wiener out of the air as opposed to having it handed to him. A dog from group 2 might think the question is asking about whether wieners are subject to gravity. A dog from group 3 might be expressing a grave concern about witchcraft. If the dogs wrote comments (we’re assuming literate doggies here), their comments might clarify the reasons behind their responses. Or they might just say there should be more wieners next time.

Now contrast the questionnaire with the experiment shown in the videos. Because of the experimental design, I learned things that I wouldn’t even have thought to ask about- I just assumed all dogs would react like group 1. I learned things the dogs themselves might never have written in their questionnaires. A dog from group 2 might not have noted his interest in the engineering problems surrounding hovering hot dogs in the “Additional comments” section. It might not have occurred to a dog from group 3 to mention that he was frightened by floating frankfurters. Maybe neither dog knew these things about himself until he encountered a levitating wiener for the first time.

A formative assessment tool that is up to the levitating wiener standard would tell me things I didn’t even consider asking about. It would tell me things that students might not even realize about their experience until they were asked.  Aside from hiring a magician, any suggestions?


Building assessments into a timeline tool for historical geology

In my last post I wrote about the challenges faced by undergraduate students in introductory historical geology. They are required to know an overwhelming breadth and depth of information about the history of the Earth, from 4.5 billion years ago to present. They must learn not only what events occurred, but also the name of the interval of the Geological Time Scale in which they occurred. This is a very difficult task! The Geological Time Scale itself is a challenge to memorize, and the events that fit on it often involve processes, locations, and organisms that students have never heard of. If you want to see a case of cognitive overload, just talk to a historical geology student.

My proposed solution was a scalable timeline. A regular old timeline is helpful for organizing events in chronological order, and it could be modified to include the divisions of the Geological Time Scale. However, a regular old timeline is simply not up to the task of displaying the relevant timescales of geological events, which vary over at least six orders of magnitude. It is also not up to the job of displaying the sheer number of events that students must know about. A scalable timeline would solve those problems by allowing students to zoom in and out to view different timescales, and by changing which events are shown depending on the scale. It would work just like Google Maps, where the type and amount of geographic information that is displayed depends on the map scale.
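To make the Google Maps analogy a little more concrete, here is a rough sketch of how a scalable timeline could decide what to draw at a given scale. Everything in it (the events, the zoom_rank field, and the zoom thresholds) is made up for illustration; it isn’t a description of how any existing tool works.

```python
# A rough sketch of zoom-dependent filtering for a scalable timeline.
# Event names, the zoom_rank field, and the thresholds are all illustrative.
from dataclasses import dataclass

@dataclass
class Event:
    name: str
    start_ma: float  # start of the event, in millions of years ago
    end_ma: float    # end of the event, in millions of years ago
    zoom_rank: int   # 0 = show even when fully zoomed out; larger = needs more zoom

def visible_events(events, window_start_ma, window_end_ma):
    """Return the events to draw for the current viewing window.
    The narrower the window, the finer-grained the events that appear."""
    span = abs(window_start_ma - window_end_ma)
    if span > 1000:    # looking at more than a billion years
        max_rank = 0
    elif span > 100:   # between 100 Ma and 1000 Ma
        max_rank = 1
    else:              # zoomed in to less than 100 Ma
        max_rank = 2
    return [e for e in events
            if e.zoom_rank <= max_rank
            and e.end_ma <= window_start_ma    # event's age range overlaps the window
            and e.start_ma >= window_end_ma]

events = [
    Event("Great Oxidation Event", 2400, 2100, 0),
    Event("Cretaceous-Paleogene extinction", 66, 66, 1),
    Event("Deccan Traps eruptions", 66.3, 65.5, 2),
]
print([e.name for e in visible_events(events, 100, 0)])  # viewing only the last 100 Ma
```

A real tool would also have to handle rendering, but the core behaviour is just a filter like this, re-run every time the user zooms or pans.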

Doesn’t that exist already?

My first round of Google searches didn’t turn anything up, but more recently round two hit paydirt… sort of. Timeglider is a tool for making “zoomable” timelines, and allows the user to embed media. It also has the catch phrase “It’s like Google Maps but for time,” which made me wonder if my last post was re-inventing the wheel.

ChronoZoom was designed with Big History in mind, which is consistent with the range of timescales that I would need. I experimented with this tool a little, and discovered that users can build timelines by adding exhibits, which appear as nodes on the timeline. Users can zoom in on an exhibit and access images, videos, etc.

If I had to choose, I’d use ChronoZoom because it’s free, and because students could create their own timelines and incorporate timelines or exhibits that I’ve made. Both Timeglider and ChronoZoom would help students organize information, and ChronoZoom already has a Geological Time Scale, but there are still features missing. One of those features is adaptive formative assessments that are responsive to students’ choices about what is important to learn.

Learning goals

There is a larger narrative in geological history, involving intricate feedbacks and cause-and-effect relationships, but very little of that richness is apparent until students have done a lot of memorization. My timeline tool would assist students in the following learning goals:

  • Memorize the Geological Time Scale and the dates of key event boundaries.
  • Memorize key events in Earth history.
  • Place individual geological events in the larger context of Earth history.

These learning goals fit right at the bottom of Bloom’s Taxonomy, but that doesn’t mean they aren’t important to accomplish. Students can’t move on to understanding why things happened without first having a good feeling for the events that took place. It’s like taking a photo with the lens cap on: you just don’t get the picture.

And why assessments?

This tool is intended to help students organize and visualize the information they must remember, but they still have to practice remembering it in order for it to stick. Formative assessments would give students that practice, and students could use the feedback from those assessments to gauge their knowledge and direct their study to the greatest advantage.

How it would work

The assessments would address events on a timeline that the students construct for themselves (My Timeline) by selecting from many hundreds of events on a Master Timeline. The figure below is a mock-up of what My Timeline would look like when the scale is limited to a relatively narrow 140 million year window. When students select events, related resources (videos, images, etc.) would also become accessible through My Timeline.

Timeline interface

A mock-up of My Timeline. A and B are pop-up windows designed to show students which resources they have used. C is access to practice exercises, and D is how the tool would show students where they need more work.

Students would benefit from two kinds of assessments:

Completion checklists and charts

The problem with having abundant resources is keeping track of which ones you’ve already looked at. Checklists and charts would show students which resources they have used. A mouse-over of a particular event would pop up a small window (A in the image above) with the date (or range of dates) of the event and a pie chart with sections representing the number of resources that are available for that event. A mouse-over on the pie chart would pop up a hyperlinked list of those resources (B). Students would choose whether to check off a particular resource once they are satisfied that they have what they need from it, or perhaps flag it if they find it especially helpful. If a resource is relevant for more than one event, and shows up on multiple checklists, then checks and flags would appear for all instances.
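Since a single resource can show up on the checklists of several events, a check or flag has to follow the resource rather than the event. A minimal sketch of that bookkeeping, with made-up identifiers and no claim to being a finished design, might look like this:

```python
# A sketch of completion tracking where checks and flags belong to the resource,
# so they automatically appear under every event that links to that resource.
# All identifiers here are invented for illustration.

event_resources = {
    "Great Oxidation Event": ["video_banded_iron", "reading_cyanobacteria"],
    "Banded iron formations": ["video_banded_iron", "image_bif_outcrop"],
}

resource_status = {}  # resource id -> {"checked": bool, "flagged": bool}

def mark(resource_id, checked=None, flagged=None):
    status = resource_status.setdefault(resource_id, {"checked": False, "flagged": False})
    if checked is not None:
        status["checked"] = checked
    if flagged is not None:
        status["flagged"] = flagged

def checklist_for(event):
    """What the pop-up window (B in the mock-up) would list for one event."""
    return {r: resource_status.get(r, {"checked": False, "flagged": False})
            for r in event_resources[event]}

mark("video_banded_iron", checked=True, flagged=True)
# The same check and flag now show up under both events that use the video.
print(checklist_for("Great Oxidation Event"))
print(checklist_for("Banded iron formations"))
```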

Drag-and-drop exercises

Some of my students construct elaborate sets of flashcards so they can arrange events or geological time intervals spatially. Why not save them the trouble of making flashcards?

Students could opt to practice remembering by visiting the Timefleet Academy (C). They would do exercises such as:

  • Dragging coloured blocks labeled with Geological Time Scale divisions to put them in the right order
  • Dragging events to either put them in the correct chronological order (lower difficulty) or to position them in the correct location on the timeline (higher difficulty)
  • Dragging dates from a bank of options onto the Geological Time Scale or onto specific events (very difficult)

Upon completion of each drag-and-drop exercise, students would see which parts of their responses were correct. Problem areas (for example, a geological time period in the wrong order) would be marked on My Timeline with a white outline (D) so students could review those events in the appropriate context. White outlines could be cleared directly by the student, or else by successfully completing Timefleet Academy exercises with those components.
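As a rough illustration of the marking step, the sketch below compares a dragged ordering against the correct chronological order and returns the items that would get a white outline. The marking rule here is just one plausible choice, not a finished design.

```python
# A sketch of marking a lower-difficulty ordering exercise: any item that is
# not in its correct chronological position gets flagged for review.
# The period names are real; the marking rule is one plausible choice.

def mark_ordering(student_order, correct_order):
    """Return (is_all_correct, items_to_outline_on_My_Timeline)."""
    wrong = [item for item, expected in zip(student_order, correct_order)
             if item != expected]
    return (len(wrong) == 0, wrong)

correct = ["Cambrian", "Ordovician", "Silurian", "Devonian", "Carboniferous", "Permian"]
student = ["Cambrian", "Silurian", "Ordovician", "Devonian", "Carboniferous", "Permian"]

all_correct, to_review = mark_ordering(student, correct)
print(all_correct)  # False
print(to_review)    # ['Silurian', 'Ordovician'] -> these would get the white outline (D)
```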

Drag-and-drop exercises would include some randomly selected content, as well as items that the student has had difficulty with in the past. The difficulty of the exercises could be scaled to respond to increasing skill, either by varying the type of drag-and-drop task, or by placing time limits on the exercise. Because a student could become very familiar with one stretch of geologic time without knowing others very well, the tool would have to detect a change in skill level and respond accordingly.
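One naive way to mix review of trouble spots with fresh material is to weight the selection toward items the student has missed before. Something along these lines could work, though the 70/30 split and the miss-count bookkeeping are arbitrary choices of mine, not part of any existing tool:

```python
# A sketch of picking exercise items: mostly items the student has missed before,
# topped up with randomly chosen material so practice still covers new ground.
import random

def pick_items(all_items, miss_counts, n):
    missed = sorted((i for i in all_items if miss_counts.get(i, 0) > 0),
                    key=lambda i: miss_counts[i], reverse=True)
    n_review = min(len(missed), int(round(n * 0.7)))  # aim for ~70% review items
    chosen = missed[:n_review]
    remaining = [i for i in all_items if i not in chosen]
    chosen += random.sample(remaining, min(n - n_review, len(remaining)))
    random.shuffle(chosen)
    return chosen

items = ["Cambrian", "Ordovician", "Silurian", "Devonian", "Carboniferous", "Permian"]
misses = {"Ordovician": 3, "Silurian": 1}
print(pick_items(items, misses, 4))  # two review items plus two random ones
```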

A bit of motivation

Students would earn points for doing Timefleet Academy exercises. To reward persistence, they would earn points for completing the exercises, in addition to points for correct responses. Points would accumulate toward a progression through Timefleet Academy ranks, beginning with Time Cadet, and culminating in Time Overlord (and who wouldn’t want to be a Time Overlord?). Progressive ranks could be illustrated with an avatar that changes appearance, or a badging system. As much as I’d like to show you some avatars and badges, I am flat out of creativity, so I will leave it to your imagination for now.
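For what it’s worth, the rank ladder itself could be as simple as a table of point thresholds. Only Time Cadet and Time Overlord come from the description above; the intermediate ranks and all of the numbers below are placeholders.

```python
# A sketch of mapping accumulated points to Timefleet Academy ranks.
# "Time Cadet" and "Time Overlord" are from the post; the middle ranks
# and every threshold are invented placeholders.

RANKS = [
    (0, "Time Cadet"),
    (250, "Time Navigator"),   # hypothetical
    (750, "Time Captain"),     # hypothetical
    (2000, "Time Overlord"),
]

def rank_for(points):
    current = RANKS[0][1]
    for threshold, name in RANKS:
        if points >= threshold:
            current = name
    return current

# Points for completing an exercise plus points for correct responses.
def exercise_points(n_correct, completed, per_correct=10, completion_bonus=5):
    return n_correct * per_correct + (completion_bonus if completed else 0)

print(rank_for(exercise_points(20, True) + 600))  # -> "Time Captain"
```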


The Poll Everywhere experiment: After day 15 of 15

The marathon geology class is over now, and I have a few observations about the Poll Everywhere experience. These things would have helped me had I known them in advance, so here they are in case someone else might benefit.  Some of the points below are also applicable to classroom response systems in general.

Getting started

Signing up the students

As I mentioned in a previous post, this went fairly smoothly.  One reason is that the process is an easy one, but another reason is that there were enough devices for students to share in order to access the website for creating an account and registering. While students can use an “old-fashioned” cell phone without a browser to text their answers, they can’t use that device to get set up in the first place. I used my iPad to get two of the students started, and students shared their laptop computers with each other. My class was small (33 students), so it was relatively easy to get everyone sorted.  If the class is a large one this could be a challenge. I would probably have the students sign up in advance of class, and then be willing to write off the first class for purposes of troubleshooting with those who couldn’t get the process to work for themselves.

Voter registration

One thing I would do differently is to have students register as voters regardless of whether they plan to use their browsers to respond to questions. I told the students who would be texting that all they needed to do was have their phone numbers certified. This is true, and they appeared on my list of participants. The problem has to do with students who respond using a web browser: if they forget to sign in, they show up on my list anonymously as unregistered participants. More than one student did this, so it wasn’t possible to know which student entered which answers.

If everyone were registered as a voter, then I could have selected the option to allow only registered participants to answer the questions. Those not signed in would not be able to answer using their browsers, and they would be reminded that signing in was necessary. The reason I didn’t use this option is that it also blocks students who text their answers unless they have registered as voters, and mine had only certified their phone numbers. I could have had them go back and change their settings, but I opted instead to put a message on the first question slide of each class in large, brightly coloured letters reminding students to sign in. I also reminded them verbally at the start of class.

Grading responses

With the Presenter plan, students’ responses were automatically marked as correct or incorrect (assuming I remembered to indicate the correct answer). Under “Reports” I was able to select questions and have students’ responses to those questions listed, along with a “yes” or “no” indicating whether they got the right answer. The reports can be downloaded as a spreadsheet, and they include columns showing how many questions were asked, how many the student answered, and how many the student got correct. There is a lot of information in the spreadsheet, so it isn’t as easy as I would have liked to get a quick sense of who was having difficulty with what kind of question. Deleting some columns helped to clarify things.

In the end I didn’t use the statistics that Poll Everywhere provided. I was having difficulty sorting out the questions that were for testing purposes from the ones that were for discussion purposes. Maybe a “D” or “T” at the beginning of each question would have made it easier to keep track of which was which when selecting questions for generating the report. I could have used the statistics if I had generated separate reports for the discussion questions and the testing questions. Instead I made myself a worksheet and did the calculations manually. This approach would not scale up well, but it did make it a lot easier for me to see how individual students were doing.
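If I do this again, the manual worksheet could probably be scripted. Assuming the downloaded report were saved as a CSV, and assuming question titles carried a “T ” or “D ” prefix, something like the sketch below would split testing from discussion questions and tally each student’s results. The column names are guesses on my part, not Poll Everywhere’s actual export format.

```python
# A sketch of tallying testing vs. discussion questions from a downloaded report.
# Assumes a CSV with columns named "student", "question", and "correct",
# which is a guess at the export layout, and assumes each testing question
# title starts with "T " and each discussion question with "D ".
import csv
from collections import defaultdict

def tally(report_path):
    scores = defaultdict(lambda: {"T_correct": 0, "T_answered": 0, "D_answered": 0})
    with open(report_path, newline="") as f:
        for row in csv.DictReader(f):
            s = scores[row["student"]]
            if row["question"].startswith("T "):
                s["T_answered"] += 1
                if row["correct"].strip().lower() == "yes":
                    s["T_correct"] += 1
            elif row["question"].startswith("D "):
                s["D_answered"] += 1
    return dict(scores)

for student, s in tally("poll_report.csv").items():  # hypothetical file name
    print(student, s["T_correct"], "/", s["T_answered"], "testing correct;",
          s["D_answered"], "discussion answered")
```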

Integrity of testing

Timed responses

At the outset I decided that it would be extremely inconvenient to have students put their notes away every time they had to respond to a testing question. My solution was to limit the time they had to respond to testing questions. I figured that if they didn’t know the answer, that would at least restrict how much they flipped through their notes.  It also helps to ask questions where the answer isn’t something they can look up.   It turned out that 25 seconds was a good time limit, although they got longer than that because I took time to explain the question and the possible responses. (I wanted to make sure that if they got the answer wrong it reflected a gap in their knowledge rather than a misunderstanding of what the question was asking or what the responses meant.)

There is a timer that can be set.  One way to set it is when using the Poll Everywhere Presenter App… if you can manage to click on the timer before the toolbar pops up and gets in your way. (I never could.) It can also be set when viewing the question on the Poll Everywhere website. The timer starts when the question starts, which means you have to initiate the question at the right time, and can’t have it turned on in advance. With the work-around I was using, there were too many potential complications, so I avoided the timer and either used the stopwatch on my phone or counted hippopotamuses.

Setting the correct answer to display

If you set a question to be graded, students can see whether or not they got the correct answer, but you have options as to when they see it. I noticed that by default there is a one-day delay between when the question is asked and when the answer is shown (under “Settings” and “Response History”). I wanted the students to be able to review their answers on the same day if they were so inclined, so I set an option to allow the correct answer to be shown immediately. The problem, I later discovered, is that if one student responds and then checks his or her answer, he or she can pass on the correct answer to other students.

Ask a friend

Another issue with the integrity of testing done using Poll Everywhere (or any classroom response system) is the extent to which students consult with each other prior to responding. I could have been particular on this point, and forbidden conversation, but the task of policing the students wasn’t something I was keen on doing. Judging by the responses, conversing with one’s neighbour didn’t exclude the possibility of both students getting the answer wrong. In a large class it would be impossible to control communications between students, which is one of the reasons why any testing done using this method should probably represent only a small part of the total grade.

Who sees what when

There are two ways to turn a poll on, and they each do different things. To receive responses, the poll has to be started. To allow students to respond using their browsers, the poll has to be “pushed” to the dedicated website. It is possible to do one of these things without doing the other, and both have to be done for things to work properly. The tricky part is keeping track of what is being shown and what is not. If a question is for testing purposes then you probably don’t want it to be displayed before you ask it in class.

When you create a poll, it is automatically started (i.e., responses will be accepted), but not pushed. Somewhere in the flurry of setting switches I think I must have pushed some polls I didn’t intend to. I also noticed one morning as I was setting up polls that someone (listed as unregistered) had responded to a question I had created shortly before.   As far as I knew I hadn’t pushed the poll, so…?  The only explanation I can think of is that someone was responding to a different poll and texted the wrong number.  Anyway, as an extra precaution and also to catch any problems at the outset, I made the first question of the day a discussion question. Only one question shows at a time, so as long as the discussion question was up, none of the testing questions would be displayed.

Oops

One other thing to keep in mind: before asking a question, check that you haven’t already written the answer on the board. If the class suddenly goes very quiet and the responses come in as a flood, that’s probably what has happened.

Accommodating technology and life

Stuff happens. If a student misses class, he or she will also miss the questions and the points that could have been scored for answering them. If the absence is for an excusable reason (or even if it isn’t) a student might ask to make up the missed questions. As this would take the form of a one-on-one polling session, and the construction of a whole suite of new questions, I knew it was something I didn’t want to deal with.

One could simply not count the missed questions against the student’s grade, but that wasn’t a precedent I wanted to set either. Instead I stated in the syllabus that there would not be a make-up option, but that each student would have a 10-point “head start” for the Poll Everywhere questions. Whatever the student’s score at the end of the course, I added 10 points, up to a maximum of a 100% score. I had no idea how many questions I would be asking, so 10 points was just a guess, but it ended up covering the questions for one day’s absence, which is not unreasonable.

Another thing the 10 points was intended to do was offset any technological problems, like a student’s smart phone battery dying at an inopportune moment, or someone texting the wrong number by accident, or accidentally clicking the wrong box on the browser. The 10 points also covered miscalculations on my part, such as making a testing question too difficult.
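In case anyone wants to adapt the idea, the adjustment is just a capped addition, assuming the Poll Everywhere score is already expressed out of 100:

```python
def adjusted_poll_score(raw_percent, head_start=10):
    """Add the head-start points but never exceed 100%."""
    return min(raw_percent + head_start, 100)

print(adjusted_poll_score(93))  # -> 100
print(adjusted_poll_score(72))  # -> 82
```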

I still ended up forgiving missed questions in two cases: one because of a scheduling conflict with another class, and the other on compassionate grounds.

The verdict

I will be teaching in September, and I plan to use Poll Everywhere again. Even if it happens that my classroom is outfitted with a receiver for clickers, I’ll still stay with Poll Everywhere.  For one, my questions are already set up, ready and waiting online. Another reason is the flexibility of being able to show a question without actually showing the poll (i.e., the window with questions and responses that the Poll Everywhere software creates). This started out as a “duct tape” fix for a technical problem, but in the end I think I prefer it because I have more control over what can fit on the slide. As far as I know, Turning Point questions (the usual clicker system) can’t be started unless the slide that will show the results is the current slide.

One more reason is that the system will be free for students to use, outside of whatever data charges they might incur. I will either cover the cost myself, or, if there is no Turning Point option, attempt to convince the school to do it. A plan exists where the students can pay to use the system, but I’d like to avoid that if possible. On the off chance that something goes horribly wrong and I can’t get it working again, I’d prefer to not have students wondering why they had to pay for a service that they can’t use.

Over all, I really like the idea of having a diagnostic tool for probing brains (also referred to as formative assessment, I think). I suppose my teaching process is similar to the one I use for debugging computer code: I perturb the system, observe the output, and use that to diagnose what the underlying problem might be. Poll Everywhere is not the only tool that can do this, but it is probably the one I will stick with.


The Poll Everywhere experiment: After day 3 of 15

Tech gods

This month I am teaching an introductory physical geology course that could be called “All you ever wanted to know about geology in 15 days.” It is condensed into the first quarter of the Spring term, and so compressed into 15 classes in May.

I decided to use a classroom response system this time. I like the idea of being able to peer into the black box that is my students’ learning process, and fix problems as they arise. I also like that I can challenge them with complex questions. Students get points for answering the really hard ones regardless of whether they get the right answer or not (and sometimes there is more than one reasonable answer).

Classroom response systems often involve the use of clickers, but my classroom doesn’t have a receiver, and I didn’t want to spend $250 to buy a portable one. Instead I decided to try Poll Everywhere. It is an online polling tool that can be used to present questions to students, collect their responses, display the frequency of each response, and, for a fee, tell me who answered what correctly.  An advantage of Poll Everywhere is that students can use the devices they already have to answer questions, either from a web browser or by sending text messages.

The obvious snag, that someone didn’t have the requisite technology, didn’t occur, and setting up the students was far easier than I thought it would be.  I’ve noticed that many are now texting their answers rather than using their browsers, even though most planned to use their browsers initially. None have asked for my help with getting set up for text messaging, and that would be an endorsement for any classroom technology in my books.

My experience with the service has not been as smooth. It is easy to create poll questions, but the window that pops up to show the poll isn’t as easy to read as I would like it to be. The main problem, however, is that I can’t actually show students the polls. Aside from one instance involving random button pushing that I haven’t been able to reproduce, the polls show up on my computer, but are simply not projected onto the screen at the front of the classroom. I’ve looked around online for a solution, but the only problem that is addressed is polls not showing up on PowerPoint slides at all, which is not my issue.  On the advice of Poll Everywhere I have updated a requisite app, but to no avail.

The work-around I’ve come up with is to make my own slides with poll questions and the possible responses. Normally, advancing to the slide on which the poll appears would trigger the poll. Instead I trigger and close the poll from my Poll Everywhere account using an iPad.  I haven’t yet tried exiting PowerPoint and showing the poll using the app, then going back to PowerPoint, because after I connect to the projector, I can’t seem to control the display other than to advance slides.

As a classroom tool, I have found the poll results to be useful already, and I was able to make some clarifications that I wouldn’t otherwise have known were necessary. I would like to look at the results in more detail to check on how each student is doing, but with all the time I’ve been spending on troubleshooting and building additional slides, I haven’t got to it yet.

It is possible that my technical problems are not caused by Poll Everywhere. All aspects of the polling system that are directly under their control have worked great. I’m curious whether I can get the polls to show up if I use a different projector, or whether other objects like videos would show on the projector I’m using now, but I have limited time to invest in experiments. This is where I’m supposed to say that I’ve learned my lesson and will henceforth test-drive new technology from every conceivable angle before actually attempting to use it in a way that matters. Only, I thought I had tested it: I ran polls in PowerPoint multiple times on my own computer, doing my best to find out what would make them not work and how to fix it. I also answered poll questions from a separate account using my computer, an iPad, and by texting to find out what the students’ experience would be and what challenges it might involve… but I never thought to check whether the projector would selectively omit the poll from the slide.  Who would have thought?

