Posts Tagged With: problem solving skills

The Mission (Geo)Impossible Scavenger Hunt

It was a Saturday morning like any other and my husband and I were enjoying a cup of coffee while he channel surfed to find a program related to disassembling and reassembling automobiles. He paused on a channel showing the movie Smokey and the Bandit, a classic film from 1977 about an epic beer run between Atlanta and Texarkana. “I wonder if I drove that road,” he said.

So we looked at Google Earth and found that there were two possible highways that Smokey and the Bandit could have used to move their beer. And then I saw it: the intervening space had a variety of superposed plunging folds. The seed for Mission (Geo)Impossible was planted the moment I began to wonder how I might lead students on a path to make that discovery for themselves. I don’t recall whether it was I or my husband who came up with the actual notion of torturing (er, challenging) students with a scavenger hunt for information, but it certainly appealed to my nefarious side.

What is it, exactly?

Download the handout here.

Mission (Geo)Impossible is a series of 19 quests that teams of students complete for extra credit. Why 19? I like prime numbers. 17 seemed too few, and 23 was too many. The first time around, the optimal number of quests was one of many unconstrained variables. Why extra credit? Because when I make up the quests I honestly have no idea whether students will be able to do them. They are meant to be challenging problems, and are of a type that I’ve never seen as part of an assessment or activity. Students go into this knowing it will be difficult (I make sure they know), and do so by their own choice so I can feel a little less guilty about how hard they work.

Why on Earth would students want to do this?

The enticement for them to try Mission (Geo)Impossible is a substantial bonus on their final grade. If their team completes all 19 quests, 2.5% is added to their grade. That means a 60% becomes a 62.5%. If their team finishes first, they get another 2.5% for a total of 5%.

That might seem like a lot, and I wrestled with whether this was appropriate or not, but in the end I decided it was legitimate for three reasons. First, it is a term-long project and they work very hard on it. Second, to complete it they must learn a lot of geology and do synthesis tasks at a level that I would never ask of students in an introductory physical geology class under other circumstances. Finally, I’ve applied similar curves to final grades before, and did so with serious misgivings. To my mind, this extra credit work is a heck of a lot more legitimate than bumping grades so the class average falls in the magical 60% to 65% range.

I also try to entice them by imbuing the whole undertaking with a spirit of playful competition. Students are competing with me- I tell them I designed the quests to mess with them (true), and challenge them to beat me. They are also competing with their classmates. There is a bit of secret agent role-playing, too. It is Mission (Geo)Impossible, after all. They “activate” their teams by emailing a team name and roster to Mission (Geo)Impossible Command Central, and there is a Quest Master who confirms their activation.

How does it work?

The mechanics of the scavenger hunt are designed to keep the level of work manageable for me, to keep my interactions with teams as fair as possible, and also to leave students to their own devices. Those devices turn out to be very good, and likely better than students realize themselves, which is a big reason why I like this activity.

To begin with, I post a pdf containing 19 quests on the course website. The procedure they follow is to email their quest solutions to Mission (Geo)Impossible Command Central, and the Quest Master responds with one of three words: “correct,” “incorrect,” or “proceed.” “Proceed” means some part of their answer is correct, or they are going in the right direction, but I don’t provide any information about what they’re doing right. That keeps me from having to worry about whether I’ve given one team more of a clue than another.

They can submit as many solutions as they like, and they have taken advantage of this in interesting ways. One team submitted “anagram” as their first attempt on a quest. They were trying to figure out what sort of puzzle they were solving. If they had gotten a “proceed” they’d know it was an anagram. The puzzle turned out to be a substitution cipher rather than an anagram, but it was a clever approach nonetheless.

So what do these puzzles look like?

The quests specify a target (a general thing to aim at), and deliverables (what students must submit). Then they give the clue.

Here’s an example of one quest that they solved relatively easily:

Lisbon (the clue is an image of four mineral specimens)

Solution: Earthquake, Lisbon, Portugal

The key to this quest is realizing that the minerals can be assigned a number using the Mohs hardness scale. In the order the minerals appear, those numbers are 1, 7, 5, and 5… or 1755, a year. Students could google “events in 1755,” they might actually know what happened, or they might have read the syllabus and found the sidebar I included about the earthquake in Lisbon, Portugal, that happened on 1 November, 1755.
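For the curious, here is a rough sketch in Python of the Mohs-to-digits step. The quest image isn’t reproduced here, so the minerals below (talc, quartz, apatite) are stand-ins chosen only because their hardnesses give 1, 7, 5, and 5; the actual quest used its own specimens.

```python
# A toy sketch of the Mohs-to-digits step. The minerals listed here are
# assumptions consistent with hardnesses 1, 7, 5, and 5; the quest image
# used its own specimens.
MOHS_HARDNESS = {
    "talc": 1, "gypsum": 2, "calcite": 3, "fluorite": 4, "apatite": 5,
    "orthoclase": 6, "quartz": 7, "topaz": 8, "corundum": 9, "diamond": 10,
}

clue_minerals = ["talc", "quartz", "apatite", "apatite"]  # in the order pictured
year = "".join(str(MOHS_HARDNESS[m]) for m in clue_minerals)
print(year)  # -> "1755", the year of the Lisbon earthquake
```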

Here is another one. It proved a bit more challenging for some students.

dancing men (the clue is a message written in the dancing men cipher)

Solution: Paricutin. It’s a cinder cone while the others are stratovolcanoes.

If you’re a fan of Sherlock Holmes, you’ll recognize this as the cipher from The Adventure of the Dancing Men. Solving the cipher gives the following rows of letters:

PINATUBORA

INIERFUJIY

AMAPARICUT

IN

If you break up the rows differently, you can get this:

PINATUBO

RAINIER

FUJIYAMA

PARICUTIN

These are the names of volcanoes. It’s possible students will recall what I’ve said about those volcanoes in class, and immediately realize that the first three are stratovolcanoes, while the last is a cinder cone. On the other hand, the solution might involve looking up each volcano, listing the important characteristics, noticing that Parícutin is a cinder cone while the others are not, and verifying that stratovolcano versus cinder cone is an important distinction. The latter scenario requires a lot of work and ends in a very clear idea about the difference between a stratovolcano and a cinder cone.
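For anyone who wants to see the regrouping step done mechanically, here is a rough sketch in Python. It assumes the cipher has already been decoded into one long run of letters, and it matches against a candidate list of volcano names, so it illustrates only the regrouping, not the decoding.

```python
# A sketch of the row-regrouping step, assuming the dancing-men cipher has
# already been decoded into one run of letters. The candidate list below is
# what makes the regrouping unambiguous.
decoded = "PINATUBORAINIERFUJIYAMAPARICUTIN"
candidates = ["PINATUBO", "RAINIER", "FUJIYAMA", "PARICUTIN"]

names = []
i = 0
while i < len(decoded):
    for name in candidates:
        if decoded.startswith(name, i):
            names.append(name)
            i += len(name)
            break
    else:
        raise ValueError(f"no candidate volcano name matches at position {i}")

print(names)  # ['PINATUBO', 'RAINIER', 'FUJIYAMA', 'PARICUTIN']
```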

Anything that can be googled will be googled

When designing these quests there were a few things I wanted to accomplish. One was that students from a variety of backgrounds and with a variety of interests would be a valuable part of the solution. In fact, I wanted them to realize something very specific: that their background and perspective, whether they considered themselves “science people” or not, was indeed valuable for figuring out a puzzle about science.

To make Mission (Geo)Impossible a meaningful exercise, it was important that students could not simply look up the answer somewhere. As far as possible, I tried to make the clues things that could not be put into a search engine, or something that could be searched, but would only give another clue to the problem. At first blush, this might sound next to impossible, but here’s an example of something unsearchable:

branches

Detail of a painting at St. Peter’s College

This is a blurry photograph of a corner of a painting. It’s a painting that students walk by daily. The photo is of tree branches, but they aren’t necessarily recognizable as such. There is simply nothing about this that gives you a searchable string. Students would have to recognize the painting, and proceed from there. In this case the deliverable was the age of bedrock beneath the College. Students had to realize that the painting was giving them a location, and then look at a geologic map.

Here are a few other things I kept in mind:

No extraneous information

I didn’t include things that weren’t relevant to the quest. At least not on purpose. The quests were hard enough, and there wasn’t anything to be accomplished by sending students on a false path. They did that on their own often enough.

No process of elimination

I wouldn’t give them a quest in the style of multiple choice because they could simply keep guessing until they got the right answer. Where quests had a finite number of options, there was either work involved to get those options (like the dancing men quest), or work involved in explaining a choice (ditto the dancing men).

Don’t restrict the quests to things explicitly addressed in class.

There is value in extrapolating knowledge and building on it. For example, in the case of Smokey and the Bandit, the plunging folds are easy enough to pick out with some searching, if you know what you’re looking for. However, the plunging folds I show in class are of the “textbook” variety. The ones between Atlanta and Texarkana are much more complex, but still discoverable if students think carefully about how plunging folds are expressed at Earth’s surface. In the end, they found the folds.

Use a wide variety of clues and puzzle types

As best I could, I used clues that involved a wide range of topics (literature, art, science, popular culture of the 1970s). I used puzzles that would appeal to different ways of thinking. Some involved interpreting images to get a word or phrase. For example, a pile of soil next to an apple core would be interpreted as “earth” and “core.” Some were ciphers, and some involved recognizing objects. Some were narratives, like the one below. Students used the stories to get the differences in timing between P-wave and S-wave arrivals, then used triangulation to find the location of an earthquake. But they had to find a map of Middle Earth first, and do some km to miles conversions.

earthquake

It was an earthquake in Fangorn Forest.
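For the curious, here is a rough sketch in Python of the distance-and-triangulation arithmetic. The station positions, S-minus-P lags, and wave speeds are illustrative assumptions (roughly typical crustal values), not the numbers from the Middle Earth quest.

```python
# A rough sketch of the arithmetic behind this kind of quest: turn S-minus-P
# arrival lags into distances, then trilaterate an epicenter. Station
# coordinates, lags, and seismic velocities are illustrative assumptions.

VP = 6.0   # assumed P-wave speed, km/s (typical continental crust)
VS = 3.5   # assumed S-wave speed, km/s

def distance_from_lag(sp_lag_s):
    """Distance (km) implied by an S-minus-P arrival-time lag (s)."""
    return sp_lag_s * (VP * VS) / (VP - VS)

# Hypothetical stations: (x km, y km, S-P lag in seconds)
stations = [(0.0, 0.0, 5.95), (100.0, 0.0, 7.99), (0.0, 80.0, 7.62)]

# Convert lags to distances (the radii of three circles on the map).
circles = [(x, y, distance_from_lag(lag)) for x, y, lag in stations]

# Trilateration: subtracting the first circle equation from the others
# leaves two linear equations in the epicenter coordinates (x, y).
(x1, y1, d1), (x2, y2, d2), (x3, y3, d3) = circles
a11, a12 = 2 * (x1 - x2), 2 * (y1 - y2)
a21, a22 = 2 * (x1 - x3), 2 * (y1 - y3)
b1 = d2**2 - d1**2 + x1**2 - x2**2 + y1**2 - y2**2
b2 = d3**2 - d1**2 + x1**2 - x3**2 + y1**2 - y3**2

det = a11 * a22 - a12 * a21
x = (b1 * a22 - b2 * a12) / det
y = (a11 * b2 - a21 * b1) / det

KM_PER_MILE = 1.609344
print(f"Epicenter near ({x:.0f} km, {y:.0f} km) on the map grid")
print(f"Nearest station is about {d1:.0f} km ({d1 / KM_PER_MILE:.0f} mi) away")
```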

 

So how did this go over with the victims (er, students)?

My class was never more than 23 students, and the uptake was 2-3 active teams each time. I would need surveillance throughout the College to see exactly how they responded to the quests (and I’m not sure I’d like what I’d hear). But from conversations with students it seemed there was the right amount of frustration to make solving the quests feel like an accomplishment. In all but one case, teams that started Mission (Geo)Impossible also finished it, or else ran out of time trying.

 

They submitted solutions at 5:30 in the morning, 11:00 in the evening, and sometimes during the lecture. They brought their quests to the lecture in case I dropped a hint. They came to visit me and said things like, “This is driving me crazy,” and “Why, Karla? Why?” I successfully (I think) suppressed a diabolical grin on most occasions. In fact, they put so much work into this that I felt bad about it from time to time. But it was an optional activity, I rationalized.

Wiggle room

When I started this I had no idea whatsoever whether students would be successful, but I did intend to supply a safety net if it was needed, and make sure their work was rewarded. This is my policy with everything I try in my courses.

In the first iteration things bogged down part way through the term, so to get students going again, I gave them an option: they could request one additional clue to a quest of their choice, or they could request clues for three quests, but I would pick which ones, and I wouldn’t tell them which I chose. (Heh heh.)

Naturally, the teams negotiated an arrangement whereby they sorted out which combination of options would work out to their collective advantage, and then they shared the information. At that point I was very glad I insisted on teams rather than letting individuals play, because as individuals they could conceivably ask for enough clues to specific quests to beat the system.

 

In the second iteration, I tried a new style of puzzles that turned out to be more difficult than I intended. By the end of the term, and after a massive effort, the teams were only about half way through. In that case I awarded the full 5% to the team that had completed the most quests, and 2.5% to the other team.

 

The third iteration

I will do this again, but with fewer puzzles (13- still a prime number), and with fewer difficult quests than last time. I will also give students some examples of quests from previous iterations. I’m hoping that will convince more students to get involved.

I won’t relax the rule about participating in teams. I tried that the second time around, and the individual participants either did not get started, or got hopelessly off on the wrong track. I do need to find a solution for students who want to participate, but aren’t comfortable approaching classmates they don’t know.

But I will find a way to get as many students involved as possible, because the potential for this activity to give students confidence in their ability to approach difficult tasks- even seemingly impossible ones- is just too important.

Oh yes, and by the way…

I dare you.

dare

Deliverable: x + y + z

Categories: Challenges, Learning technologies, Teaching strategies

When good grades are bad information

Assignment grades versus exam grades

This week I set out to test a hypothesis. In one of my distance education courses, I regularly get final exam scores that could pass for pant sizes. I have a few reasons to suspect that the exam itself is not to blame. First, it consists of multiple-choice questions that tend toward definitions, and general queries about “what,” rather than “why” or “how.” Second, the exam questions come directly from the learning objectives, so there are no surprises. Third, if the students did nothing but study their assignments thoroughly, they would have enough knowledge to score well above the long-term class average. My hypothesis is that students do poorly because the class is easy to put on the back burner. When the exam comes around, they find themselves cramming a term’s worth of learning into a few days.

Part of the reason the class is easy to ignore is that the assignments can be accomplished with a perfunctory browsing of the textbook. In my defense, there isn’t much I can do about fixing the assignments.  Someone above my pay grade would have to start the machinery of course designers, contracts, and printing services. In defense of the course author, I’m not entirely sure how to fix the assignments. If a student were so inclined (and some have been), the assignments could be effective learning tools.

Another problem is that students tend to paraphrase the right part of the textbook.  Even if I suspect that they don’t understand what they’ve written, I have few clues about what to remedy.  The final result is that students earn high grades on their assignments. If they place any weight at all on those numbers, I fear they seriously overestimate their learning, and seriously underestimate the amount of work they need to put into the class.

So, back to testing my hypothesis: I decided to compare students’ averages on assignments with their final exam scores. I reasoned that a systematic relationship would indicate that assignment scores reflected learning, and therefore the exam was just too difficult. (Because all of the questions came undisguised from the learning objectives, I eliminated the possibility that a lack of relationship would mean the exam didn’t actually test on the course material.)

I also went one step further, and compared the results from this course (let’s call it the paraphrasing course) with another where assignments required problem-solving, and would presumably be more effective as learning tools (let’s call that the problem-solving course).

My first impression is that the paraphrasing course results look like a shotgun blast, and the problem-solving course results look more systematic. An unsophisticated application of Excel’s line fitting suggests that 67% of the variance in exam scores for the problem-solving course can be explained if assignment grades reflect knowledge gained, while only 27% of the variance in the paraphrasing course can be explained that way.

I’m hesitant to call the hypothesis confirmed yet, because the results don’t really pass the thumb test. In the thumb test you cover various data with your thumb to see if your first impression holds. For example, if you cover the lowest exam score in the paraphrasing course with your thumb, the distribution could look a little more systematic, albeit with a high standard deviation. If you cover the two lowest exam scores in the problem-solving course, the distribution looks a little less so. There is probably a statistically sound version of the thumb test (something that measures how much the fit depends on any particular point or set of points, and gives low scores if the fit is quite sensitive) but googling “thumb test” hasn’t turned it up yet.
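As it turns out, the thumb test has statistically sound relatives: leave-one-out refits and influence measures such as Cook’s distance quantify how much a fit depends on any single point. Here is a rough sketch in Python of the leave-one-out version, using made-up assignment and exam scores rather than the actual course data.

```python
# A minimal sketch of a "leave-one-out" thumb test, using made-up
# (assignment %, exam %) pairs rather than the real course data. If dropping
# a single point changes R^2 a lot, the fit is fragile.

def r_squared(xs, ys):
    """R^2 of an ordinary least-squares line fit of ys on xs."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (slope * x + intercept)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    return 1 - ss_res / ss_tot

# Hypothetical scores: assignment averages vs. final exam marks.
assignments = [92, 88, 95, 90, 85, 93, 89, 96]
exams       = [55, 48, 72, 60, 30, 65, 58, 75]

print(f"R^2 with all points: {r_squared(assignments, exams):.2f}")
for i in range(len(assignments)):
    xs = assignments[:i] + assignments[i + 1:]
    ys = exams[:i] + exams[i + 1:]
    print(f"R^2 without point {i}: {r_squared(xs, ys):.2f}")
```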

From looking at the results, I’ve decided that I would consider a course to be wildly successful if the grades on a reasonably set exam were systematically higher than the grades on reasonably set assignments—it would mean that the students learned something from the errors they made on their assignments, and were able to build on that knowledge.

 

Categories: Assessment, Distance education and e-learning

Wreck it good

nasty equation

This week I graded an exceptionally well-written exam. The student used exam-writing and study strategies that I’ve found to be effective in my own experience. This got me reflecting on my time as a student and I remembered the one thing that helped me more than any other skill or strategy that I developed: wreck it good.

Proclaimed in the same spirit as “git’er done!” (and therefore exempt from the usual grammatical rules), “wreck it good” was my license to fail.  Not only that, it made failure an imperative, which turned out to be a very good thing.

Here’s the scenario: I was taking a course in numerical modeling as part of my doctorate at Penn State.  The course included assignments that required writing computer code to simulate a variety of natural systems and processes.  I’d had some experience with programming, but the programming environment was new to me, and the application was also new.  Coding can be frustrating and challenging.  From time to time, my code produced such bizarre results that I had to remind myself that the computer was doing what I told it to, and not manifesting malicious intent.

As stressful as the course was (my husband claims it took five years off my lifespan), I look back on it with only positive feelings as a result of having given myself permission to fail.  It started out as a matter of pride.  I didn’t want to ask for help with my code only to hear “Did you try [insert obvious course of action that didn’t occur to me]?”  I resolved instead that I would try everything I could think of and make a complete and utter disaster of my code if necessary—I would wreck it good.  Then I could ask for help with the confidence that either the computer was broken, or the task was impossible when viewed from every conceivable angle by a normal human being.  I was nothing if not thorough.

A strange thing happened on the way to wrecking it good… I actually began to have fun with troubleshooting my code.  There was no risk involved in failure, because I could ask for help at any time.  That meant troubleshooting was more about exploring possibilities than fixing problems… and oddly enough, despite my best efforts, I never did wreck it good enough to need my instructor’s help.  It was very empowering to find out over and over again that however ugly and impossible the problem looked initially, I could handle it.  Bring it on, partial differential equation… cause I’m going to wreck you good!

Wrecking it good isn’t just for computer programming.  It works great for doing battle with math problems, or for posing challenging study questions to diagnose knowledge gaps.  It makes sense to try to fail—it is a full frontal attack on your learning challenges, and they won’t stand a chance.

Categories: Learning strategies
