Posts Tagged With: critical thinking

The Mission (Geo)Impossible Scavenger Hunt

It was a Saturday morning like any other and my husband and I were enjoying a cup of coffee while he channel surfed to find a program related to disassembling and reassembling automobiles. He paused on a channel showing the movie Smokey and the Bandit, a classic film from 1977 about an epic beer run between Atlanta and Texarkana. “I wonder if I drove that road,” he said.

So we looked at Google Earth and found that there were two possible highways that Smokey and the Bandit could have used to move their beer. And then I saw it: the intervening space had a variety of superposed plunging folds. The seed for Mission (Geo)Impossible was planted the moment I began to wonder how I might lead students on a path to make that discovery for themselves. I don’t recall whether it was I or my husband who came up with the actual notion of torturing (I mean challenging) students with a scavenger hunt for information, but it certainly appealed to my nefarious side.

What is it, exactly?

Download the handout here.

Mission (Geo)Impossible is a series of 19 quests that teams of students complete for extra credit. Why 19? I like prime numbers. 17 seemed too few, and 23 too many. The first time around, the optimal number of quests was one of many unconstrained variables. Why extra credit? Because when I make up the quests I honestly have no idea whether students will be able to do them. They are meant to be challenging problems, and are of a type that I’ve never seen as part of an assessment or activity. Students go into this knowing it will be difficult (I make sure they know), and do so by their own choice, so I can feel a little less guilty about how hard they work.

Why on Earth would students want to do this?

The enticement for them to try Mission (Geo)Impossible is a substantial bonus on their final grade. If their team completes all 19 quests, 2.5% is added to their grade. That means a 60% becomes a 62.5%. If their team finishes first, they get another 2.5% for a total of 5%.

That might seem like a lot, and I wrestled with whether this was appropriate or not, but in the end I decided it was legitimate for three reasons. First, it is a term-long project and they work very hard on it. Second, to complete it they must learn a lot of geology and do synthesis tasks at a level that I would never ask of students in an introductory physical geology class under other circumstances. Finally, I’ve applied curves of similar size to final grades before, with serious misgivings. To my mind, this extra credit work is a heck of a lot more legitimate than bumping grades so the class average falls in the magical 60% to 65% range.

I also try to entice them by imbuing the whole undertaking with a spirit of playful competition. Students are competing with me: I tell them I designed the quests to mess with them (true), and challenge them to beat me. They are also competing with their classmates. There is a bit of secret agent role-playing, too. It is Mission (Geo)Impossible, after all. They “activate” their teams by emailing a team name and roster to Mission (Geo)Impossible Command Central, and there is a Quest Master who confirms their activation.

How does it work?

The mechanics of the scavenger hunt are designed to keep the level of work manageable for me, to keep my interactions with teams as fair as possible, and also to leave students to their own devices. Those devices turn out to be very good, and likely better than students realize themselves, which is a big reason why I like this activity.

To begin with, I post a pdf containing 19 quests on the course website. The procedure they follow is to email their quest solutions to Mission (Geo)Impossible Command Central, and the Quest Master responds with one of three words: “correct,” “incorrect,” or “proceed.” “Proceed” means some part of their answer is correct, or they are going in the right direction, but I don’t provide any information about what they’re doing right. That keeps me from having to worry about whether I’ve given one team more of a clue than another.

They can submit as many solutions as they like, and they have taken advantage of this in interesting ways. One team submitted “anagram” as their first attempt on a quest. They were trying to figure out what sort of puzzle they were solving. If they had gotten a “proceed” they’d know it was an anagram. The puzzle turned out to be a substitution cipher rather than an anagram, but it was a clever approach nonetheless.

So what do these puzzles look like?

The quests specify a target (a general thing to aim at), and deliverables (what students must submit). Then they give the clue.

Here’s an example of one quest that they solved relatively easily:

[Image: the quest clue, a sequence of four minerals]

Solution: Earthquake, Lisbon, Portugal

The key to this quest is realizing that the minerals can be assigned a number using the Mohs hardness scale. In the order the minerals appear, those numbers are 1, 7, 5, and 5… or 1755, a year. Students could google “events in 1755,” they might actually know what happened, or they might have read the syllabus and found the sidebar I included about the earthquake in Lisbon, Portugal, that happened on 1 November, 1755.
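For anyone who wants to see the lookup written out, here is a minimal sketch. The clue image isn’t reproduced above, so the four minerals below are hypothetical stand-ins, chosen only because their Mohs hardnesses spell out 1, 7, 5, 5:

```python
# Minimal sketch of the quest logic. The original clue image isn't reproduced
# here, so these four minerals are hypothetical stand-ins, chosen only because
# their Mohs hardnesses spell out 1, 7, 5, 5.
MOHS = {
    "talc": 1, "gypsum": 2, "calcite": 3, "fluorite": 4, "apatite": 5,
    "orthoclase": 6, "quartz": 7, "topaz": 8, "corundum": 9, "diamond": 10,
}

clue_minerals = ["talc", "quartz", "apatite", "apatite"]  # stand-ins, not the real clue

year = "".join(str(MOHS[m]) for m in clue_minerals)
print(year)  # -> "1755", the year of the Lisbon earthquake
```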

Here is another one. It proved a bit more challenging for some students.

[Image: the quest clue, written in the dancing-men cipher]

Solution: Parícutin. It’s a cinder cone while the others are stratovolcanoes.

If you’re a fan of Sherlock Holmes, you’ll recognize this as the cipher from The Adventure of the Dancing Men. Solving the cipher gives the following rows of letters:

PINATUBORA

INIERFUJIY

AMAPARICUT

IN

If you break up the rows differently, you can get this:

PINATUBO

RAINIER

FUJIYAMA

PARICUTIN

These are the names of volcanoes. It’s possible students will recall what I’ve said about those volcanoes in class, and immediately realize that the first three are stratovolcanoes, while the last is a cinder cone. On the other hand, the solution might involve looking up each volcano, listing the important characteristics, noticing that Parícutin is a cinder cone while the others are not, and verifying that stratovolcano versus cinder cone is an important distinction. The latter scenario requires a lot of work and ends in a very clear idea about the difference between a stratovolcano and a cinder cone.

Anything that can be googled will be googled

When designing these quests there were a few things I wanted to accomplish. One was that students from a variety of backgrounds and with a variety of interests would be a valuable part of the solution. In fact, I wanted them to realize something very specific: that their background and perspective, whether they considered themselves “science people” or not, was indeed valuable for figuring out a puzzle about science.

To make Mission (Geo)Impossible a meaningful exercise, it was important that students could not simply look up the answer somewhere. As far as possible, I tried to make the clues things that could not be put into a search engine, or something that could be searched, but would only give another clue to the problem. At first blush, this might sound next to impossible, but here’s an example of something unsearchable:

[Image: the quest clue. Detail of a painting at St. Peter’s College]

This is a blurry photograph of a corner of a painting. It’s a painting that students walk by daily. The photo is of tree branches, but they aren’t necessarily recognizable as such. There is simply nothing about this that gives you a searchable string. Students would have to recognize the painting, and proceed from there. In this case the deliverable was the age of bedrock beneath the College. Students had to realize that the painting was giving them a location, and then look at a geologic map.

Here are a few other things I kept in mind:

No extraneous information

I didn’t include things that weren’t relevant to the quest. At least not on purpose. The quests were hard enough, and there wasn’t anything to be accomplished by sending students on a false path. They did that on their own often enough.

No process of elimination

I wouldn’t give them a quest in the style of multiple choice because they could simply keep guessing until they got the right answer. Where quests had a finite number of options, there was either work involved to get those options (like the dancing men quest), or work involved in explaining a choice (ditto the dancing men).

Don’t restrict the quests to things explicitly addressed in class

There is value in extrapolating knowledge and building on it. For example, in the case of Smokey and the Bandit, the plunging folds are easy enough to pick out with some searching, if you know what you’re looking for. However, the plunging folds I show in class are of the “textbook” variety. The ones between Atlanta and Texarkana are much more complex, but still discoverable if students think carefully about how plunging folds are expressed at Earth’s surface. In the end, they found the folds.

Use a wide variety of clues and puzzle types

As best I could, I used clues that involved a wide range of topics (literature, art, science, popular culture of the 1970s). I used puzzles that would appeal to different ways of thinking. Some involved interpreting images to get a word or phrase. For example, a pile of soil next to an apple core would be interpreted as “earth” and “core.” Some were ciphers, and some involved recognizing objects. Some were narratives, like the one below. Students used the stories to get the differences in timing between P-wave and S-wave arrivals, then used triangulation to find the location of an earthquake. But they had to find a map of Middle Earth first, and do some km to miles conversions.

[Image: the narrative quest clue]

It was an earthquake in Fangorn Forest.
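The actual numbers from this quest aren’t reproduced here, but the arithmetic it relies on is standard: the delay between P-wave and S-wave arrivals at a station fixes the distance to the source, and three such distances triangulate it. A rough sketch, using typical crustal velocities I’ve assumed for illustration (not the quest’s values):

```python
# Rough sketch of the arithmetic behind the narrative quests. The velocities
# and the example delay are illustrative assumptions, not the quest's numbers.
VP = 6.0   # assumed P-wave speed, km/s
VS = 3.5   # assumed S-wave speed, km/s

def epicentral_distance_km(sp_delay_s: float) -> float:
    """Distance implied by an S-minus-P arrival delay, in km."""
    return sp_delay_s / (1.0 / VS - 1.0 / VP)

def km_to_miles(km: float) -> float:
    return km * 0.621371

# Example: a 30 s S-minus-P delay puts the source about 250 km (~157 mi) away.
d = epicentral_distance_km(30.0)
print(f"{d:.0f} km = {km_to_miles(d):.0f} miles")
# Repeating this for three stations and intersecting the three distance circles
# on the map (of Middle Earth, in this case) is the triangulation step.
```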


So how did this go over with the victims, er, students?

My class was never more than 23 students, and the uptake was 2-3 active teams each time. I would need surveillance throughout the College to see exactly how they responded to the quests (and I’m not sure I’d like what I’d hear). But from conversations with students it seemed there was the right amount of frustration to make solving the quests feel like an accomplishment. In all but one case, teams that started Mission (Geo)Impossible also finished it, or else ran out of time trying.


They submitted solutions at 5:30 in the morning, 11:00 in the evening, and sometimes during the lecture. They brought their quests to the lecture in case I dropped a hint. They came to visit me and said things like, “This is driving me crazy,” and “Why, Karla? Why?” I successfully (I think) suppressed a diabolical grin on most occasions. In fact, they put so much work into this that I felt bad about it from time to time. But it was an optional activity, I rationalized.

Wiggle room

When I started this I had no idea whatsoever whether students would be successful, but I did intend to supply a safety net if it was needed, and make sure their work was rewarded. This is my policy with everything I try in my courses.

In the first iteration things bogged down part way through the term, so to get students going again, I gave them an option: they could request one additional clue to a quest of their choice, or they could request clues for three quests, but I would pick which ones, and I wouldn’t tell them which I chose. (Heh heh.)

Naturally, the teams negotiated an arrangement whereby they sorted out which combination of options would work out to their collective advantage, and then they shared the information. At that point I was very glad I insisted on teams rather than letting individuals play, because as individuals they could conceivably ask for enough clues to specific quests to beat the system.


In the second iteration, I tried a new style of puzzles that turned out to be more difficult than I intended. By the end of the term, and after a massive effort, the teams were only about half way through. In that case I awarded the 5% to the team that completed the most quests, and 2.5% to the other team.


The third iteration

I will do this again, but with fewer puzzles (13, still a prime number), and with fewer difficult quests than last time. I will also give students some examples of quests from previous iterations. I’m hoping that will convince more students to get involved.

I won’t relax the rule about participating in teams. I tried that the second time around, and the individual participants either did not get started, or got hopelessly off on the wrong track. I do need to find a solution for students who want to participate, but aren’t comfortable approaching other students in the class who they don’t know.

But I will find a way to get as many students involved as possible, because the potential for this activity to give students confidence in their ability to approach difficult tasks, even seemingly impossible ones, is just too important.

Oh yes, and by the way…

I dare you.

[Image: the quest clue]

Deliverable: x + y + z


A Guide to Arguing Against Man-Made Climate Change

If you must, then at least do it properly…

The debate about climate change ranges from people arguing that it isn’t happening at all, to those who argue that it is happening, but is entirely natural. The debate can become quite nasty, and part of the reason for this is not that people disagree, but that they disagree without following the rules of scientific discourse. I’m guessing in many cases this is accidental: a kind of cultural unawareness. It’s like making an otherwise innocuous hand gesture while on vacation in a foreign country, only to learn later that it was the rudest possible thing you could have done.

I’ve been annoyed by poor-quality discourse on this topic for some time, and written a few draft blog posts about it, but I’ll defer to the INTJ Teacher for a summary of the key issue (and the main reason I no longer read comment sections after news stories about climate change).

[Image: the INTJ Teacher’s summary]

So now that you know the problem in general terms, let’s talk specifics.

Dismissing the data

First of all, if you’re going to make claims that the data about climate change are problematic in some way, then you should know that there is no one data set. There are thousands of data sets worked on by thousands of people.

Some people seem to think that the whole matter rests on the “hockey stick” diagram of Michael Mann, Raymond Bradley, and Malcolm Hughes published in 1999. (You can download the paper as a pdf here.)

[Figure: annotated hockey-stick diagram]

Briefly, this was an exercise in solving two kinds of problems: (1) taking temperature information from a variety of sources (e.g., tree rings) and turning it into something that could reasonably be plotted on the same diagram, and (2) figuring out how to take temperature measurements from all over the world and combine them into something representative of climate as a whole. The main reason it became controversial was that it showed a clear increase in temperature since 1850, and that result was not optimal for a certain subset of individuals with a disproportionate amount of political clout. There is a nice description of the debate about the diagram here, including arguments and counter-arguments, along with the relevant citations.

Those arguments are moot at this point, because the PAGES 2k consortium has compiled an enormous amount of data and done the whole project over again, getting essentially the same result (the green line in the figure above).  I can’t help but think that this was an in-your-face moment for Mann et al. (“In your face, Senator Inhofe!  In your face, Rep. Barton!  How d’ya like them proxies?!”)

Despite these results, if you still want to argue that the data are bad, you will need to do the following:

  • Specify which data set you are referring to. Usually this takes the form of a citation to the journal article where the data were first published.
  • Specify what is wrong with it. Was the equipment malfunctioning? Was the wrong thing being measured? Was there something in particular wrong with the analysis?
  • Assuming you are correct about that particular data set, explain why problems with that one data set can be used to dismiss conclusions from all of the other data sets. This will mean familiarizing yourself with the other data and the relevant arguments (although if you are arguing against them you would presumably have done this already).

Things that are not acceptable:

  • Attacks against the researchers. It is irrelevant whether the researchers are jerks, or whether you think they’ve been paid off. What matters are the data. If you can’t supply the necessary information, you have only conjecture.
  • Backing up your argument with someone else’s expert opinion (usually in the form of a url) if that opinion does not cover the points in the first list. It is discourteous to expect the person you are arguing with to hunt down the data backing someone else’s opinion in order to piece together your argument.
  • Arguing from the assumption that man-made climate change isn’t happening. If that’s your starting point, your arguments will tend to involve dismissing data not because there are concrete reasons to do so, but because based on your assumption, they can’t be true. This may be personally satisfying, and ring true to you, but it lacks intellectual integrity. If your argument is any good, that assumption won’t be necessary.

Climate models and uncertainty

It is a common misconception that uncertainty in the context of climate models means “we just don’t know.” Uncertainty is an actual number or envelope of values that everyone is expected to report. It describes the range of possibilities around a particular most likely outcome, and it can be very large or very small.

If you plan to dismiss model results on the basis of uncertainty, you will need to demonstrate that the uncertainty is too large to make the model useful. In cases where the envelope of uncertainty is greater than short-term variations, it may still be the case that long-term changes are much larger than the uncertainty.
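To put a toy number on that idea (made-up values chosen for illustration, not taken from any real data set or model): suppose temperatures wiggle by a couple of tenths of a degree from year to year, but follow a steady trend of a couple of hundredths of a degree per year. The scatter is real, yet the century-scale change dwarfs it.

```python
# Toy illustration only: synthetic "temperature anomalies" with an assumed
# 0.02 deg C/yr trend plus 0.15 deg C of random year-to-year noise. Nothing
# here is real data or real model output.
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1900, 2001)
temps = 0.02 * (years - years[0]) + rng.normal(0.0, 0.15, years.size)

slope, intercept = np.polyfit(years, temps, 1)
residual_sd = np.std(temps - (slope * years + intercept))
century_change = slope * (years[-1] - years[0])

print(f"year-to-year scatter ~ {residual_sd:.2f} deg C")
print(f"change over the century ~ {century_change:.1f} deg C")
# The scatter (one honest measure of the wiggle room) is a few tenths of a
# degree; the long-term change is roughly ten times larger.
```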

Another misconception is that climate models are designed to show climate change in the same way that a baking soda and vinegar volcano is designed to demonstrate what a volcano is. Climate models take what we know of the physics and chemistry of the atmosphere, and add in information like how the winds blow and how the sun heats the Earth. Then we dump in a bunch of CO2 (mathematically speaking) and see what happens. In other words, models specify mechanisms, not outcomes. They are actually the reverse of the baking soda and vinegar volcano.

The mathematical equations in a model must often be solved by approximation techniques (which are not at all ad hoc, despite how that sounds), and simplified in some ways so computers can actually complete the calculations in a reasonable timeframe. However, I would argue that they are the most transparent way possible to discuss how the climate might change. They involve putting all our cards on the table and showing our best possible understanding of what’s going on, because it’s got to be in writing (i.e., computer code).
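If it helps to see “mechanisms in, outcomes out” concretely, here is a deliberately crude, zero-dimensional sketch, nothing like an actual climate model: a textbook-style logarithmic CO2 forcing law and an assumed sensitivity parameter go in, and the warming comes out as a computed result rather than a built-in answer.

```python
import math

# Deliberately crude, zero-dimensional sketch. The constants are textbook-style
# approximations (a logarithmic CO2 forcing law and an assumed sensitivity of
# 0.8 deg C per W/m^2), not values taken from any actual climate model.
def co2_forcing(c_ppm: float, c0_ppm: float = 280.0) -> float:
    """Radiative forcing from a CO2 change, in W/m^2."""
    return 5.35 * math.log(c_ppm / c0_ppm)

def equilibrium_warming(forcing: float, sensitivity: float = 0.8) -> float:
    """Temperature response in deg C for a given forcing."""
    return sensitivity * forcing

# Mechanism in: the forcing law and the sensitivity. Outcome out: the warming.
f = co2_forcing(410.0)  # roughly present-day CO2 concentration, ppm
print(f"{f:.1f} W/m^2 -> {equilibrium_warming(f):.1f} deg C at equilibrium")
```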

The models aren’t top secret. If you really want to know what’s in them, someone will be able to point you to the code. If the someone is very accommodating (and they often are if you’re not being belligerent or simply trying to waste their time) they might explain some of it to you. But whether or not they do that effectively is irrelevant, because if you’re going to make claims about the models, it’s your obligation to make sure you know what you’re talking about.

If climate changes naturally, then none of the present change is man-made

This is a false dichotomy. No-one is arguing that nature isn’t involved in the usual ways. What they are saying is that the usual ways don’t do all of what we’re seeing now.

A simple way to think about it is as a shape-matching exercise. We would expect that if some trigger in nature is causing the climate to change, then a graph of the temperature change should resemble that of the triggering mechanism. The IPCC has done a nice job of making this comparison easy. In the image below I’ve marked up one of their figures from the Fifth Assessment Report in the way I usually do when I’m researching something. Panel a shows the temperature record (in black), and the panels below it show the changes in temperature attributable to different causes. In the upper right I’ve taken panels b through e and squashed them until they are on the same scale as panel a.


[Figure: annotated shape-matching exercise]

A common argument against man-made climate change is to say that sunspot cycles are to blame. You can see the temperature variations that result from these cycles in panel b, and again at the top right. While there are small-scale fluctuations in panel a, it is quite evident that the effects of sunspot cycles cannot account for the shape of the temperature record, either in its upward trend or in its timescale. Even if you added in volcanoes (panel c) and the El Niño/La Niña cycles (panel d), you couldn’t make the trend that appears in panel a.

The only graph with a similar shape is the one that shows the temperature variations we would expect from adding CO2 and aerosols at the rate humans have been doing it (panel e). The red line in panel a is what you get if you add together panels b through e. It doesn’t have as much variation as the black line, meaning there are still other things at play, but it does capture the overall trends.

You needn’t rely on someone else’s complex mathematical analysis to do this. This is something you can do with your own eyeballs and commonsense-o-meter. You may still be inclined to argue that all of these graphs are made up out of thin air, but if you have a look at the many different studies involved (you can do this by reading the chapter in the IPCC report and looking at the citations), you should realize that it’s a pretty lame argument to dismiss all of them out of hand.

But if you are undeterred by said lameness, at that point anyone interested in a serious conversation is going to decide that it isn’t worth their time debating with you, because you’ve already decided that any evidence contrary to your point of view must be wrong. Nothing they can tell you or show you will make a difference, ergo the conversation is pointless. You will appear to be impervious to reason which, incidentally, will be assumed to be the case for your opinions on other matters as well, whether that impression is deserved or not. (“It’s not worth arguing with Jim… if he has an idea in his head, he won’t change his mind no matter what you tell him. He would stand under a blue sky and tell you it’s pink.”)

Scientists are paid off to say climate change is man-made

This argument is quite irrelevant given that the data are what matter, but I think part of this argument might be related to another misconception, so I’m going to address it anyway. It is true that there are millions of dollars spent on climate research grants, but this isn’t pocket money for scientists. To get a grant researchers must justify the amount of money they are asking for in terms of things like lab expenses, necessary travel, and the like. Often their salaries don’t even come into the picture because they are paid by employers, not grants. It is more likely they will be paying grad students and post docs than themselves. When they do apply for funding that will cover their own salaries, that salary must be justifiable in the context of what others in similar positions get paid. In many cases this is a matter of public record, so you can go look up the numbers for yourself.

Most research being done on climate change is funded by government grants. A very few scientists have funding from private donors (though there isn’t nearly as much money as for petroleum-related research), but there is a big check on what influence those donors can have. Research must still go through review to be published. Even if biased research did make it through review, scientists on grants are highly incentivized to pick it apart because that can be an argument for additional grants to further their own research. Getting a grant is a matter of professional survival, so competition for research grants is intense.

In conclusion

There is only one way to make arguments against man-made climate change, and that is to address data and conclusions honestly and appropriately. It may feel good to add your two cents, but if your comments amount to ad hominem attacks or generalizations so broad as to be silly, you shouldn’t expect a good response. You’ve just made the equivalent of a very rude hand gesture to people who value thoughtful and well-informed discourse.

This all seems obvious to me, and I’ve struggled to understand people who argue in a way that I can only describe as dishonest.  But maybe psychology is a factor.  The climate-change deniers need only suggest that scientists are making things up. People don’t want to feel that they’ve been fooled, and most don’t have the background to easily check such claims, so it feels much safer to settle into uninformed skepticism.


When good grades are bad information

[Figure: assignment grades versus exam grades]

This week I set out to test a hypothesis. In one of my distance education courses, I regularly get final exam scores that could pass for pant sizes. I have a few reasons to suspect that the exam itself is not to blame. First, it consists of multiple-choice questions that tend toward definitions, and general queries about “what,” rather than “why” or “how.” Second, the exam questions come directly from the learning objectives, so there are no surprises. Third, if the students did nothing but study their assignments thoroughly, they would have enough knowledge to score well above the long-term class average. My hypothesis is that students do poorly because the class is easy to put on the back burner. When the exam comes around, they find themselves cramming a term’s worth of learning into a few days.

Part of the reason the class is easy to ignore is that the assignments can be accomplished with a perfunctory browsing of the textbook. In my defense, there isn’t much I can do about fixing the assignments.  Someone above my pay grade would have to start the machinery of course designers, contracts, and printing services. In defense of the course author, I’m not entirely sure how to fix the assignments. If a student were so inclined (and some have been), the assignments could be effective learning tools.

Another problem is that students tend to paraphrase the right part of the textbook.  Even if I suspect that they don’t understand what they’ve written, I have few clues about what to remedy.  The final result is that students earn high grades on their assignments. If they place any weight at all on those numbers, I fear they seriously overestimate their learning, and seriously underestimate the amount of work they need to put into the class.

So, back to testing my hypothesis: I decided to compare students’ averages on assignments with their final exam scores. I reasoned that a systematic relationship would indicate that assignment scores reflected learning, and therefore the exam was just too difficult. (Because all of the questions came undisguised from the learning objectives, I eliminated the possibility that a lack of relationship would mean the exam didn’t actually test on the course material.)

I also went one step further, and compared the results from this course (let’s call it the paraphrasing course) with another where assignments required problem-solving, and would presumably be more effective as learning tools (let’s call that the problem-solving course).

My first impression is that the paraphrasing course results look like a shotgun blast, and the problem-solving course results look more systematic. An unsophisticated application of Excel’s line fitting suggests that 67% of the data for the problem-solving course can be explained if assignment grades reflect knowledge gained, while only 27% of the data from the paraphrasing course can be explained that way.

I’m hesitant to call the hypothesis confirmed yet, because the results don’t really pass the thumb test. In the thumb test you cover various data with your thumb to see if your first impression holds. For example, if you cover the lowest exam score in the paraphrasing course with your thumb, the distribution could look a little more systematic, albeit with a high standard deviation. If you cover the two lowest exam scores in the problem-solving course, the distribution looks a little less so. There is probably a statistically sound version of the thumb test (something that measures how much the fit depends on any particular point or set of points, and gives low scores if the fit is quite sensitive) but googling “thumb test” hasn’t turned it up yet.
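For what it’s worth, statistics does have formal versions of this idea (influence measures such as Cook’s distance do exactly what the thumb does). A bare-bones leave-one-out sketch, with placeholder arrays standing in for the actual grade data, might look like this:

```python
# Bare-bones, leave-one-out version of the "thumb test": refit the line with
# each point removed and see how much R^2 moves. The grade arrays below are
# placeholders, not the actual course data.
import numpy as np

def r_squared(x, y):
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    return 1.0 - np.sum(residuals**2) / np.sum((y - np.mean(y))**2)

def thumb_test(x, y):
    full = r_squared(x, y)
    print(f"R^2 with all points: {full:.2f}")
    for i in range(len(x)):
        loo = r_squared(np.delete(x, i), np.delete(y, i))
        print(f"  without point {i}: R^2 = {loo:.2f} (change {loo - full:+.2f})")
    # A large swing from dropping one point means the fit (and the 67% vs 27%
    # comparison) leans heavily on that point.

assignment_avg = np.array([85.0, 90.0, 78.0, 92.0, 88.0, 95.0, 70.0])  # placeholder
exam_score     = np.array([60.0, 72.0, 55.0, 80.0, 65.0, 35.0, 50.0])  # placeholder
thumb_test(assignment_avg, exam_score)
```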

From looking at the results, I’ve decided that I would consider a course to be wildly successful if the grades on a reasonably set exam were systematically higher than the grades on reasonably set assignments— it would mean that the students learned something from the errors they made on their assignments, and were able to build on that knowledge.


