Monthly Archives: April 2014

When good grades are bad information

[Figure: Assignment grades versus exam grades]

This week I set out to test a hypothesis. In one of my distance education courses, I regularly get final exam scores that could pass for pant sizes. I have a few reasons to suspect that the exam itself is not to blame. First, it consists of multiple-choice questions that tend toward definitions, and general queries about “what,” rather than “why” or “how.” Second, the exam questions come directly from the learning objectives, so there are no surprises. Third, if the students did nothing but study their assignments thoroughly, they would have enough knowledge to score well above the long-term class average. My hypothesis is that students do poorly because the class is easy to put on the back burner. When the exam comes around, they find themselves cramming a term’s worth of learning into a few days.

Part of the reason the class is easy to ignore is that the assignments can be accomplished with a perfunctory browsing of the textbook. In my defense, there isn’t much I can do about fixing the assignments.  Someone above my pay grade would have to start the machinery of course designers, contracts, and printing services. In defense of the course author, I’m not entirely sure how to fix the assignments. If a student were so inclined (and some have been), the assignments could be effective learning tools.

Another problem is that students tend to paraphrase the right part of the textbook.  Even if I suspect that they don’t understand what they’ve written, I have few clues about what to remedy.  The final result is that students earn high grades on their assignments. If they place any weight at all on those numbers, I fear they seriously overestimate their learning, and seriously underestimate the amount of work they need to put into the class.

So, back to testing my hypothesis: I decided to compare students’ averages on assignments with their final exam scores. I reasoned that a systematic relationship would indicate that assignment scores reflected learning, and therefore the exam was just too difficult. (Because all of the questions came undisguised from the learning objectives, I eliminated the possibility that a lack of relationship would mean the exam didn’t actually test the course material.)

I also went one step further, and compared the results from this course (let’s call it the paraphrasing course) with another where assignments required problem-solving, and would presumably be more effective as learning tools (let’s call that the problem-solving course).

My first impression is that the paraphrasing course results look like a shotgun blast, while the problem-solving course results look more systematic. An unsophisticated application of Excel’s line fitting suggests that assignment grades explain 67% of the variance in exam scores for the problem-solving course, but only 27% for the paraphrasing course.
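Excel’s trendline “R-squared” is the coefficient of determination of a least-squares line fit. A minimal sketch of the same calculation in Python, using invented scores rather than the actual class data:

```python
# Coefficient of determination (R^2) for a least-squares line fit,
# equivalent to the value Excel reports for a linear trendline.
# The scores below are invented for illustration, not real class data.

def r_squared(x, y):
    n = len(x)
    mean_x = sum(x) / n
    mean_y = sum(y) / n
    # Slope b and intercept a of the least-squares line y = a + b*x
    sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
    sxx = sum((xi - mean_x) ** 2 for xi in x)
    b = sxy / sxx
    a = mean_y - b * mean_x
    # R^2 = 1 - (residual sum of squares) / (total sum of squares)
    ss_res = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - mean_y) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

assignments = [92, 88, 95, 90, 85, 97, 91]  # hypothetical assignment averages
exams       = [55, 48, 70, 52, 40, 75, 60]  # hypothetical final exam scores
print(round(r_squared(assignments, exams), 2))
```

An R² near 1 would mean assignment averages track exam scores closely; a value like 0.27 means most of the scatter is unexplained by the fit.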

I’m hesitant to call the hypothesis confirmed yet, because the results don’t really pass the thumb test. In the thumb test you cover various data with your thumb to see if your first impression holds. For example, if you cover the lowest exam score in the paraphrasing course with your thumb, the distribution could look a little more systematic, albeit with a high standard deviation. If you cover the two lowest exam scores in the problem-solving course, the distribution looks a little less so. There is probably a statistically sound version of the thumb test (something that measures how much the fit depends on any particular point or set of points, and gives low scores if the fit is quite sensitive) but googling “thumb test” hasn’t turned it up yet.

From looking at the results, I’ve decided that I would consider a course to be wildly successful if the grades on a reasonably set exam were systematically higher than the grades on reasonably set assignments—it would mean that the students learned something from the errors they made on their assignments, and were able to build on that knowledge.

 

Categories: Assessment, Distance education and e-learning

New digs for Petragogy

[Figure: Ruby digs another hole]

This is my first post at Petragogy’s new home.  In its first incarnation, my blog was hosted by the University of Saskatchewan. It is a free service offered to University of Saskatchewan faculty, staff, and students, and that’s the problem. As a sessional lecturer, I don’t know from one term to the next whether I’ll have a job with the U of S, which means I could lose access to my blog at any time. Because I have plans for Petragogy, it doesn’t make sense for me to have the U of S continue to host it.

Moving a blog is remarkably like moving to a new apartment, although the packing up and unpacking are much easier.   (If only one could “export” and “import” boxes of belongings so effortlessly.) One closes the door on the old apartment for the last time with the faintest sense of loss for the comfort and familiarity of the old place, while at the same time feeling relieved to be out from under the landlord’s control.

As with moving to a new apartment, the new space needs decorating before it looks like home. I’ve negotiated a little more flexibility with the new landlord, so I splashed some paint around and put up new wallpaper. I also expect to add a few new appliances.

Once I get settled in, my plan is to add some pages. The course management systems I use have limitations, and I plan to use my pages to work around those limitations. At the U of S, I am able to post resources on course websites on Blackboard, but a new course website must be set up each time a course is offered. At Athabasca University, the course website remains the same, but the only way I can add to the website is by posting announcements. That format makes it very difficult for students to see at a glance what resources are available. Also, announcements have dates attached to them, so I have to tell each new student that the old announcements are still relevant.

With Petragogy at its new home, I have the stability, autonomy, and tools that I lack elsewhere.

I will be returning the keys to the old landlord in the upcoming week.

Categories: Learning technologies

The open textbook arrives

[Figure: Textbook cover]

Download the pdf | Download the epub | Download for Kindle

The title for this post might be a little premature, but the fact is that I am now viewing a proof-of-concept version of my open textbook on my Kindle.

There were two main questions that I considered as I worked through this experiment.  One question was technology related: what is the best way to build and distribute the textbook?  The other question was design related: what content should go into the book, and how should it be presented?

The technological question was the easier of the two to answer.  I decided to take Booktype for a test drive.  This tool is free to download and use… if you have a server.  Otherwise it is $16 a month.  Booktype has a nice interface, and I think its two main strengths are the ease with which it can convert a document to different electronic formats (especially those for e-readers), and the tools for collaboration.  I published my book to the following formats: pdf, mobi (for Kindle), and epub (good for just about any reading device other than a Kindle).

The best results came from the epub format when viewed in Apple’s iBooks reader.  The pdf didn’t work as well due to technical difficulties, but the problems weren’t anything I couldn’t fix if I generated my own pdf files directly out of Word.  Those files could then be distributed via Google Drive.  I’d need to do some “market research” to determine whether it would make sense to stay with pdfs (good for Mac devices, PCs, and Kindles), or whether there would be a lot of demand for the epub format.

The design question was more difficult to answer.  I experimented with the idea of using course design principles.  I came to the conclusion that this is probably the angle the publishers are using—for example, every introductory geology textbook on my shelf starts each chapter with a list of learning objectives, and ends each chapter with discussion questions.  I can do that too, but I can’t compete with the publishers’ ability to design and incorporate multimedia learning tools, or online self-assessment tools.  Here’s the thing, though—if the course is merely a textbook wrapper, then these things matter.  On the other hand, if the course is well designed then maybe it is ok for the textbook to be just a textbook.  Whether my course is well designed or not is another matter, but given that I will have to teach it, I think my time is better spent working on course design than on writing algorithms for dynamic assessment tools.

I will keep working on my textbook.  I’m going to focus on what I need it to be, and fill the gaps left by the other textbooks available to me.  Despite all of the bells and whistles that come with textbooks these days, there are indeed gaps.  It may be a while before I can rely entirely on my own book, but each bit of progress will improve what I can offer to my students.

Categories: Learning technologies, Textbooks

Wreck it good

[Figure: nasty equation]

This week I graded an exceptionally well-written exam.  The student used exam-writing and study strategies that I’ve found to be effective in my own experience.  This got me reflecting on my time as a student and I remembered the one thing that helped me more than any other skill or strategy that I developed: wreck it good.

Proclaimed in the same spirit as “git’er done!” (and therefore exempt from the usual grammatical rules), “wreck it good” was my license to fail.  Not only that, it made failure an imperative, which turned out to be a very good thing.

Here’s the scenario: I was taking a course in numerical modeling as part of my doctorate at Penn State.  The course included assignments that required writing computer code to simulate a variety of natural systems and processes.  I’d had some experience with programming, but the programming environment was new to me, and the application was also new.  Coding can be frustrating and challenging.  From time to time, my code produced such bizarre results that I had to remind myself that the computer was doing what I told it to, and not manifesting malicious intent.

As stressful as the course was (my husband claims it took five years off my lifespan), I look back on it with only positive feelings as a result of having given myself permission to fail.  It started out as a matter of pride.  I didn’t want to ask for help with my code only to hear “Did you try [insert obvious course of action that didn’t occur to me]?”  I resolved instead that I would try everything I could think of and make a complete and utter disaster of my code if necessary—I would wreck it good.  Then I could ask for help with the confidence that either the computer was broken, or the task was impossible when viewed from every conceivable angle by a normal human being.  I was nothing if not thorough.

A strange thing happened on the way to wrecking it good… I actually began to have fun with troubleshooting my code.  There was no risk involved in failure, because I could ask for help at any time.  That meant troubleshooting was more about exploring possibilities than fixing problems… and oddly enough, despite my best efforts, I never did wreck it good enough to need my instructor’s help.  It was very empowering to find out over and over again that however ugly and impossible the problem looked initially, I could handle it.  Bring it on, partial differential equation… ’cause I’m going to wreck you good!

Wrecking it good isn’t just for computer programming.  It works great for doing battle with math problems, or for posing challenging study questions to diagnose knowledge gaps.  It makes sense to try to fail—it is a full frontal attack on your learning challenges, and they won’t stand a chance.

Categories: Learning strategies

Open textbooks and cognitive load

While sketching out a plan for my open textbook, I’ve hit upon a design question:  how “printable” should it be?  A “printable” open textbook would contrast with one that is more akin to a series of webpages: if it were heavy on hyperlinks and multimedia then it would lose functionality when printed, because extra steps would be required for the user to access the online resources.

On one hand, being printable might seem to be about accommodating preferences—those arising from a learning history with print materials.  But what if there is a more basic reason for these preferences?  What if the act of learning is fundamentally different with electronic course materials?  Could that difference make it inherently more difficult to learn from electronic materials?

I think learning is different with electronic materials, and I think it is harder.  To explain why, I have to make a big leap from my comfy geology headspace into the alien terrain of cognitive psychology.  Please do excuse me if I land awkwardly…

The difference between Y, P, G, I, A, G, N, K, B, and PIGGY BANK

The concept of cognitive load describes a sort of mental balance sheet in which learning is associated with a cost, and the learner has only so much to spend in her mental piggy bank.  The learner will spend some of her mental budget on the learning task itself (what the literature calls the intrinsic load).  Some of the budget will be spent on organizing the knowledge into a meaningful whole (the germane load).  Both of these expenditures are good investments for the learner.

But there is also a kind of learning “overhead,” the extraneous cognitive load.  It is the cost of setting up the operation in the first place, and the more the learner spends on overhead, the less she can spend on accumulating and organizing knowledge.

When comparing electronic and print materials, the expenditure on learning tasks and organizing can be identical, but the overhead is different.  There is more overhead associated with electronic materials, and that leaves less of the budget for learning and organizing.  Some of the overhead associated with electronic materials will diminish over time.  For example, if the learner must first figure out how to use a computer, that would count as overhead.  Over time, however, using a computer might become second nature, and the related overhead would decrease.

What won’t change is the way learners interact with electronic media.  For example, consider how a learner keeps track of where the information she is after is located.  In a book, that location is a physical place within a linear arrangement—you flip ahead, or flip backward.  If your thumb is already in the right spot, then you just go there without even thinking about it.  In electronic learning materials, you might scroll down a page, but there might also be links, videos to watch, recordings to hear, and other pages to cross-reference… the structure is branching, and there is no convenient place to stick your thumb.

An open textbook must be available electronically if it is to solve the problems of cost, updating, and distribution inherent in the textbooks offered by publishers.  The challenge is finding a learning-friendly balance between what can be included and what should be.

Categories: Learning technologies, Textbooks

Blog at WordPress.com.