Posts Tagged With: learning technologies

Clear As Fine-Grained Sediment Mixed With Water: A Discussion Forum

This week I’m presenting a poster at the Earth Educators’ Rendezvous. The poster is about a discussion forum activity that I do with my introductory physical geology students at St. Peter’s College. I’ve turned my poster into a blog post just in case anyone is thinking about trying a similar activity and would like to refer back to it. Alternatively, folks may simply want to confirm that some nut at an academic meeting designed a poster consisting largely of cartoons. Either way, here it is.

Intro

Why

How

You can download a copy of the handout for this activity, including the rubric, here.

Examples

Strategies

This is a great resource from the University of Wisconsin-Stout for explaining online etiquette to students.

Summary

Categories: Assessment, Teaching strategies

Online Courses and The Problem That No-One Is Talking About

There are two kinds of online courses: those which are taught, and those which are facilitated. The distinction does not apply to the task of interacting with students. I’ve been both “teacher” and “facilitator,” and it’s exactly the same job from that perspective. The difference is one of autonomy, and it is a big difference.

The Gwenna Moss Centre is about to run another offering of their Introduction to Teaching Online course. Although I am a co-facilitator for this course, I would describe it as a course which is taught rather than facilitated. My co-co-facilitator and I discuss the course as it is running, and make adjustments on the fly when necessary. We take note of what worked and what didn’t, look at participants’ evaluations, and then modify the course as necessary for the next offering. Not only do we have the autonomy to make the necessary changes, it is expected that we will.

In Intro to Teaching Online, we assume that the participants will also be able to teach their online courses- that they will make pedagogical and logistical choices to respond to their students’ needs, and to make the course run as smoothly as possible. We also assume that they will have the ability to revise as necessary and try new things. That’s how you teach an online course.

When you facilitate an online course, while you might take on the task of assisting students and grading their papers, what you can do beyond that is tightly restricted by a delivery model over which you have very little control. How little control will vary, but most likely it will be difficult or impossible to make substantive changes to what is taught, or how it is taught. Even if you designed the course in the first place, that “you” and facilitator you are completely different people as far as control over the course goes, and designer you lost any input as soon as the design contract was up.

If you are lucky enough to be able to request changes, the process is rather like having completed a painting, then being told you aren’t allowed to touch it anymore. If you want something to change, you must fill out a form describing in detail where the paint should go and how to move the brush. Someone more qualified than you will make the change. They might send a note back to you saying that they plan to improve your painting of a cow by adding spots. You must then explain at length that it is in fact a dog, and should not have spots. When the painting is finally modified, the dog is the wrong shade of brown. You decide it is best to not request modifications to your paintings in future.

Why does this matter? I don’t care how good you are- you never get a course exactly right the first time. If there aren’t any outright problems, then it soon becomes apparent where improvements can be made. Facilitator you gets to see the problems or areas for improvement, but must be content with grading papers and answering questions. If facilitator you is like facilitator me, this will drive you nuts. If facilitator you is subject to the same kinds of course evaluations as someone who can teach their course, and make it the best it can be, then this is not only unfair, but professionally dangerous.

While course quality is affected by this- especially if no-one sees a need to consult with facilitator you about how the course is going, or there are no mechanisms for facilitator you to communicate issues and be taken seriously- there is a bigger problem: the very integrity of the course.

At one time distance education was mostly intended to serve those who could not go to a brick-and-mortar institution for one reason or another. Maybe they had a family or a full-time job and couldn’t leave to go to school. Maybe they just couldn’t afford to move. Now things are different. While I don’t have any hard numbers, from what I can tell, at least 70% of my students are already taking classes at a brick-and-mortar school. They take an online class because they can fit it into their schedule better than one on campus, or it isn’t offered on campus at a time they need it, or they’re trying to get ahead or complete their degrees over the summer.

What this means for the big picture is that students are far more likely to communicate with each other about the course than in the past. It might be two students who take the course together, or it could be someone who took it previously sharing information with someone currently enrolled. In the case that is causing me problems right now, a substantial number of students from one department at one school take the online course to fill a requirement. This is a facilitated course, so perhaps you can guess where this is going.

The students talk to each other. Some of it might be innocent enough, but some of it involves passing on assignments that I’ve graded to the next group of students who take the course. The course has not been updated substantively in some time, so the same assignments and exams still apply.

The problem has become ridiculous of late, with students submitting near-perfect assignments, all exactly alike plus or minus a few careless errors, and in record time. They get things right that no-one ever gets right. Clearly they are working together, but they are also referring to older assignments. I know this for certain for a few reasons: First, the correct answer will frequently appear after incomplete or even nonsensical work. They submit solutions with the answer that would have resulted if a typo, long since removed, was still in the question. They also plagiarize my comments from old assignments, sometimes reproducing them verbatim.

This course has a must-pass stipulation on the final exam. Normally that would be some comfort, because students who haven’t learned anything on the assignments would fail the exams. I’ve seen students with 95%, 99%, and 100% on assignments unable to break 20% on the final. (The exam isn’t that hard.) But over the past few months it has become apparent that the content of the exam has been shared. If not an actual copy, then a very good description of what it contains is in circulation. Exam grades have gone up, and students are regularly answering questions correctly which were rarely answered correctly in the past.

Ideally, if so many students who know each other are taking the course, the assignments should change frequently. In our hyper-connected world, it is almost certain that this kind of communication between students will happen. I even know of a homework-sharing website that has some of the solutions posted. The problem is that in order to change this, someone has to keep on top of the course full-time, and have the autonomy to make the necessary changes. The main consideration should not be the logistics of altering course materials. There’s no excuse for that when the relevant materials are or can be delivered online, and everyone and their dog knows how to upload a file to an LMS.

Nevertheless, the issue is that facilitators cannot be empowered in this way without disrupting the underlying structure of course delivery. Even more problematic is a culture amongst those who do run things- those who are not subject-matter experts but who handle the day-to-day operations- which views facilitators as incompetent, and unable to handle this responsibility. Not long ago I was handed an in-house guide to designing distance education courses. It warned readers at the outset that most faculty would be uncooperative and not understand how a distance education course should run. I felt ill, the way you would feel if you overheard your co-workers complaining about how useless you were. As I recycle that book I will contemplate with irony the damage this attitude has caused to distance education, and wonder if maybe I should take a chance and start the dog-washing business I’ve been thinking about.

There are many reasons to disempower facilitators, not the least of which is the cost savings from having them as casual workers instead of full-time ones. So here’s where I’m going to get in trouble for this post (if I haven’t already): if your concern is the bottom line, what happens when the ease with which students can cheat in your course makes other schools, employers, professional certification organizations, etc., decide that credit for your course is no longer meaningful? Even if cheating is less of a risk, what if word gets around that the course is hopelessly outdated or has problems? You don’t get enrollment, that’s what. And the people who communicate this aren’t going to be disgruntled facilitators. I’m the least of your worries. You need to worry about the students themselves who joke openly about cheating, and how little can be done about it, or who are discovered to lack skills or to have learning that is outdated.

There is a fundamental disconnect between what schools view as the appropriate way to structure a distance education program, and what actually works on the ground, when you’re expecting learning to happen. One involves online teaching and the other does not. There is a cultural gulf between those who have the power to do something about it, and those who can only look on in frustration. There are a lot of dogs to wash, but with most of them you have to spell out B-A-T-H rather than say the word, or they run off. A waterproof apron is useful, but not foolproof. You’ll need lots of towels.

Categories: Assessment, Challenges, Distance education and e-learning, Learning technologies, The business of education

Dear Ed Tech: This Is What You Don’t Understand About Higher Education

I am the kind of tired that makes you feel hollow inside, so maybe this isn’t the best time to be writing this, but then again, maybe it is. I just got back from my Monday-Tuesday teaching overnighter out of town. I’m a hired gun in the world of higher education- sometimes we’re called adjunct faculty, sometimes sessional lecturers, and a number of other terms that are beyond my ability to recall at the moment. But you know who we are.

The problem is that being able to learn about educational technologies is really a luxury for my lot. I’ve been able to take many free courses which I’ve enjoyed very much, but I was only able to take them because I could afford to not fill that time with paid work. Full-time faculty on campus who opt to attend a course are doing so during the work day, but hired guns do it on their own time. Many of my colleagues simply wouldn’t be able to take the time- I’m thinking of you, Elaine, with your 8 courses this term in at least three different communities. So the first thing you need to know, Ed Tech, is that a substantial number of the people teaching courses at universities are hired guns like me, and many of those are on the razor’s edge of being able to support their teaching habits.

Part of being a hired gun is not having job security. You should care about this, Ed Tech, because the many wonderful tools you offer require a lot of work up-front. It’s a big decision whether or not to use a technology when learning it and preparing materials happens on your own time. It’s an even bigger decision when access to a tool depends on your employment status, as it often does with institutional subscriptions to software.

My blog, for example, started out on a university WordPress service, but after the jarring experience of having my computing access cut off between contracts, and facing the loss of the materials I created, I moved it and absorbed the costs associated with making it ad-free.

The same university is working on updating their in-class response system. I’m using one now- Poll Everywhere, which also happens to be something I can afford out-of-pocket- and the chance that I would adopt the system they choose is zero. It doesn’t matter how good the system is. What matters is that it takes a lot of time to set up questions and to embed them into presentations. Is it worth spending the time if I only get to use those questions once, or, assuming I’m teaching a similar class elsewhere, am unable to access them? This more or less guarantees that whatever system the university chooses will be utilized far less than they would like.

I came face to face with this issue more recently when discussing a home for the open textbook adaptation I’m working on. First of all, I’ve spent 131 hours on this adaptation so far, according to the timer I use to track my various ill-advised activities. That doesn’t include the 65 hours I spent writing a chapter for the original version of the book (for which, I must add, I was compensated- something I appreciated as an acknowledgement of my work as much as for the income).

My free Pressbooks account didn’t have enough space for the media library, so I upgraded at my own expense. I then learned that the university is setting up its own version of Pressbooks, but faced with the possibility of losing access to what now seems like a ridiculous amount of work, I would never consider using their account to work on my textbook. I would also be nervous about having my students use a version hosted on the university’s system because I’m not clear on whether I would have access to edit it once it got put there. (I have no idea how authors of print materials aren’t driven nuts by being unable to edit at will.)

In my present state of near-faceplant exhaustion, it appears that I’ve made a great many poor life-choices. I can justify this in my better moments as things that are important to do for my students, but on days like today, all I can think of is why oh why am I killing myself with this?

Ed Tech, you need to realize that many of the people teaching in higher education are not in a position to be as frivolous with their time as I have been. In the push to get instructors to adopt various kinds of educational technology, it isn’t just a matter of convincing them that it’s good for students. They very likely know that already. The challenge is convincing them that they should commit to a technology in spite of the personal and financial burden, not to mention being treated like the education version of a paper plate (it works, it’s cheap, it’s disposable, there are lots more where it came from) by the schools that would benefit from their labour.

The commitment you’re asking for isn’t the same as it would be for full-time faculty, and I don’t think you realize how frustrating- even insulting- it is when you discuss the problem of adoption in terms of instructors being resistant to change, too lazy to change, or just not getting it. Especially when you yourselves are comfortably ensconced in a full-time position. For hired guns like me, the only compensation is warm fuzzies. When you’re a dead-inside kind of tired, warm fuzzies are entirely inadequate.

Categories: Challenges, Learning technologies, Textbooks

Time: The final frontier

Timefleet Academy logo: a winged hourglass made of ammonites

A logo begging for a t-shirt

Here it is: the final incarnation of my design project for Design and Development of Educational Technology– the Timefleet Academy. It’s a tool to assist undergraduate students of historical geology with remembering events in Earth history, and how those events fit into the Geological Time Scale. Much of their work consists of memorizing a long list of complicated happenings. While memorizing is not exactly at the top of Bloom’s Taxonomy (it’s exactly at the bottom, in fact), it is necessary. One could approach this task by reading the textbook over and over, and hoping something will stick, but I think there’s a better way.

I envision a tool with three key features:

  • A timeline that incorporates the Geological Time Scale, and “zooms” to show events that occur over widely varying timescales
  • The ability to add events from a pre-existing library onto a custom timeline
  • Assessments to help students focus their efforts effectively

Here’s an introduction to the problem, and a sketch of my solution. If your sensors start to detect something familiar about this enterprise then you’re as much of a nerd as I am.

Timefleet Academy is based on the constructionist idea that building is good for learning. Making a representation of something (in this case, Earth history) is a way of distilling its essential features. That means analyzing what those features are, how they are related, and expressing them explicitly. Ultimately this translates to the intuitive notion that it is best to approach a complex topic by breaking it into small digestible pieces.

Geological Time Scale

This is what you get to memorize.

As challenging as the Geological Time Scale is to memorize, it does lend itself to “chunking” because the Time Scale comes already subdivided. Even better, those subdivisions are designed to reflect meaningful stages (and therefore meaningful groupings of events) in Earth history.

There is an official convention regarding the colours in the Geological Time Scale (so no, it wasn’t my choice to put red, fuchsia, and salmon next to each other), and I’ve used it on the interface for two reasons. One is that it’s employed on diagrams and geological maps, so students might as well become familiar with it. The other is that students can take advantage of colour association as a memory tool.

Assessments

Assessments are a key difference between Timefleet Academy and other “zoomable” timelines that already exist. The assessments would come in two forms.

1. Self assessment checklists

These allow users to document their progress through the list of resources attached to individual events. This might seem like a trivial housekeeping matter, but mentally constructing a map of what resources have been used costs cognitive capital. Answering the question “Have I been here already?” has a non-zero cognitive load, and one that doesn’t move the user toward the goal of learning historical geology.

2. Drag-and-drop drills

The second kind of assessment involves drill-type exercises where users drag and drop objects representing events, geological time periods, and dates, to place them in the right order. The algorithm governing how drills are set would take into account the following (a rough sketch appears after the list):

  • The user’s previous errors: It would allow for more practice in those areas.
  • Changes in the user’s skill level: It would adjust by making tasks more or less challenging. For example, the difficulty level could be increased by going from arranging events in chronological order to arranging them chronologically and situating them in the correct spots on the Geological Time Scale. Difficulty could also be increased by placing time limits on the exercise, requiring that the user apply acquired knowledge rather than looking up the information.
  • The context of events: If drills tend to focus on the same group of events, the result could be overly contextualized knowledge. In other words, if the student were repeatedly drilled on the order of events A, B, and C separately from the order of events D, E, and F, and were then asked to put A, B, and E in the right order, there could be a problem.
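Here is a minimal sketch, in Python, of one way that drill-building logic might work. The event names, weights, and difficulty thresholds are invented for illustration; nothing here is tied to an actual implementation of Timefleet Academy.

```python
import random

# A rough sketch of the drill-building logic described above. Event names,
# weights, and thresholds are invented for illustration.

def build_drill(events, error_counts, skill_level, size=4):
    """Pick events for a drag-and-drop drill.

    events       -- event names available on the student's timeline
    error_counts -- dict mapping event name -> number of past errors
    skill_level  -- rough 0.0-1.0 estimate of the student's current skill
    """
    # Weight events so that ones the student has missed before come up more
    # often, while a random factor keeps mixing in other events so knowledge
    # doesn't become overly contextualized.
    def priority(event):
        return (1 + error_counts.get(event, 0)) * random.random()

    chosen = sorted(events, key=priority, reverse=True)[:size]

    # Scale difficulty with skill: order events only, then also place them on
    # the Geological Time Scale, then add a time limit on top of that.
    if skill_level < 0.4:
        task, time_limit = "order_only", None
    elif skill_level < 0.7:
        task, time_limit = "order_and_place", None
    else:
        task, time_limit = "order_and_place", 60  # seconds

    return {"events": chosen, "task": task, "time_limit": time_limit}

# Example: a student who keeps misplacing the end-Permian extinction
print(build_drill(
    events=["Great Oxidation Event", "Cambrian explosion",
            "End-Permian extinction", "K-Pg extinction", "First land plants"],
    error_counts={"End-Permian extinction": 3},
    skill_level=0.5,
))
```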

The feedback from drills would consist of correct answers and errors being indicated at the end of each exercise, and a marker placed on the timeline to indicate where (when) errors have occurred. Students would earn points toward a promotion within Timefleet Academy for completing drills, and for correct answers.

Who wouldn’t want a cool new uniform?

How do you know if it works?

1. Did learning outcomes improve?

This could be tested by comparing the performance of a group of students who used the tool to that of a control group who didn’t. Performance measures could be results from a multiple choice exam. They could also be scores derived from an interview with each student, where he or she is asked questions to gauge not only how well events are recalled, but also whether he or she can explain the larger context of an event, including causal relationships. It would be interesting to compare exam and interview scores for students within each group to see how closely the results of a recall test track the results of a test focused on understanding.

For the group of students who have access to the tool, it would be important to have a measure of how they used it, and how often. For example, did they use it once and lose interest? Did they use it for organizing events but not do drills? Or did they work at it regularly, adding events and testing themselves throughout? Without this information, it would be difficult to know how to interpret differences (or a lack of differences) in performance between the two groups.
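For what it’s worth, once the scores were collected, the comparison itself would be straightforward. Below is a quick sketch using scipy; the numbers are made up, and the choice of a t-test and a Pearson correlation is mine rather than part of the study design described above.

```python
from scipy import stats

# Hypothetical scores: exam results for students who used the tool vs. a
# control group, plus interview scores for the tool group. All numbers are
# invented for illustration.
tool_exam      = [72, 80, 65, 90, 78, 85]
control_exam   = [60, 75, 58, 70, 66, 72]
tool_interview = [70, 82, 60, 88, 75, 80]

# Did exam performance differ between the two groups?
t, p = stats.ttest_ind(tool_exam, control_exam)
print(f"t = {t:.2f}, p = {p:.3f}")

# Within the tool group, does the recall-focused exam track the
# understanding-focused interview?
r, p_r = stats.pearsonr(tool_exam, tool_interview)
print(f"Pearson r = {r:.2f}, p = {p_r:.3f}")
```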

2. Do they want to use it?

This is an important indicator of whether students perceive that the tool is helpful, but also of their experience interacting with it. Students could be surveyed about which parts of the tool were useful and which weren’t, and asked for feedback about what changes would make it better. (The option to print out parts of the timeline, maybe?) They could be asked specific questions about aspects of the interface, such as whether their drill results were displayed effectively, whether the controls were easy to use, etc. It might be useful to ask them if they would use the tool again, either in its current form, or if it were redesigned to take into account their feedback.

Timefleet in the bigger picture

Writing a test

All set to pass the test of time

Timefleet Academy is ostensibly a tool to aid in memorizing the details of Earth history, but it actually does something more than that. It introduces students to a systematic way of learning- by identifying key features within an ocean of details, organizing those features, and then testing their knowledge.

The point system rewards students for testing their knowledge regardless of whether they get all of the answers right. The message is twofold: testing one’s knowledge is valuable because it provides information about what to do next; and testing one’s knowledge counts as progress toward a goal even if you don’t get the right answers every time. Maybe it’s threefold: if you do enough tests, eventually you get a cape, and a shirt with stars on it.

Categories: Assessment, Learning strategies, Learning technologies

Building assessments into a timeline tool for historical geology

In my last post I wrote about the challenges faced by undergraduate students in introductory historical geology. They are required to know an overwhelming breadth and depth of information about the history of the Earth, from 4.5 billion years ago to present. They must learn not only what events occurred, but also the name of the interval of the Geological Time Scale in which they occurred. This is a very difficult task! The Geological Time Scale itself is a challenge to memorize, and the events that fit on it often involve processes, locations, and organisms that students have never heard of. If you want to see a case of cognitive overload, just talk to a historical geology student.

My proposed solution was a scalable timeline. A regular old timeline is helpful for organizing events in chronological order, and it could be modified to include the divisions of the Geological Time Scale. However, a regular old timeline is simply not up to the task of displaying the relevant timescales of geological events, which vary over at least six orders of magnitude. It is also not up to the job of displaying the sheer number of events that students must know about. A scalable timeline would solve those problems by allowing students to zoom in and out to view different timescales, and by changing which events are shown depending on the scale. It would work just like Google Maps, where the type and amount of geographic information that is displayed depends on the map scale.
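To make the Google Maps analogy concrete, here is a minimal sketch in Python of how scale-dependent display might work. The events, dates, and "importance" ranks are placeholders for illustration, not a real catalogue.

```python
# A minimal sketch of scale-dependent display, assuming each event stores a
# date range (in millions of years ago, Ma) and a rough importance rank.
# Events, dates, and ranks are approximate and for illustration only.

EVENTS = [
    {"name": "Formation of Earth",     "start": 4540, "end": 4540, "rank": 1},
    {"name": "Cambrian explosion",     "start": 541,  "end": 516,  "rank": 1},
    {"name": "End-Permian extinction", "start": 252,  "end": 252,  "rank": 1},
    {"name": "First flowering plants", "start": 130,  "end": 125,  "rank": 2},
    {"name": "K-Pg extinction",        "start": 66,   "end": 66,   "rank": 1},
    {"name": "Grande Coupure",         "start": 33.9, "end": 33.9, "rank": 3},
]

def visible_events(window_start, window_end, events=EVENTS):
    """Return the events to draw for the current zoom window (Ma, oldest first).

    Zoomed way out, only top-ranked events are drawn; zooming in widens the
    rank cutoff, the way a map shows side streets only at close zoom.
    """
    span = window_start - window_end
    if span > 1000:
        max_rank = 1
    elif span > 100:
        max_rank = 2
    else:
        max_rank = 3
    return [e for e in events
            if e["rank"] <= max_rank
            and e["start"] <= window_start and e["end"] >= window_end]

print(visible_events(4540, 0))  # all of Earth history: only rank-1 events
print(visible_events(70, 30))   # zoomed in on the early Cenozoic: more detail
```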

Doesn’t that exist already?

My first round of Google searches didn’t turn anything up, but more recently round two hit paydirt… sort of. Timeglider is a tool for making “zoomable” timelines, and allows the user to embed media. It also has the catch phrase “It’s like Google Maps but for time,” which made me wonder if my last post was re-inventing the wheel.

ChronoZoom was designed with Big History in mind, which is consistent with the range of timescales that I would need. I experimented with this tool a little, and discovered that users can build timelines by adding exhibits, which appear as nodes on the timeline. Users can zoom in on an exhibit and access images, videos, etc.

If I had to choose, I’d use ChronoZoom because it’s free, and because students could create their own timelines and incorporate timelines or exhibits that I’ve made. Both Timeglider and ChronoZoom would help students organize information, and ChronoZoom already has a Geological Time Scale, but there are still features missing. One of those features is adaptive formative assessments that are responsive to students’ choices about what is important to learn.

Learning goals

There is a larger narrative in geological history, involving intricate feedbacks and cause-and-effect relationships, but very little of that richness is apparent until students have done a lot of memorization. My timeline tool would assist students in the following learning goals:

  • Memorize the Geological Time Scale and the dates of key event boundaries.
  • Memorize key events in Earth history.
  • Place individual geological events in the larger context of Earth history.

These learning goals fit right at the bottom of Bloom’s Taxonomy, but that doesn’t mean they aren’t important to accomplish. Students can’t move on to understanding why things happened without first having a good feeling for the events that took place. It’s like taking a photo with the lens cap on- you just don’t get the picture.

And why assessments?

This tool is intended to help students organize and visualize the information they must remember, but they still have to practice remembering it in order for it to stick. Formative assessments would give students that practice, and students could use the feedback from those assessments to gauge their knowledge and direct their study to the greatest advantage.

How it would work

The assessments would address events on a timeline that the students construct for themselves (My Timeline) by selecting from many hundreds of events on a Master Timeline. The figure below is a mock-up of what My Timeline would look like when the scale is limited to a relatively narrow 140 million year window. When students select events, related resources (videos, images, etc.) would also become accessible through My Timeline.

Timeline interface

A mock-up of My Timeline. A and B are pop-up windows designed to show students which resources they have used. C is access to practice exercises, and D is how the tool would show students where they need more work.

Students would benefit from two kinds of assessments:

Completion checklists and charts

The problem with having abundant resources is keeping track of which ones you’ve already looked at. Checklists and charts would show students which resources they have used. A mouse-over of a particular event would pop up a small window (A in the image above) with the date (or range of dates) of the event and a pie chart with sections representing the number of resources that are available for that event. A mouse-over on the pie chart would pop up a hyperlinked list of those resources (B). Students would choose whether to check off a particular resource once they are satisfied that they have what they need from it, or perhaps flag it if they find it especially helpful. If a resource is relevant for more than one event, and shows up on multiple checklists, then checks and flags would appear for all instances.
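Here is a rough sketch, in Python, of the bookkeeping this would require. The key idea is that a resource is a single shared object rather than a copy attached to each event, so a check or a flag set in one place shows up everywhere. The class and event names are invented for illustration.

```python
# A rough sketch of the checklist bookkeeping described above: each resource
# keeps its own checked/flagged state, so checking it off under one event is
# reflected everywhere it appears. Names are invented for illustration.

class Resource:
    def __init__(self, title, url):
        self.title = title
        self.url = url
        self.checked = False   # "I have what I need from this"
        self.flagged = False   # "this one was especially helpful"

class Event:
    def __init__(self, name, start_ma, end_ma, resources):
        self.name = name
        self.start_ma = start_ma
        self.end_ma = end_ma
        self.resources = resources  # shared Resource objects, not copies

    def progress(self):
        """Fraction of this event's resources checked off (drives the pie chart)."""
        if not self.resources:
            return 0.0
        return sum(r.checked for r in self.resources) / len(self.resources)

# One resource relevant to two events:
video = Resource("Video: end-Permian extinction", "https://example.org/video")
permian = Event("End-Permian extinction", 252, 252, [video])
triassic = Event("Early Triassic recovery", 252, 247, [video])

video.checked = True          # checked off once...
print(permian.progress())     # ...and both events' pie charts update: 1.0
print(triassic.progress())    # 1.0
```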

Drag-and-drop exercises

Some of my students construct elaborate sets of flashcards so they can arrange events or geological time intervals spatially. Why not save them the trouble of making flashcards?

Students could opt to practice remembering by visiting the Timefleet Academy (C). They would do exercises such as:

  • Dragging coloured blocks labeled with Geological Time Scale divisions to put them in the right order
  • Dragging events to either put them in the correct chronological order (lower difficulty) or to position them in the correct location on the timeline (higher difficulty)
  • Dragging dates from a bank of options onto the Geological Time Scale or onto specific events (very difficult)

Upon completion of each drag-and-drop exercise, students would see which parts of their responses were correct. Problem areas (for example, a geological time period in the wrong order) would be marked on My Timeline with a white outline (D) so students could review those events in the appropriate context. White outlines could be cleared directly by the student, or by successfully completing Timefleet Academy exercises with those components.
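A sketch of how that feedback step might work is below. The event names are placeholders, and the real tool would do this behind a drag-and-drop interface rather than with lists.

```python
# A rough sketch of the feedback step: compare a submitted ordering against the
# correct chronological order and collect the items to outline on My Timeline.
# Event names are invented for illustration.

CORRECT_ORDER = ["Great Oxidation Event", "Cambrian explosion",
                 "End-Permian extinction", "K-Pg extinction"]

def grade_ordering(submitted, correct=CORRECT_ORDER):
    """Return per-slot results and the set of events to flag for review."""
    results = []
    to_review = set()
    for slot, (got, expected) in enumerate(zip(submitted, correct), start=1):
        ok = (got == expected)
        results.append((slot, got, ok))
        if not ok:
            to_review.update({got, expected})
    return results, to_review

submitted = ["Great Oxidation Event", "End-Permian extinction",
             "Cambrian explosion", "K-Pg extinction"]
results, to_review = grade_ordering(submitted)
for slot, name, ok in results:
    print(f"slot {slot}: {name} -> {'correct' if ok else 'wrong'}")
print("outline on My Timeline:", to_review)
```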

Drag-and-drop exercises would include some randomly selected content, as well as items that the student has had difficulty with in the past. The difficulty of the exercises could be scaled to respond to increasing skill, either by varying the type of drag-and-drop task, or by placing time limits on the exercise. Because a student could become very familiar with one stretch of geologic time without knowing others very well, the tool would have to detect a change in skill level and respond accordingly.

A bit of motivation

Students would earn points for doing Timefleet Academy exercises. To reward persistence, they would earn points for completing the exercises, in addition to points for correct responses. Points would accumulate toward a progression through Timefleet Academy ranks, beginning with Time Cadet, and culminating in Time Overlord (and who wouldn’t want to be a Time Overlord?). Progressive ranks could be illustrated with an avatar that changes appearance, or a badging system. As much as I’d like to show you some avatars and badges, I am flat out of creativity, so I will leave it to your imagination for now.

Categories: Assessment, Learning strategies, Learning technologies

How to make sense of historical geology

Imagine that someone changed the clock on you, breaking the day into irregular blocks, and giving the blocks names and symbols in no systematic way. Now imagine that you are given a list of events to memorize- activities of people you don’t know at places with which you are unfamiliar:

Fantasy clock with radiolarians

No, they’re not aliens. They’re radiolarians. And aren’t you glad you don’t use this to tell time?

During the Early Fizz, Pierre Bezukhov and Cthulhu squared off on Callisto. By Middle to Late Fizz, Pierre Bezukhov had the advantage, so Cthulhu migrated to Kore. At the Fizz-Zoot boundary, Dmitry Dokhturov and a shoggoth appeared on Europa, but both went extinct by the end of the Zoot, likely due to a lack of habitat. Beginning in the Flap, Callisto, Europa, and Taygete began a collision that culminated in their merger by mid-Flap. Land bridges that formed allowed the migration of Anna Mikhaylovna Drubetskaya from her original habitat on Taygete, leaving a niche open, and allowing Nyarlathotep to diversify.

Now stuff that in your head so I can ask you about it on an exam, in no particular order.

If you are familiar with the moons of Jupiter, the characters of War and Peace, or the fiction of H. P. Lovecraft, then you might have a chance at remembering some of the names, but their relationships would probably be new to you. This is the scenario faced by students taking introductory historical geology.

The clock they have to work with is the Geological Time Scale– a way geologists have of carving up Earth’s 4.5 billion years of history into chunks that reflect key events or phases. The chunks are not the same size, and there are chunks within chunks. The exact dates when each chunk (or sub-chunk) starts and ends are moving every few years as geologists get better information about the timing of key events that define the boundaries. There is no system- you just have to memorize it, and you’d better do it in a hurry, because everything you will learn about the Earth’s history will be described in terms of the Geological Time Scale.

Aside from learning this new clock, students must also learn the names of extinct and extant organisms, the names and histories of various continents and oceans, extant or otherwise, and the geological processes that have influenced those organisms, continents, and oceans. Did I mention that students usually have to learn a range of dates, because we can’t be sure of the actual date, and/or because the event happened over millions or hundreds of millions of years? Oh, and one more thing- the dates of different events will overlap to varying degrees, and the story lines will be almost impossible to disentangle from each other. But don’t worry- the exam is multiple choice.

The obvious way to organize all of this information is a timeline, and most textbooks have a version of the Geological Time Scale with some dates and key events marked on it. The problem is that to construct a timeline with all of the information that students need, you would have to devote a book to that alone, so most of these are just Geological Time Scales with some pretty pictures attached. The Geologic Time Spiral (below), showing Earth history spiraling away from the beginning of time, is a classic, and fascinating to look at, but of limited use to my students.  The durations of the events pictured are gross approximations, there is no description of those events, and there is no sense of the spatial changes that occurred. The timeline also glosses over the multiple story lines in Earth history, and the complex interconnections between story lines.

A spiral diagram illustrating the evolution of life on Earth through geological time

Geological Time Spiral: The names of units within the Geological Time Scale are written along the edges.

How to fix it

Make it adapt to scaling

What’s needed is a timeline in electronic format, but not just any timeline- it should be a scalable timeline. Users must be able to zoom out to see big-picture, long-term history, or zoom in to see the finer details. It would be the temporal analog to Google Maps, where the details which appear, including the divisions of the Geological Time Scale itself, depend on the scale. This would solve the problem of the necessarily limited amount of information in current timelines, but it would also do something more important. Users would be able to easily go back and forth between scales to understand how events are situated in a broader context. This is what you do every time you are planning a route to a new address- look at the larger map of the city to see the main thoroughfares, then zoom in to the streets within a particular neighbourhood, then zoom out again to remind yourself where the neighbourhood is relative to the freeway. Then you might zoom in again to the exact address, and depending on the tool you are using, you might look at a picture of the building that you are headed to.

Show cause and effect relationships

In Google Maps, you can see how streets are connected to each other. In Earth history, individual story lines are interconnected in the same way, and the complexity of city streets is probably not a bad analogy for the complexity of these interconnections. The scalable timeline would also show branches that link one story line to other stories, so a user could follow a single timeline, or choose to follow a branch and see how another series of events was impacted by the first story line. Because of how complex the interactions are, these branches would also have to appear or disappear depending on the scale, and depending on which timeline is being viewed.
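As a rough illustration, the branches could be stored as links between events, each tagged with the zoom level at which it becomes visible. The events and zoom thresholds below are invented for the sake of the sketch.

```python
# A rough sketch of how the cause-and-effect branches might be stored: events
# as nodes, causal links as edges, each link tagged with the zoom level at
# which it should start to appear. Thresholds are invented for illustration.

LINKS = [
    # (cause, effect, min_zoom)  -- min_zoom 0 = visible even fully zoomed out
    ("Chicxulub impact", "K-Pg extinction", 0),
    ("K-Pg extinction", "Mammal diversification", 1),
    ("Closure of Isthmus of Panama", "Great American Biotic Interchange", 2),
]

def branches_for(event, zoom_level, links=LINKS):
    """Links touching this event that should be drawn at the current zoom."""
    return [(c, e) for (c, e, min_zoom) in links
            if zoom_level >= min_zoom and event in (c, e)]

print(branches_for("K-Pg extinction", zoom_level=0))  # only the impact link
print(branches_for("K-Pg extinction", zoom_level=1))  # impact + mammal branch
```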

Add multimedia

Like Google Maps, where resources like photographs, or information like phone numbers are linked to particular points in space, the timeline would have resources linked to a particular point in time, to a broader range of events, or to branches that connect related events. There could be pictures of the organisms that existed, or videos to explain a concept or expand on the details of an event. This would replace the limited images in the timelines that exist at present.

Add a responsive map

The scalable timeline should have an easy way to view the geographic location of a particular event, if it happens to occur in a specific place. This would require an omnipresent world map that lights up in the right spots to correspond to a particular event, but which also changes to reflect the shifting positions of the continents. The map would show where an event happened, but also where climate zones are, where glaciers are present, and where other key contemporaneous events occurred.

Get hypothetical

Hypothetical timelines could be introduced to consider alternative histories. For example, what would have happened if Earth had never been hit by an extraterrestrial object 65 million years ago? Would we even exist if mammals hadn’t been able to take over niches left open by the extinction of the dinosaurs? Or would the dinosaurs have gone extinct anyway for some other reason? Hypothetical timelines could be places to host discussions.

No more Brontosauruses

Brontosaurus illustration from 1896

Brontosaurus, redlined. Skeleton illustration appeared in “The Dinosaurs of North America” by O. C. Marsh (1896)

A timeline of this nature would be much easier to update as new data become available, or as the thinking about Earth history changes. In the popular Golden Guide to Fossils, which some of my students use, there still exists an entry for  Brontosaurus. Brontosauruses were invented by mistake in 1879 because Othniel Charles Marsh was in a rush to publish and didn’t realize that his new dinosaur find was just an adult version of a juvenile dinosaur he had already documented, called Apatosaurus.  The  iconic dinosaur that came to be known as Brontosaurus was actually Apatosaurus with the wrong head attached. The Brontosaurus story could be corrected with a few keystrokes and turned into a teachable moment about the challenges of interpreting paleontological data.

Why would it work?

It would work because narratives are better than lists. The standard timeline offers a way to summarize some of the events in Earth’s history, and to express temporal relationships as spatial ones, but it doesn’t go far enough to make the events into a meaningful whole. A list of seemingly isolated events is just that- a list. It takes context and meaning to make it a story, and stories are things we can remember and understand. There’s a reason you need to write down your grocery list to remember it, but you don’t need notes to be able to relate a relatively trivial story about what your dog did the other day. Whether you get all of the groceries you need or not will likely have a bigger impact on your life than if you can remember your dog story, but if you remember the dog story, it’s because it means something to you. A scalable timeline is a dog story rather than a grocery list because it will make it easy to examine the relationships between events in Earth history, and to synthesize essential details into a meaningful whole.

 

Categories: Learning technologies, Teaching strategies

Blinkie and the Valley of Confusion

One of my projects these days is MOOCery. MOOCs are Massive Open Online Courses- courses offered for free online, and open to everyone. The formats these courses take will vary, but they often include lectures on video, discussion forums, and assignments. I’m working on two courses right now, and it was very tempting to sign up for more.

One of the courses is Design and Development of Educational Technology, offered by MIT. I’ve been curious about Ed Tech since taking Introduction to Learning Technologies at the University of Saskatchewan, and the course seemed a good opportunity for further exploration. Part of the course involves reflecting on both old and new educational technologies, and I have a bit of homework in that regard: comparing and contrasting a new and an old technology.

The New: Demystifying the Valley of Confusion

A simplified geological map

Geological map of the Valley of Confusion

First-year geology students are asked to do a very complex task: view a two-dimensional representation of the intersection of a complex geometric surface with three-dimensional subsurface structures (also called a geological map), and understand what the heck they are looking at.

Here’s an example of a simplified geological map. In this image, the coloured patches represent different rock layers that you would see if you could strip away all of the soil and expose the rocks beneath. If I were to ask you how those rock layers were arranged within the Earth, you might say that they were folded. It certainly looks that way. In fact, they are not folded at all. They are in flat layers, all tilting to the east at an angle of 30 degrees. Students are expected to arrive at that interpretation by looking at maps like this one.

The map in the image above is actually a birds-eye view of this:


Valley of Confusion in three dimensions: not as confusing

Here you can see on the side of the block that the rocks are arranged in flat, tilted layers, not curved and folded ones. The reason they look folded on the map is that the surface is actually a valley. The numbered black and grey lines in the first image represent the elevations at different points in the valley.
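For the curious, the geometry is easy to reproduce numerically. The sketch below (plain Python with NumPy, nothing to do with Visible Geology itself) intersects a plane dipping 30 degrees to the east with a made-up valley surface; the outcrop trace swings eastward where the valley is deepest, which is exactly the "folded" pattern you see on the map.

```python
import numpy as np

# A rough numerical sketch of why flat, tilted layers look "folded" on a map:
# the outcrop trace is simply where the dipping plane meets the valley surface.
# The valley shape and all numbers are invented for illustration.

x = np.linspace(0, 1000, 201)          # east (m)
y = np.linspace(0, 1000, 201)          # north (m)
X, Y = np.meshgrid(x, y)

# Topography: a valley running east-west, deepest along y = 500 m
topo = 300 - 200 * np.exp(-((Y - 500) / 200) ** 2)

# Top of a planar rock layer dipping 30 degrees to the east
dip = np.radians(30)
layer = 400 - np.tan(dip) * X

# The contact crops out where the layer surface meets the topography.
outcrop = np.isclose(topo, layer, atol=2.0)

# The easting of the outcrop at a few northings: it swings eastward toward the
# valley axis, producing the apparent "fold" in map view.
for yi in (0, 250, 500):
    row = np.argmin(np.abs(y - yi))
    cols = np.where(outcrop[row])[0]
    if cols.size:
        print(f"y = {yi:4d} m: outcrop near x = {x[cols].mean():.0f} m")
```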

This is why Visible Geology is so useful. It is an online tool that allows users to construct and view three-dimensional models of geological structures. Users start with a blank cube, then add layers to represent different rocks. They can manipulate the layers by tilting them, folding them, or faulting them. The cubes can be rotated to allow a view of all of the sides. Users can also print their models, cut them out, and fold them up into cubes. Visible Geology is also a good example of what Seymour Papert referred to as a low floor: it is very easy to get started with, and users get results immediately. I created both of the images above in under 5 minutes using Visible Geology.

Visible Geology is particularly interesting because it began as a project by a student who was learning about geological maps and geological structures. He happened to have programming skills in MATLAB which allowed him to build visualizations to help himself and his peers. From there, he developed Visible Geology into an online tool.

The goal of Visible Geology is to make it easy to visualize the three-dimensional structures formed by rocks. The lament I hear most often from my first-years is, “I just can’t see it!” Visible Geology solves that problem by allowing students to explore different configurations and scenarios. It is engaging because it has an interface that is user-friendly: it is colourful, its functions are intuitive, and it is not at all intimidating (unless you find large buttons with pastel-coloured illustrations intimidating). Students can learn from Visible Geology by experimenting, but would also benefit by attempting to reproduce the geological maps and structures in their assignments.

The Old: Blinkie Computes

owl calculator

Blinkie (The National Semiconductor Quiz Kid)

At some point in the early 1980’s, I received a National Semiconductor Quiz Kid as a gift. This toy (henceforth referred to as Blinkie) was a calculator that looked like an owl. Blinkie didn’t work like a regular calculator, though. When you entered a mathematical operation (“4 + 3 =”) he calculated the answer, but he wouldn’t tell you what it was. You would have to supply the answer. If your answer were correct, he would blink a green LED eye at you. If it were wrong, he would wink a red LED eye. Blinkie came with a book of math questions, and was intended as a drill tool for children learning their pluses, take-aways, timeses and divide-bys.
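For fun, Blinkie’s entire job can be described in a few lines of Python. This is just a toy re-creation of the interaction as I remember it, not anything to do with the actual hardware.

```python
# A toy re-creation of the interaction described above, purely for illustration:
# the program works out the answer but only tells you whether yours was right.

def blinkie():
    problem = input("Enter a problem (e.g. 4 + 3): ")
    a, op, b = problem.split()
    a, b = int(a), int(b)
    answer = {"+": a + b, "-": a - b, "*": a * b, "/": a // b}[op]
    guess = int(input("Your answer? "))
    print("green eye blinks" if guess == answer else "red eye winks")

blinkie()
```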

I’m sure Blinkie was effective as a math teaching tool (I can add, after all), but that isn’t my main recollection of Blinkie. I liked Blinkie because the keys made a satisfying click when you pressed them. I liked Blinkie because if you turned out the lights and hid under the covers, then covered his eyes with your thumbs, the red eye would glow through your thumb, but the green one wouldn’t. Most of all, I liked Blinkie because I could use him to check my math homework, and not feel that I was cheating. So, although Blinkie was intended to teach me math (and perhaps save my mom some time making flash cards), his most substantial benefit was to reduce my anxiety. Once you do that, the math comes a lot easier anyway.

Blinkie Versus the Internet (or, Bringing An Owl To A Gunfight)

Comparing Blinkie to Visible Geology is not like comparing apples to oranges. Comparing apples to oranges is much easier than finding characteristics that Blinkie and Visible Geology share. They are very different tools.

For one thing, their approaches are very different. Blinkie was a tool for practicing math skills. He told you whether you got the answer right, or whether you got it wrong. Visible Geology is about exploring. It allows the user to be creative, and to experiment risk-free. It is about “what if?”

The motivation for creating these tools was also very different. I suspect that at least some of the motivation for building Blinkie was that new microprocessors had been developed, and that development had to be funded commercially. National Semiconductor had a hammer, and was looking for a nail. In contrast, Visible Geology was created by someone who experienced a need for a visualization tool, and built what he needed.

The most obvious contrast is the difference in technology, but it is also the least relevant. Blinkie is old technology, but back in the early 1980’s, he was pretty cool… heck, anything with lights and buttons was cool back then. The point is, he did his job, and I didn’t feel that I was missing out on anything. Today, as amazing as Visible Geology might be to a Blinkie-era person, it is nothing special technologically to the eighteen- to twenty-year-olds that I usually deal with. It does its job.

I was excited to discover both of these technologies, but for different reasons. Those reasons are related to context. I faced Blinkie as a learner. Blinkie was novel, and so was math. He and math were intertwined in a new tactile and visual experience. As far as I was concerned, Blinkie wasn’t for teaching me math, he was entertainment, and I just happened to be learning math at the same time. My perspective on Visible Geology is as a teacher. It is a tool that I’m excited about because it fills a definite need that my students have to see the three-dimensional structures they are working with. When I play with it, the purpose is to create a teachable object. There isn’t the same element of novelty and discovery as there was with Blinkie, back when math was something new.

I think that for my students, Visible Geology will be a Blinkie experience. They are discovering geology, and Visible Geology will be entertainment inseparable from learning. They can use it as a way to “check their homework” by comparing their expectations with the results of combining geological structures with different surfaces. It may even lessen the anxiety they feel when geological maps just aren’t making sense. Nevertheless, there is one thing they will be missing: Visible Geology will never make their thumbs glow.

 

 

Categories: Learning technologies

Is plagiarism funny?

Generally I would say no, but I’ve tried to make an exception with a new video project.

A recent onslaught of assignments highlighted the futility (yet again) of what amounts to grading the textbook. My brain started churning out cartoons about the ridiculous ways students attempt to skirt the requirement of having to answer in their own words. Jeff Foxworthy’s “You might be a redneck if…” came to mind, and my productivity screeched to a halt: “It might not be in your own words if…” (Does that make Jeff Foxworthy my muse?)

The point was to get students thinking about plagiarism without taking a “thou shalt not” approach. I plan to build additional resources, including a video and/or handout with tips on how to answer in one’s own words. I like to point out that the textbook is one way to say something, but not the only way, and not necessarily the best way. And it isn’t about some pedantic exercise in avoiding a specific set of words- it’s about turning words on a page into knowledge… and that doesn’t happen unless you think about what those words mean.

This project is shorter than my last project, which could make the difference between students watching it and not. Another difference is that it consists of text, music, and my own drawings… so no fifteen takes required to get a voice-over without stumbling or stuttering. The drawings were the fun part. While I have at some point generated drawings and paintings that look like actual objects and people in the real world, doing so quickly and consistently is another matter. I came up with scribble people after searching for examples of line figures that others have drawn, and then doing my best to create something else. At one time I would have opted for stick figures, but after discovering Randall Munroe’s brilliant webcomic, xkcd… well, you wouldn’t try to out-drip Jackson Pollock, now would you?

In the process of making this video, I learned some things that might come in handy for anyone trying a similar project.

Timing

If you’ve made the slides, then you know way more about them than a first-time viewer will, so you’re probably not the best judge of how fast the slides should move along. What worked great was having someone else advance through the slides using the “Rehearse” mode under the “Slideshow” tab in PowerPoint. This records the duration over which each slide is viewed. Not only did I get an idea of how much time viewers might need, it became very clear which slides would benefit from a redesign. Set the intervals for transitions between slides, and run it with “Use Timings” selected. Then it is a simple matter of starting and ending a screen recording.

Music

I am not musically astute. If you ask me about Country and Western music released between 1950 and 1969, or Tom Waits, or Leonard Cohen, I might be able to help you. Lyrics to “The Battle of New Orleans?” Got you covered. Otherwise, you’d best ask my husband, who has a much larger musical vocabulary, and likes to ask me “Who sings this?” when he knows full well I can’t answer. But I needed music, so what to do?

Where to get it

I learned that there is a lot of royalty-free music online, and a subset of that is free royalty-free music. There is the Free Music Archive, which, amongst other things, has recordings from Edison cylinders! How cool is that?! I also found Kevin MacLeod’s website, where he offers his music under a Creative Commons license. His music is searchable by genre as well as by “feel” (bright, bouncy, driving, mysterious, etc.). Each song comes with an excellent description, suitable for the musically challenged, which makes it clear what an appropriate context would be for that song.

How to use it

Odds are, your song and your video won’t be the same length. If the song is longer than your video, it is easy enough to fade out the volume at a convenient spot. If the song is shorter, it’s more difficult to maintain continuity. Some songs come with versions that are suitable for looping, or in versions of different lengths. You can buy a little extra time by delaying the start of the song slightly, and fading in the volume, and then fading out the volume at the end. You could perform audio surgery and create a Frankensong… but if amputating musical body parts and stitching them back together again isn’t for you, then throwing continuity out the window might be the better choice. I didn’t know any of these things when I started, and it took a lot of experimenting to get something I could live with. Hopefully villagers with pitchforks and torches won’t be a problem.
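If you would rather script the audio surgery than do it in an editor, the pydub library (my suggestion; it isn’t mentioned above) makes the padding and fading straightforward. The file names and durations below are just examples.

```python
from pydub import AudioSegment

# One way to do the fades described above, using pydub. File names and
# durations are examples only.

song = AudioSegment.from_file("kevin_macleod_track.mp3")

# Buy a little extra time: delay the start with two seconds of silence.
padded = AudioSegment.silent(duration=2000) + song

# Trim to the length of the video (in milliseconds) if the song runs long,
# then fade the music in over one second and out over the last three.
video_length_ms = 90_000
soundtrack = padded[:video_length_ms].fade_in(1000).fade_out(3000)
soundtrack.export("soundtrack.mp3", format="mp3")
```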

Categories: Challenges, Learning technologies

The Poll Everywhere experiment: After day 15 of 15

The marathon geology class is over now, and I have a few observations about the Poll Everywhere experience. These things would have helped me had I known them in advance, so here they are in case someone else might benefit.  Some of the points below are also applicable to classroom response systems in general.

Getting started

Signing up the students

As I mentioned in a previous post, this went fairly smoothly.  One reason is that the process is an easy one, but another reason is that there were enough devices for students to share in order to access the website for creating an account and registering. While students can use an “old-fashioned” cell phone without a browser to text their answers, they can’t use that device to get set up in the first place. I used my iPad to get two of the students started, and students shared their laptop computers with each other. My class was small (33 students), so it was relatively easy to get everyone sorted.  If the class is a large one this could be a challenge. I would probably have the students sign up in advance of class, and then be willing to write off the first class for purposes of troubleshooting with those who couldn’t get the process to work for themselves.

Voter registration

One thing I would do differently is to have students register as a voter regardless of whether they plan to use their browser to respond to questions or not. I told the students who would be texting that all they needed to do was have their phone numbers certified. This is true, and they appeared on my list of participants. The problem has to do with the students who are responding using a web browser. If they forgot to sign in then they showed up on my list anonymously as an unregistered participant. More than one student did this, so it wasn’t possible to know which student entered which answers.

If everyone were registered as a voter, then I could have selected the option to allow only registered participants to answer the questions. Those not signed in would not be able to answer using their browsers, and they would be reminded that signing in was necessary. The reason I didn’t use this option is that students texting their answers are prevented from responding unless they have also registered as voters. I could have had them go back and change their settings, but I opted instead to put a message on the first question slide of each class in large, brightly coloured letters reminding students to sign in. I also reminded them verbally at the start of class.

Grading responses

With the Presenter plan students’ responses were automatically marked as correct or incorrect (assuming I remembered to indicate the correct answer). Under “Reports” I was able to select questions and have students’ responses to those questions listed, and a “yes” or “no” to whether they got the right answer. The reports can be downloaded as a spreadsheet, and they include columns showing how many questions were asked, how many the student answered, and how many the student got correct. There is a lot of information in the spreadsheet, so it isn’t as easy as I would have liked to get a quick sense of who was having difficulty with what kind of question. Deleting some columns helped to clarify things.

In the end I didn’t use the statistics that Poll Everywhere provided. I was having difficulty sorting out the questions that were for testing purposes from the ones that were for discussion purposes. Maybe a “D” or “T” at the beginning of each question would have made it easier to keep track of which was which when selecting questions for generating the report. I could have used the statistics if I had generated separate reports for the discussion questions and the testing questions. Instead I made myself a worksheet and did the calculations manually. This approach would not scale up well, but it did make it a lot easier for me to see how individual students were doing.

Integrity of testing

Timed responses

At the outset I decided that it would be extremely inconvenient to have students put their notes away every time they had to respond to a testing question. My solution was to limit the time they had to respond. I figured that if they didn't know the answer, a time limit would at least restrict how much they flipped through their notes. It also helps to ask questions where the answer isn't something they can look up. It turned out that 25 seconds was a good limit, although in practice they had longer than that because I took time to explain the question and the possible responses. (I wanted to make sure that if they got the answer wrong it reflected a gap in their knowledge rather than a misunderstanding of what the question was asking or what the responses meant.)

There is a timer that can be set.  One way to set it is when using the Poll Everywhere Presenter App… if you can manage to click on the timer before the toolbar pops up and gets in your way. (I never could.) It can also be set when viewing the question on the Poll Everywhere website. The timer starts when the question starts, which means you have to initiate the question at the right time, and can’t have it turned on in advance. With the work-around I was using, there were too many potential complications, so I avoided the timer and either used the stopwatch on my phone or counted hippopotamuses.

Setting the correct answer to display

If you set a question to be graded, students can see whether or not they got the correct answer, but you have options as to when they see it. I noticed that by default there is a one-day delay between when the question is asked and when the answer is shown (under "Settings" and "Response History"). I wanted students to be able to review their answers the same day if they were so inclined, so I changed the setting so that the correct answer could be shown immediately. The problem, I later discovered, is that a student who responds early and checks the answer can pass it along to classmates who haven't responded yet.

Ask a friend

Another issue with the integrity of testing done using Poll Everywhere (or any classroom response system) is the extent to which students consult with each other before responding. I could have been strict on this point and forbidden conversation, but policing the students wasn't something I was keen on doing. Judging by the responses, conversing with one's neighbour didn't exclude the possibility of both students getting the answer wrong. In a large class it would be impossible to control communication between students, which is one reason why any testing done this way should probably represent only a small part of the total grade.

Who sees what when

There are two ways to turn a poll on, and they each do different things. To receive responses, the poll has to be started. To allow students to respond using their browsers, the poll has to be “pushed” to the dedicated website. It is possible to do one of these things without doing the other, and both have to be done for things to work properly. The tricky part is keeping track of what is being shown and what is not. If a question is for testing purposes then you probably don’t want it to be displayed before you ask it in class.

When you create a poll, it is automatically started (i.e., responses will be accepted), but not pushed. Somewhere in the flurry of setting switches I think I must have pushed some polls I didn’t intend to. I also noticed one morning as I was setting up polls that someone (listed as unregistered) had responded to a question I had created shortly before.   As far as I knew I hadn’t pushed the poll, so…?  The only explanation I can think of is that someone was responding to a different poll and texted the wrong number.  Anyway, as an extra precaution and also to catch any problems at the outset, I made the first question of the day a discussion question. Only one question shows at a time, so as long as the discussion question was up, none of the testing questions would be displayed.

Oops

One other thing to keep in mind is to check, before asking a question, that you haven't already written the answer on the board. If the class suddenly goes very quiet and the responses come in as a flood, that's probably what has happened.

Accommodating technology and life

Stuff happens. If a student misses class, he or she will also miss the questions and the points that could have been scored for answering them. If the absence is for an excusable reason (or even if it isn't), a student might ask to make up the missed questions. As this would require a one-on-one polling session and a whole new suite of questions, I knew it was something I didn't want to deal with.

One could simply not count the missed questions against the student's grade, but that wasn't a precedent I wanted to set either. Instead I stated in the syllabus that there would be no make-up option, but that each student would get a 10-point "head start" on the Poll Everywhere questions. Whatever a student's score at the end of the course, I added 10 points, up to a maximum of 100%. I had no idea how many questions I would be asking, so 10 points was just a guess, but it ended up covering the questions for one day's absence, which is not unreasonable.
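For what it's worth, the adjustment is just a capped bonus. Here is a minimal sketch of the calculation; the class totals in the example are made up, and only the 10-point bonus and the 100% cap come from my actual scheme.

```python
def adjusted_score(raw_points, max_points, bonus=10):
    """Add the flat 10-point head start, but never exceed a perfect score."""
    return min(raw_points + bonus, max_points)

# Hypothetical course worth 75 points in total:
print(adjusted_score(62, 75))  # 72 -- the full bonus applies
print(adjusted_score(70, 75))  # 75 -- capped at 100%
```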

Another thing the 10 points was intended to do was offset any technological problems, like a student’s smart phone battery dying at an inopportune moment, or someone texting the wrong number by accident, or accidentally clicking the wrong box on the browser. The 10 points also covered miscalculations on my part, such as making a testing question too difficult.

I still ended up forgiving missed questions in two cases: one because of a scheduling conflict with another class, and the other on compassionate grounds.

The verdict

I will be teaching in September, and I plan to use Poll Everywhere again. Even if my classroom turns out to be outfitted with a receiver for clickers, I'll stay with Poll Everywhere. For one thing, my questions are already set up and waiting online. Another reason is the flexibility of being able to show a question without actually showing the poll (i.e., the window with questions and responses that the Poll Everywhere software creates). This started out as a "duct tape" fix for a technical problem, but in the end I think I prefer it because I have more control over what fits on the slide. As far as I know, Turning Point questions (the usual clicker system) can't be started unless the slide that will show the results is the current slide.

One more reason is that the system will be free for students to use, outside of whatever data charges they might incur. I will either cover the cost myself, or, if there is no Turning Point option, attempt to convince the school to do it. A plan exists where the students can pay to use the system, but I’d like to avoid that if possible. On the off chance that something goes horribly wrong and I can’t get it working again, I’d prefer to not have students wondering why they had to pay for a service that they can’t use.

Overall, I really like the idea of having a diagnostic tool for probing brains (also referred to as formative assessment, I think). I suppose my teaching process is similar to the one I use for debugging computer code: I perturb the system, observe the output, and use that to diagnose what the underlying problem might be. Poll Everywhere is not the only tool that can do this, but it is probably the one I will stick with.

Categories: Learning technologies | Tags: , , , , , | 1 Comment

The Poll Everywhere experiment: After day 3 of 15

Tech gods

This month I am teaching an introductory physical geology course that could be called "All you ever wanted to know about geology in 15 days." It is condensed into the first quarter of the Spring term, which works out to 15 classes in May.

I decided to use a classroom response system this time. I like the idea of being able to peer into the black box that is my students' learning process, and fix problems as they arise. I also like that I can challenge them with complex questions. Students get points for answering the really hard ones regardless of whether they get the right answer or not (and sometimes there is more than one reasonable answer).

Classroom response systems often involve the use of clickers, but my classroom doesn’t have a receiver, and I didn’t want to spend $250 to buy a portable one. Instead I decided to try Poll Everywhere. It is an online polling tool that can be used to present questions to students, collect their responses, display the frequency of each response, and, for a fee, tell me who answered what correctly.  An advantage of Poll Everywhere is that students can use the devices they already have to answer questions, either from a web browser or by sending text messages.

The obvious snag, a student without the requisite technology, never materialized, and setting up the students was far easier than I thought it would be. I've noticed that many are now texting their answers rather than using their browsers, even though most planned to use their browsers initially. None have asked for my help with getting set up for text messaging, and that would be an endorsement for any classroom technology in my books.

My experience with the service has not been as smooth. It is easy to create poll questions, but the window that pops up to show the poll isn't as easy to read as I would like. The main problem, however, is that I can't actually show students the polls. Aside from one instance involving random button pushing that I haven't been able to reproduce, the polls show up on my computer but are simply not projected onto the screen at the front of the classroom. I've looked around online for a solution, but the only problem addressed is polls not showing up on PowerPoint slides at all, which is not my issue. On the advice of Poll Everywhere I have updated the requisite app, but to no avail.

The work-around I’ve come up with is to make my own slides with poll questions and the possible responses. Normally, advancing to the slide on which the poll appears would trigger the poll. Instead I trigger and close the poll from my Poll Everywhere account using an iPad.  I haven’t yet tried exiting PowerPoint and showing the poll using the app, then going back to PowerPoint, because after I connect to the projector, I can’t seem to control the display other than to advance slides.

As a classroom tool, I have found the poll results to be useful already, and I was able to make some clarifications that I wouldn’t otherwise have known were necessary. I would like to look at the results in more detail to check on how each student is doing, but with all the time I’ve been spending on troubleshooting and building additional slides, I haven’t got to it yet.

It is possible that my technical problems are not caused by Poll Everywhere. All aspects of the polling system that are directly under their control have worked great. I’m curious whether I can get the polls to show up if I use a different projector, or whether other objects like videos would show on the projector I’m using now, but I have limited time to invest in experiments. This is where I’m supposed to say that I’ve learned my lesson and will henceforth test-drive new technology from every conceivable angle before actually attempting to use it in a way that matters. Only, I thought I had tested it: I ran polls in PowerPoint multiple times on my own computer, doing my best to find out what would make them not work and how to fix it. I also answered poll questions from a separate account using my computer, an iPad, and by texting to find out what the students’ experience would be and what challenges it might involve… but I never thought to check whether the projector would selectively omit the poll from the slide.  Who would have thought?

Categories: Learning technologies | Tags: , , , , , | 1 Comment
