The marathon geology class is over now, and I have a few observations about the Poll Everywhere experience. These things would have helped me had I known them in advance, so here they are in case someone else might benefit. Some of the points below are also applicable to classroom response systems in general.
Signing up the students
As I mentioned in a previous post, this went fairly smoothly. One reason is that the process is an easy one; another is that there were enough devices for students to share when accessing the website to create an account and register. While students can use an “old-fashioned” cell phone without a browser to text their answers, they can’t use that device to get set up in the first place. I used my iPad to get two of the students started, and students shared their laptop computers with each other. My class was small (33 students), so it was relatively easy to get everyone sorted. In a large class this could be a challenge. I would probably have the students sign up in advance of class, and then be willing to write off the first class to troubleshooting with those who couldn’t get the process to work on their own.
One thing I would do differently is to have students register as voters regardless of whether they plan to use their browsers to respond to questions. I told the students who would be texting that all they needed to do was have their phone numbers certified. This is true, and they appeared on my list of participants. The problem was with the students responding using a web browser: if they forgot to sign in, they showed up on my list anonymously as unregistered participants. More than one student did this, so it wasn’t possible to know which student had entered which answers.
If everyone were registered as a voter, then I could have selected the option to allow only registered participants to answer the questions. Those not signed in would not be able to answer using their browsers, and they would be reminded that signing in was necessary. The reason I didn’t use this option is that it also blocks texted responses from anyone who hasn’t registered as a voter, and my texting students had only certified their phone numbers. I could have had them go back and change their settings, but I opted instead to put a message on the first question slide of each class in large, brightly coloured letters reminding students to sign in. I also reminded them verbally at the start of class.
Grading and reports

With the Presenter plan, students’ responses were automatically marked as correct or incorrect (assuming I remembered to indicate the correct answer). Under “Reports” I was able to select questions and have students’ responses to those questions listed, along with a “yes” or “no” for whether they got the right answer. The reports can be downloaded as a spreadsheet, and they include columns showing how many questions were asked, how many the student answered, and how many the student got correct. There is a lot of information in the spreadsheet, so it wasn’t as easy as I would have liked to get a quick sense of who was having difficulty with what kind of question. Deleting some columns helped to clarify things.
In the end I didn’t use the statistics that Poll Everywhere provided. I was having difficulty sorting out the questions that were for testing purposes from the ones that were for discussion purposes. Maybe a “D” or “T” at the beginning of each question would have made it easier to keep track of which was which when selecting questions for generating the report. I could have used the statistics if I had generated separate reports for the discussion questions and the testing questions. Instead I made myself a worksheet and did the calculations manually. This approach would not scale up well, but it did make it a lot easier for me to see how individual students were doing.
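For what it’s worth, the manual tally could be scripted. Here is a minimal sketch in Python, assuming a simplified report layout; the column names and the “T ”/“D ” question prefixes are my own inventions (the real Poll Everywhere export has a different, wider format):

```python
# Hypothetical sketch of the per-student tally described above.
# The CSV layout and the "T "/"D " prefixes are assumptions,
# not Poll Everywhere's actual export format.
import csv
import io

report_csv = """student,question,correct
Alice,T What mineral is this?,yes
Alice,D Which explanation do you prefer?,no
Bob,T What mineral is this?,no
Bob,T Which rock type forms here?,yes
"""

scores = {}  # student -> [testing questions answered, correct]
for row in csv.DictReader(io.StringIO(report_csv)):
    if not row["question"].startswith("T "):
        continue  # skip discussion ("D ") questions
    counts = scores.setdefault(row["student"], [0, 0])
    counts[0] += 1
    counts[1] += row["correct"] == "yes"

for student, (answered, correct) in sorted(scores.items()):
    print(f"{student}: {correct}/{answered} testing questions correct")
```

Separating testing from discussion questions this way would also make it easy to generate the two separate reports mentioned above.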
Integrity of testing
At the outset I decided that it would be extremely inconvenient to have students put their notes away every time they had to respond to a testing question. My solution was to limit the time they had to respond to testing questions. I figured that if they didn’t know the answer, that would at least restrict how much they flipped through their notes. It also helps to ask questions where the answer isn’t something they can look up. It turned out that 25 seconds was a good time limit, although they got longer than that because I took time to explain the question and the possible responses. (I wanted to make sure that if they got the answer wrong it reflected a gap in their knowledge rather than a misunderstanding of what the question was asking or what the responses meant.)
There is a timer that can be set. One way to set it is when using the Poll Everywhere Presenter App… if you can manage to click on the timer before the toolbar pops up and gets in your way. (I never could.) It can also be set when viewing the question on the Poll Everywhere website. The timer starts when the question starts, which means you have to initiate the question at the right time, and can’t have it turned on in advance. With the work-around I was using, there were too many potential complications, so I avoided the timer and either used the stopwatch on my phone or counted hippopotamuses.
Setting the correct answer to display
If you set a question to be graded, students can see whether or not they got the correct answer, but you have options as to when they see it. I noticed that by default there is a one-day delay between when the question is asked and when the answer is shown (under “Settings” and “Response History”). I wanted the students to be able to review their answers on the same day if they were so inclined, so I set an option to allow the correct answer to be shown immediately. The problem, I later discovered, is that if one student responds and then checks his or her answer, he or she can pass on the correct answer to other students.
Ask a friend
Another issue with the integrity of testing done using Poll Everywhere (or any classroom response system) is the extent to which students consult with each other before responding. I could have been strict on this point and forbidden conversation, but policing the students wasn’t something I was keen on doing. Judging by the responses, conversing with one’s neighbour didn’t exclude the possibility of both students getting the answer wrong. In a large class it would be impossible to control communication between students, which is one reason why any testing done using this method should probably represent only a small part of the total grade.
Who sees what when
There are two ways to turn a poll on, and they each do different things. To receive responses, the poll has to be started. To allow students to respond using their browsers, the poll has to be “pushed” to the dedicated website. It is possible to do one of these things without doing the other, and both have to be done for things to work properly. The tricky part is keeping track of what is being shown and what is not. If a question is for testing purposes then you probably don’t want it to be displayed before you ask it in class.
When you create a poll, it is automatically started (i.e., responses will be accepted), but not pushed. Somewhere in the flurry of setting switches I think I must have pushed some polls I didn’t intend to. I also noticed one morning as I was setting up polls that someone (listed as unregistered) had responded to a question I had created shortly before. As far as I knew I hadn’t pushed the poll, so…? The only explanation I can think of is that someone was responding to a different poll and texted the wrong number. Anyway, as an extra precaution and also to catch any problems at the outset, I made the first question of the day a discussion question. Only one question shows at a time, so as long as the discussion question was up, none of the testing questions would be displayed.
One other thing to keep in mind is to check before asking a question that one hasn’t written the answer on the board. If the class suddenly goes very quiet and the responses come in as a flood, that’s probably what has happened.
Accommodating technology and life
Stuff happens. If a student misses class, he or she will also miss the questions and the points that could have been scored for answering them. If the absence is for an excusable reason (or even if it isn’t) a student might ask to make up the missed questions. As this would take the form of a one-on-one polling session, and the construction of a whole suite of new questions, I knew it was something I didn’t want to deal with.
One could simply not count the missed questions against the student’s grade, but that wasn’t a precedent I wanted to set either. Instead I stated in the syllabus that there would not be a make-up option, but that each student would have a 10-point “head start” for the Poll Everywhere questions. Whatever the student’s score at the end of the course, I added 10 points, up to a maximum of a 100% score. I had no idea how many questions I would be asking, so 10 points was just a guess, but it ended up covering the questions for one day’s absence, which is not unreasonable.
Another thing the 10 points was intended to do was offset any technological problems, like a student’s smart phone battery dying at an inopportune moment, or someone texting the wrong number by accident, or accidentally clicking the wrong box on the browser. The 10 points also covered miscalculations on my part, such as making a testing question too difficult.
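The head-start rule amounts to a one-line calculation. The function below is only an illustration; the names and the 100-point scale are my assumptions, not anything Poll Everywhere provides:

```python
# Hypothetical sketch of the 10-point "head start" adjustment
# described above; names and the 100-point scale are assumptions.
def adjusted_score(raw_points: float, head_start: float = 10.0,
                   maximum: float = 100.0) -> float:
    """Add the head start, capping the total at 100%."""
    return min(raw_points + head_start, maximum)

print(adjusted_score(72))  # 82.0
print(adjusted_score(95))  # capped at 100.0
```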
I still ended up forgiving missed questions in two cases: one because of a scheduling conflict with another class, and the other on compassionate grounds.
I will be teaching in September, and I plan to use Poll Everywhere again. Even if it happens that my classroom is outfitted with a receiver for clickers, I’ll still stay with Poll Everywhere. For one, my questions are already set up, ready and waiting online. Another reason is the flexibility of being able to show a question without actually showing the poll (i.e., the window with questions and responses that the Poll Everywhere software creates). This started out as a “duct tape” fix for a technical problem, but in the end I think I prefer it because I have more control over what can fit on the slide. As far as I know, Turning Point questions (the usual clicker system) can’t be started unless the slide that will show the results is the current slide.
One more reason is that the system will be free for students to use, outside of whatever data charges they might incur. I will either cover the cost myself, or, if there is no Turning Point option, attempt to convince the school to do it. A plan exists where the students can pay to use the system, but I’d like to avoid that if possible. On the off chance that something goes horribly wrong and I can’t get it working again, I’d prefer to not have students wondering why they had to pay for a service that they can’t use.
Overall, I really like the idea of having a diagnostic tool for probing brains (also referred to as formative assessment, I think). I suppose my teaching process is similar to the one I use for debugging computer code: I perturb the system, observe the output, and use that to diagnose what the underlying problem might be. Poll Everywhere is not the only tool that can do this, but it is probably the one I will stick with.