
The Poll Everywhere experiment: After day 15 of 15

The marathon geology class is over now, and I have a few observations about the Poll Everywhere experience. These things would have helped me had I known them in advance, so here they are in case someone else might benefit.  Some of the points below are also applicable to classroom response systems in general.

Getting started

Signing up the students

As I mentioned in a previous post, this went fairly smoothly. One reason is that the process is an easy one; another is that there were enough devices for students to share when accessing the website to create an account and register. While students can use an “old-fashioned” cell phone without a browser to text their answers, they can’t use that device to get set up in the first place. I used my iPad to get two of the students started, and students shared their laptop computers with each other. My class was small (33 students), so it was relatively easy to get everyone sorted. In a large class this could be a challenge; I would probably have the students sign up in advance, and be willing to write off the first class to troubleshooting with those who couldn’t get the process to work on their own.

Voter registration

One thing I would do differently is to have students register as voters regardless of whether they plan to use their browsers to respond to questions. I told the students who would be texting that all they needed to do was have their phone numbers certified. This is true, and they appeared on my list of participants. The problem has to do with the students who respond using a web browser: if they forgot to sign in, they showed up on my list anonymously as unregistered participants. More than one student did this, so it wasn’t possible to know which student entered which answers.

If everyone were registered as a voter, then I could have selected the option to allow only registered participants to answer the questions. Those not signed in would not be able to answer using their browsers, and they would be reminded that signing in was necessary. The reason I didn’t use this option is that students texting their answers are prevented from responding unless they have also registered as voters. I could have had them go back and change their settings, but I opted instead to put a message on the first question slide of each class in large, brightly coloured letters reminding students to sign in. I also reminded them verbally at the start of class.

Grading responses

With the Presenter plan, students’ responses were automatically marked as correct or incorrect (assuming I remembered to indicate the correct answer). Under “Reports” I was able to select questions and have students’ responses to those questions listed, along with a “yes” or “no” indicating whether they got the right answer. The reports can be downloaded as a spreadsheet, and they include columns showing how many questions were asked, how many the student answered, and how many the student got correct. There is a lot of information in the spreadsheet, so it isn’t as easy as I would have liked to get a quick sense of who was having difficulty with what kind of question. Deleting some columns helped to clarify things.
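For anyone inclined to script that clean-up instead of deleting columns by hand, here is a minimal sketch using Python and pandas. The file name and column headers are my guesses at what the export contains, not Poll Everywhere’s actual format, so check them against your own download.

```python
# A minimal sketch of the column trimming described above.
# The file name and column headers are assumptions, not the
# exact names Poll Everywhere uses in its exports.
import pandas as pd

report = pd.read_csv("poll_everywhere_report.csv")

# Keep only the identity and summary columns (hypothetical names).
summary = report[["Participant", "Questions Asked",
                  "Questions Answered", "Questions Correct"]]

# Sort so the students having the most trouble appear first.
print(summary.sort_values("Questions Correct").to_string(index=False))
```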

In the end I didn’t use the statistics that Poll Everywhere provided. I was having difficulty sorting out the questions that were for testing purposes from the ones that were for discussion purposes. Maybe a “D” or “T” at the beginning of each question would have made it easier to keep track of which was which when selecting questions for generating the report. I could have used the statistics if I had generated separate reports for the discussion questions and the testing questions. Instead I made myself a worksheet and did the calculations manually. This approach would not scale up well, but it did make it a lot easier for me to see how individual students were doing.
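If each question title did carry that “D” or “T” prefix, the sorting could be scripted rather than done on a worksheet. A sketch along the same lines as above, again with hypothetical file and column names, and assuming the “Correct” column holds the “yes”/“no” values mentioned earlier:

```python
# Split graded responses into discussion ("D") and testing ("T")
# questions by a prefix on the question title, then compute each
# student's fraction correct on the testing questions only.
# "Question", "Participant", and "Correct" are hypothetical names.
import pandas as pd

responses = pd.read_csv("poll_everywhere_responses.csv")

testing = responses[responses["Question"].str.startswith("T")]
discussion = responses[responses["Question"].str.startswith("D")]
print(f"{len(discussion)} discussion responses set aside")

# The report marks correctness as "yes"/"no", so count the "yes" rows.
test_scores = testing.groupby("Participant")["Correct"].apply(
    lambda c: (c == "yes").mean()
)
print(test_scores.sort_values().to_string())
```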

Integrity of testing

Timed responses

At the outset I decided that it would be extremely inconvenient to have students put their notes away every time they had to respond to a testing question. My solution was to limit the time they had to respond to testing questions. I figured that if they didn’t know the answer, a time limit would at least restrict how much they could flip through their notes. It also helps to ask questions where the answer isn’t something they can look up. It turned out that 25 seconds was a good time limit, although they effectively had longer than that because I took time to explain the question and the possible responses. (I wanted to make sure that if they got the answer wrong it reflected a gap in their knowledge rather than a misunderstanding of what the question was asking or what the responses meant.)

There is a timer that can be set. One way to set it is in the Poll Everywhere Presenter App… if you can manage to click on the timer before the toolbar pops up and gets in your way. (I never could.) It can also be set when viewing the question on the Poll Everywhere website. The timer starts when the question starts, which means you have to initiate the question at the right time and can’t have it turned on in advance. With the work-around I was using there were too many potential complications, so I avoided the timer and either used the stopwatch on my phone or counted hippopotamuses.
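If neither of those appeals, something between the built-in timer and hippopotamuses is also easy to script. A bare-bones sketch:

```python
# A bare-bones 25-second countdown printed to the terminal once per
# second; an alternative to phone stopwatches and hippopotamuses.
import time

def countdown(seconds: int = 25) -> None:
    for remaining in range(seconds, 0, -1):
        print(f"{remaining:3d} seconds left", end="\r", flush=True)
        time.sleep(1)
    print("Time's up!       ")

countdown()
```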

Setting the correct answer to display

If you set a question to be graded, students can see whether or not they got the correct answer, but you have options as to when they see it. I noticed that by default there is a one-day delay between when the question is asked and when the answer is shown (under “Settings” and “Response History”). I wanted the students to be able to review their answers on the same day if they were so inclined, so I set an option to allow the correct answer to be shown immediately. The problem, I later discovered, is that if one student responds and then checks his or her answer, he or she can pass on the correct answer to other students.

Ask a friend

Another issue with the integrity of testing done using Poll Everywhere (or any classroom response system) is the extent to which students consult with each other before responding. I could have been strict on this point and forbidden conversation, but policing the students wasn’t something I was keen on doing. Judging by the responses, conversing with one’s neighbour didn’t exclude the possibility of both students getting the answer wrong. In a large class it would be impossible to control communication between students, which is one of the reasons why any testing done this way should probably count for only a small part of the total grade.

Who sees what when

There are two ways to turn a poll on, and each does a different thing. To receive responses, the poll has to be started. To allow students to respond using their browsers, the poll has to be “pushed” to the dedicated website. It is possible to do one without the other, and both have to be done for things to work properly. The tricky part is keeping track of what is being shown and what is not: if a question is for testing purposes, you probably don’t want it displayed before you ask it in class.

When you create a poll, it is automatically started (i.e., responses will be accepted), but not pushed. Somewhere in the flurry of setting switches I think I must have pushed some polls I didn’t intend to. I also noticed one morning as I was setting up polls that someone (listed as unregistered) had responded to a question I had created shortly before.   As far as I knew I hadn’t pushed the poll, so…?  The only explanation I can think of is that someone was responding to a different poll and texted the wrong number.  Anyway, as an extra precaution and also to catch any problems at the outset, I made the first question of the day a discussion question. Only one question shows at a time, so as long as the discussion question was up, none of the testing questions would be displayed.

Oops

One other thing to keep in mind is to check before asking a question that one hasn’t written the answer on the board. If the class suddenly goes very quiet and the responses come in as a flood, that’s probably what has happened.

Accommodating technology and life

Stuff happens. If a student misses class, he or she also misses the questions and the points that could have been scored by answering them. If the absence is for an excusable reason (or even if it isn’t), a student might ask to make up the missed questions. Since that would mean a one-on-one polling session and the construction of a whole suite of new questions, I knew it was something I didn’t want to deal with.

One could simply not count the missed questions against the student’s grade, but that wasn’t a precedent I wanted to set either. Instead I stated in the syllabus that there would not be a make-up option, but that each student would have a 10-point “head start” for the Poll Everywhere questions. Whatever the student’s score at the end of the course, I added 10 points, up to a maximum of a 100% score. I had no idea how many questions I would be asking, so 10 points was just a guess, but it ended up covering the questions for one day’s absence, which is not unreasonable.
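For clarity, the arithmetic is nothing more than adding the bonus and capping the result. A sketch, with variable names of my own invention:

```python
# The 10-point head start: add the bonus to the raw point total,
# but never past the maximum possible score.
def adjusted_score(raw_points: float, max_points: float,
                   bonus: float = 10.0) -> float:
    return min(raw_points + bonus, max_points)

# Examples: 78 of 90 possible points becomes 88; 95 of 100 caps at 100.
print(adjusted_score(78, 90))    # 88.0
print(adjusted_score(95, 100))   # 100.0
```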

Another thing the 10 points was intended to do was offset any technological problems, like a student’s smart phone battery dying at an inopportune moment, or someone texting the wrong number by accident, or accidentally clicking the wrong box on the browser. The 10 points also covered miscalculations on my part, such as making a testing question too difficult.

I still ended up forgiving missed questions in two cases: one because of a scheduling conflict with another class, and the other on compassionate grounds.

The verdict

I will be teaching in September, and I plan to use Poll Everywhere again. Even if my classroom turns out to be outfitted with a receiver for clickers, I’ll stay with Poll Everywhere. For one thing, my questions are already set up, ready and waiting online. Another reason is the flexibility of being able to show a question without actually showing the poll (i.e., the window with questions and responses that the Poll Everywhere software creates). This started out as a “duct tape” fix for a technical problem, but in the end I think I prefer it because I have more control over what can fit on the slide. As far as I know, Turning Point questions (the usual clicker system) can’t be started unless the slide that will show the results is the current slide.

One more reason is that the system will be free for students to use, apart from whatever data charges they might incur. I will either cover the cost myself or, if there is no Turning Point option, attempt to convince the school to do it. There is a plan under which students pay to use the system, but I’d like to avoid that if possible. On the off chance that something goes horribly wrong and I can’t get it working again, I’d prefer not to have students wondering why they paid for a service they can’t use.

Overall, I really like the idea of having a diagnostic tool for probing brains (also referred to as formative assessment, I think). I suppose my teaching process is similar to the one I use for debugging computer code: I perturb the system, observe the output, and use that to diagnose what the underlying problem might be. Poll Everywhere is not the only tool that can do this, but it is probably the one I will stick with.


The Poll Everywhere experiment: After day 3 of 15

[Image: Tech gods]

This month I am teaching an introductory physical geology course that could be called “All you ever wanted to know about geology in 15 days.” It is condensed into the first quarter of the Spring term and compressed into 15 classes in May.

I decided to use a classroom response system this time. I like the idea of being able to peer into the black box that is my students’ learning process and fix problems as they arise. I also like that I can challenge them with complex questions. Students get points for answering the really hard ones regardless of whether they get the right answer (and sometimes there is more than one reasonable answer).

Classroom response systems often involve the use of clickers, but my classroom doesn’t have a receiver, and I didn’t want to spend $250 to buy a portable one. Instead I decided to try Poll Everywhere. It is an online polling tool that can be used to present questions to students, collect their responses, display the frequency of each response, and, for a fee, tell me who answered what correctly.  An advantage of Poll Everywhere is that students can use the devices they already have to answer questions, either from a web browser or by sending text messages.

The obvious snag, a student without the requisite technology, never materialized, and setting up the students was far easier than I thought it would be. I’ve noticed that many are now texting their answers rather than using their browsers, even though most planned to use their browsers initially. None have asked for my help with getting set up for text messaging, which counts as an endorsement for any classroom technology in my books.

My experience with the service has not been as smooth. It is easy to create poll questions, but the window that pops up to show the poll isn’t as easy to read as I would like it to be. The main problem, however, is that I can’t actually show students the polls. Aside from one instance involving random button pushing that I haven’t been able to reproduce, the polls show up on my computer, but are simply not projected onto the screen at the front of the classroom. I’ve looked around online for a solution, but the only problem that is addressed is polls not showing up on PowerPoint slides at all, which is not my issue.  On the advice of Poll Everywhere I have updated a requisite app, but to no avail.

The work-around I’ve come up with is to make my own slides with poll questions and the possible responses. Normally, advancing to the slide on which the poll appears would trigger the poll. Instead I trigger and close the poll from my Poll Everywhere account using an iPad.  I haven’t yet tried exiting PowerPoint and showing the poll using the app, then going back to PowerPoint, because after I connect to the projector, I can’t seem to control the display other than to advance slides.

As a classroom tool, the system has already proven useful: the poll results let me make some clarifications that I wouldn’t otherwise have known were necessary. I would like to look at the results in more detail to check on how each student is doing, but with all the time I’ve been spending on troubleshooting and building additional slides, I haven’t got to it yet.

It is possible that my technical problems are not caused by Poll Everywhere. All aspects of the polling system that are directly under their control have worked great. I’m curious whether I can get the polls to show up if I use a different projector, or whether other objects like videos would show on the projector I’m using now, but I have limited time to invest in experiments. This is where I’m supposed to say that I’ve learned my lesson and will henceforth test-drive new technology from every conceivable angle before actually attempting to use it in a way that matters. Only, I thought I had tested it: I ran polls in PowerPoint multiple times on my own computer, doing my best to find out what would make them not work and how to fix it. I also answered poll questions from a separate account using my computer, an iPad, and by texting to find out what the students’ experience would be and what challenges it might involve… but I never thought to check whether the projector would selectively omit the poll from the slide.  Who would have thought?

