Posts Tagged With: Poll Everywhere

Dear Ed Tech: This Is What You Don’t Understand About Higher Education

I am the kind of tired that makes you feel hollow inside, so maybe this isn’t the best time to be writing this, but then again, maybe it is. I just got back from my Monday-Tuesday teaching overnighter out of town. I’m a hired gun in the world of higher education- sometimes we’re called adjunct faculty, sometimes sessional lecturers, and sometimes any of a number of other terms that are beyond my ability to recall at the moment. But you know who we are.

The problem is that being able to learn about educational technologies is really a luxury for my lot. I’ve been able to take many free courses which I’ve enjoyed very much, but I was only able to take them because I could afford to not fill that time with paid work. Full-time faculty on campus who opt to attend a course are doing so during the work day, but hired guns do it on their own time. Many of my colleagues simply wouldn’t be able to take the time- I’m thinking of you, Elaine, with your 8 courses this term in at least three different communities. So the first thing you need to know, Ed Tech, is that a substantial number of the people teaching courses at universities are hired guns like me, and many of those are on the razor’s edge of being able to support their teaching habits.

Part of being a hired gun is not having job security. You should care about this, Ed Tech, because the many wonderful tools you offer require a lot of work up-front. It’s a big decision whether or not to use a technology when learning it and preparing materials happens on your own time. It’s an even bigger decision when access to a tool depends on your employment status, as it often does with institutional subscriptions to software.

My blog, for example, started out on a university WordPress service, but after the jarring experience of having my computing access cut off between contracts, and facing the loss of the materials I created, I moved it and absorbed the costs associated with making it ad-free.

The same university is working on updating their in-class response system. I’m using one now- Poll Everywhere, which also happens to be something I can afford out-of-pocket- and the chance that I would adopt the system they choose is zero. It doesn’t matter how good the system is. What matters is that it takes a lot of time to set up questions and to embed them into presentations. Is it worth spending the time if I only get to use those questions once, or, assuming I’m teaching a similar class elsewhere, am unable to access them? This more or less guarantees that whatever system the university chooses will be utilized far less than they would like.

I came face to face with this issue more recently when discussing a home for the open textbook adaptation I’m working on. First of all, I’ve spent 131 hours on this adaptation so far, according to the timer I use to track my various ill-advised activities. That doesn’t include the 65 hours I spent writing a chapter for the original version of the book (for which, I must add, I was compensated- something I appreciated as an acknowledgement of my work as much as for the income).

My free Pressbooks account didn’t have enough space for the media library, so I upgraded at my own expense. I then learned that the university is setting up its own version of Pressbooks, but faced with the possibility of losing access to what now seems like a ridiculous amount of work, I would never consider using their account to work on my textbook. I would also be nervous about having my students use a version hosted on the university’s system because I’m not clear on whether I would have access to edit it once it got put there. (I have no idea how authors of print materials aren’t driven nuts by being unable to edit at will.)

In my present state of near-faceplant exhaustion, it appears that I’ve made a great many poor life-choices. I can justify this in my better moments as things that are important to do for my students, but on days like today, all I can think of is why oh why am I killing myself with this?

Ed Tech, you need to realize that many of the people teaching in higher education are not in a position to be as frivolous with their time as I have been. In the push to get instructors to adopt various kinds of educational technology, it isn’t just a matter of convincing them that it’s good for students. They very likely know that already. The challenge is convincing them that they should commit to a technology in spite of the personal and financial burden, not to mention being treated like the education version of a paper plate (it works, it’s cheap, it’s disposable, there are lots more where it came from) by the schools that would benefit from their labour.

The commitment you’re asking for isn’t the same as it would be for full-time faculty, and I don’t think you realize how frustrating- even insulting- it is when you discuss the problem of adoption in terms of instructors being resistant to change, too lazy to change, or just not getting it. Especially when you yourselves are comfortably ensconced in a full-time position. For hired guns like me, the only compensation is warm fuzzies. When you’re a dead-inside kind of tired, warm fuzzies are entirely inadequate.

Categories: Challenges, Learning technologies, Textbooks

Crowd-sourcing distance education (or, Why Athabasca University’s problems are just getting started)

Last week there was another missive from Athabasca University’s interim president, Peter MacKinnon. The post appears to be damage control after a Metro article by Jeremey Nolais, “Fears arise that Alberta’s Athabasca University will be lost as tough budget looms.” The post says that while Athabasca is facing “financial challenges that are acute” and “a decline in the rate of enrollment growth,” the rumors in the media that it will be merged with another school or shut down are untrue.

What I found interesting about the article were the comments. There are only six of them at this point, and three were complaints about insufficient interaction with tutors. They weren’t complaints about the call centre model, where students contact the call centre and a call centre employee determines whether an Academic Expert should be informed that the student needs assistance (contact ensuing within two business days). Instead they were complaints from students who had tutors but felt that they weren’t hearing from those tutors enough. As one student says, “… I did not pay to be completely ignored and paid to be TAUGHT.” [original emphasis] There were also complaints about the quality of education, with the blame for that laid at the feet of tutors as well.

Given the controversy surrounding the call centre, and the seemingly obvious thing to comment about- that those who were unhappy with insufficient contact from their tutors could rest assured that they would soon have no contact with a tutor whatsoever- it is surprising that no comments of that nature were posted. After some experimentation, I determined that those points of view were being moderated out. The upshot is that readers will come away with the conclusion that what is wrong with Athabasca is its tutors, which is very convenient for the administration at present. However, there is also a very real risk of discouraging students who might otherwise register for courses that are still running under the tutor model. If someone at AU thought that was a risk worth taking… well, make of that what you will.

I’m not saying that all tutors do the job that students want them to- some tutors may not do the best job they can for any number of reasons, and some students may have bought into the misconception that they have a full-time teacher assigned to them.

But Athabasca’s problem isn’t tutors not doing what students want them to. Its problem is a structure designed in the days when distance education required sending students boxes of paper in the mail. This is a problem because the system that can most efficiently manage hard-copy course materials is one in which teachers cannot have the autonomy to alter their course materials at will to respond to students’ needs. Think of the nightmare that keeping track of document versions would be! There would be no control on the workflow (and therefore on costs) if instructors could alter materials whenever they found a better way to teach. In order for the school to function, teaching has to take a back seat to operations.

Kids these days

On the surface, it would seem that Athabasca has moved past this, with an online learning management system, online exams, and digital textbooks. At its core, however, Athabasca is still structured so that it is necessary to inhibit its teachers in order for it to operate as efficiently as possible. The reason I think that Athabasca’s problems are just beginning is that the costly and harmful structure they are fighting to protect is rapidly becoming unnecessary for distance education.

Think of it this way- these days most universities run courses through online learning management systems. Using these systems, instructors can post documents, set up quizzes and exams, post video links, post videos and podcasts of lectures, host class discussions, and more. It is entirely doable with a very minimal outlay for me to broadcast my in-class lectures live online, and have students answer questions in real time through Poll Everywhere while watching that lecture. We could discuss their answers as a group, and I could adjust my lecture on the fly to address issues that they were having. Students could even submit questions through Twitter during the lecture.

With courses set up this way, no-one sends students a box of paper course materials. Students download and print what they want, access the rest online, and purchase textbooks directly from vendors in the format they want. Students can take a course and engage fully with a community of learners and an instructor without being in the classroom, whether that course is designated as distance education or not.

In contrast, Athabasca is structurally incapable of empowering its front-line teaching staff to act in their students’ best interest. It has people to grade papers and answer questions, but it doesn’t afford those people the mantle of teachers, the salary so they can be committed to students full-time, or the autonomy to fix issues with courses as they arise. Consider this: I don’t have access to the course materials that I wrote.  If I want to fix a typo, there is a separate group of people who handle that sort of thing, and I have to make a request to get it done. I had to hunt around to find out who those people were. If I had the same control over my Athabasca courses as I do with some other courses I teach, I would just take the three seconds to fix the typo myself and not tie up IT people, and who knows who else. I would likely be updating the course regularly to improve it, which means that a separate expenditure on a Subject Matter Expert (who is also me) to revise the course every so many years would be unnecessary.

From a business perspective, it might have been safe at one time to compromise on teaching if you were the only game in town that could mail out those boxes of paper. But what happens when mailing out boxes of paper becomes irrelevant to serving students at a distance? What happens when the competition is no longer other distance education schools or programs- when it becomes hundreds or even thousands of individual creative, energetic, and innovative instructors at traditional brick-and-mortar schools who choose to build and manage their own online courses? What happens when the additional cost of running those courses is trivial, because the resources are already there as part of how on-site students are served? Well, what happens is that the competition is essentially crowd-sourced, and can do a better job with lower costs and happier teachers.

I don’t know what will become of Athabasca. As long as it offers programs that no-one else does, there will be a demand for its product, and perhaps it will begin to focus on that segment of the market instead of a broad swath of undergraduate courses. But if it does offer programs that no-one else does, that will have more to do with no-one else choosing to offer those programs than with being unable to do so in a cost-effective manner. Athabasca will not change the way it does business because it is firmly committed to the notion that as long as the school is run as a business, the rest will take care of itself. The call centre model- where by design, the first person students talk to will never be the person teaching them- is evidence of that. There is an entrenched culture which holds front-line teachers in such low regard as to view answer databases and non-teaching call centre employees as a better alternative. This exists because at some level, Athabasca views itself as an organization for delivering courses rather than for teaching students.

Categories: Distance education and e-learning, Learning technologies, The business of education

The Poll Everywhere experiment: After day 15 of 15

The marathon geology class is over now, and I have a few observations about the Poll Everywhere experience. These things would have helped me had I known them in advance, so here they are in case someone else might benefit.  Some of the points below are also applicable to classroom response systems in general.

Getting started

Signing up the students

As I mentioned in a previous post, this went fairly smoothly.  One reason is that the process is an easy one, but another reason is that there were enough devices for students to share in order to access the website for creating an account and registering. While students can use an “old-fashioned” cell phone without a browser to text their answers, they can’t use that device to get set up in the first place. I used my iPad to get two of the students started, and students shared their laptop computers with each other. My class was small (33 students), so it was relatively easy to get everyone sorted.  If the class is a large one this could be a challenge. I would probably have the students sign up in advance of class, and then be willing to write off the first class for purposes of troubleshooting with those who couldn’t get the process to work for themselves.

Voter registration

One thing I would do differently is to have students register as a voter regardless of whether they plan to use their browser to respond to questions or not. I told the students who would be texting that all they needed to do was have their phone numbers certified. This is true, and they appeared on my list of participants. The problem has to do with the students who are responding using a web browser. If they forgot to sign in then they showed up on my list anonymously as an unregistered participant. More than one student did this, so it wasn’t possible to know which student entered which answers.

If everyone were registered as a voter, then I could have selected the option to allow only registered participants to answer the questions. Those not signed in would not be able to answer using their browsers, and they would be reminded that signing in was necessary. The reason I didn’t use this option is that students texting their answers are prevented from responding unless they have also registered as voters. I could have had them go back and change their settings, but I opted instead to put a message on the first question slide of each class in large, brightly coloured letters reminding students to sign in. I also reminded them verbally at the start of class.

Grading responses

With the Presenter plan, students’ responses were automatically marked as correct or incorrect (assuming I remembered to indicate the correct answer). Under “Reports” I was able to select questions and have students’ responses to those questions listed, along with a “yes” or “no” indicating whether they got the right answer. The reports can be downloaded as a spreadsheet, and they include columns showing how many questions were asked, how many the student answered, and how many the student got correct. There is a lot of information in the spreadsheet, so it isn’t as easy as I would have liked to get a quick sense of who was having difficulty with what kind of question. Deleting some columns helped to clarify things.

In the end I didn’t use the statistics that Poll Everywhere provided. I was having difficulty sorting out the questions that were for testing purposes from the ones that were for discussion purposes. Maybe a “D” or “T” at the beginning of each question would have made it easier to keep track of which was which when selecting questions for generating the report. I could have used the statistics if I had generated separate reports for the discussion questions and the testing questions. Instead I made myself a worksheet and did the calculations manually. This approach would not scale up well, but it did make it a lot easier for me to see how individual students were doing.
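For the curious, here is roughly how that worksheet could be automated if it ever had to scale up- a Python sketch, not anything Poll Everywhere provides. The column names (“Student”, “Question”, “Correct”) and the “T:”/“D:” prefixes are my own illustration, not the actual export format:

```python
import csv
import io

# Hypothetical report excerpt. Poll Everywhere's real export has more
# columns and different names; this is just the shape of the idea.
REPORT = """Student,Question,Correct
Alice,T: What mineral is this?,yes
Alice,D: What did you find surprising?,
Bob,T: What mineral is this?,no
Bob,T: Which rock type is older?,yes
"""

def summarize(report_text):
    """Tally (questions answered, questions correct) per student,
    counting only questions titled with the 'T:' (testing) prefix."""
    totals = {}
    for row in csv.DictReader(io.StringIO(report_text)):
        if not row["Question"].startswith("T:"):
            continue  # skip discussion ('D:') questions
        asked, correct = totals.get(row["Student"], (0, 0))
        totals[row["Student"]] = (asked + 1,
                                  correct + (row["Correct"] == "yes"))
    return totals

print(summarize(REPORT))
# → {'Alice': (1, 1), 'Bob': (2, 1)}
```

With the prefixes in the question titles, separating the testing statistics from the discussion statistics becomes a one-line filter instead of two separate report downloads.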

Integrity of testing

Timed responses

At the outset I decided that it would be extremely inconvenient to have students put their notes away every time they had to respond to a testing question. My solution was to limit the time they had to respond to testing questions. I figured that if they didn’t know the answer, that would at least restrict how much they flipped through their notes.  It also helps to ask questions where the answer isn’t something they can look up.   It turned out that 25 seconds was a good time limit, although they got longer than that because I took time to explain the question and the possible responses. (I wanted to make sure that if they got the answer wrong it reflected a gap in their knowledge rather than a misunderstanding of what the question was asking or what the responses meant.)

There is a timer that can be set.  One way to set it is when using the Poll Everywhere Presenter App… if you can manage to click on the timer before the toolbar pops up and gets in your way. (I never could.) It can also be set when viewing the question on the Poll Everywhere website. The timer starts when the question starts, which means you have to initiate the question at the right time, and can’t have it turned on in advance. With the work-around I was using, there were too many potential complications, so I avoided the timer and either used the stopwatch on my phone or counted hippopotamuses.
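If you also end up avoiding the built-in timer, a phone stopwatch works fine, but even a few lines of Python will count down for you- a generic sketch, nothing Poll Everywhere-specific:

```python
import time

def countdown(seconds=25):
    """Print a once-per-second countdown, then announce that time is up."""
    for remaining in range(seconds, 0, -1):
        # '\r' rewrites the same terminal line each second
        print(f"{remaining:2d} seconds left", end="\r", flush=True)
        time.sleep(1)
    print("\nTime's up!")

countdown(3)  # 25 was the limit that worked in my class; 3 here for brevity
```

It beats counting hippopotamuses, though it does mean having a terminal window handy at the podium.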

Setting the correct answer to display

If you set a question to be graded, students can see whether or not they got the correct answer, but you have options as to when they see it. I noticed that by default there is a one-day delay between when the question is asked and when the answer is shown (under “Settings” and “Response History”). I wanted the students to be able to review their answers on the same day if they were so inclined, so I set an option to allow the correct answer to be shown immediately. The problem, I later discovered, is that if one student responds and then checks his or her answer, he or she can pass on the correct answer to other students.

Ask a friend

Another issue with the integrity of testing done using Poll Everywhere (or any classroom response system) is the extent to which students consult with each other prior to responding. I could have been particular on this point, and forbidden conversation, but the task of policing the students wasn’t something I was keen on doing. Judging by the responses, conversing with one’s neighbour didn’t exclude the possibility of both students getting the answer wrong. In a large class it would be impossible to control communications between students, which is one of the reasons why any testing done using this method should probably represent only a small part of the total grade.

Who sees what when

There are two ways to turn a poll on, and they each do different things. To receive responses, the poll has to be started. To allow students to respond using their browsers, the poll has to be “pushed” to the dedicated website. It is possible to do one of these things without doing the other, and both have to be done for things to work properly. The tricky part is keeping track of what is being shown and what is not. If a question is for testing purposes then you probably don’t want it to be displayed before you ask it in class.

When you create a poll, it is automatically started (i.e., responses will be accepted), but not pushed. Somewhere in the flurry of setting switches I think I must have pushed some polls I didn’t intend to. I also noticed one morning as I was setting up polls that someone (listed as unregistered) had responded to a question I had created shortly before.   As far as I knew I hadn’t pushed the poll, so…?  The only explanation I can think of is that someone was responding to a different poll and texted the wrong number.  Anyway, as an extra precaution and also to catch any problems at the outset, I made the first question of the day a discussion question. Only one question shows at a time, so as long as the discussion question was up, none of the testing questions would be displayed.

Oops

One other thing to keep in mind is to check, before asking a question, that you haven’t already written the answer on the board. If the class suddenly goes very quiet and the responses come in as a flood, that’s probably what has happened.

Accommodating technology and life

Stuff happens. If a student misses class, he or she will also miss the questions and the points that could have been scored for answering them. If the absence is for an excusable reason (or even if it isn’t) a student might ask to make up the missed questions. As this would take the form of a one-on-one polling session, and the construction of a whole suite of new questions, I knew it was something I didn’t want to deal with.

One could simply not count the missed questions against the student’s grade, but that wasn’t a precedent I wanted to set either. Instead I stated in the syllabus that there would not be a make-up option, but that each student would have a 10-point “head start” for the Poll Everywhere questions. Whatever the student’s score at the end of the course, I added 10 points, up to a maximum of a 100% score. I had no idea how many questions I would be asking, so 10 points was just a guess, but it ended up covering the questions for one day’s absence, which is not unreasonable.

Another thing the 10 points was intended to do was offset any technological problems, like a student’s smart phone battery dying at an inopportune moment, or someone texting the wrong number by accident, or accidentally clicking the wrong box on the browser. The 10 points also covered miscalculations on my part, such as making a testing question too difficult.
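The arithmetic behind the head start is trivial, but for completeness, here it is as a Python sketch (the function name and the percentage scale are my own illustration, not part of any grading system):

```python
def final_poll_score(percent_earned, head_start=10, maximum=100):
    """Apply the 10-point head start to a student's poll-question
    percentage, capped so no-one exceeds a perfect score."""
    return min(percent_earned + head_start, maximum)

print(final_poll_score(72))   # → 82 (a missed day is roughly absorbed)
print(final_poll_score(95))   # → 100 (capped at a perfect score)
```

The cap matters: without it, students who answered everything correctly would end up over 100%, which makes the head start look like bonus marks rather than an allowance for stuff happening.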

I still ended up forgiving missed questions in two cases: one because of a scheduling conflict with another class, and the other on compassionate grounds.

The verdict

I will be teaching in September, and I plan to use Poll Everywhere again. Even if it happens that my classroom is outfitted with a receiver for clickers, I’ll still stay with Poll Everywhere.  For one, my questions are already set up, ready and waiting online. Another reason is the flexibility of being able to show a question without actually showing the poll (i.e., the window with questions and responses that the Poll Everywhere software creates). This started out as a “duct tape” fix for a technical problem, but in the end I think I prefer it because I have more control over what can fit on the slide. As far as I know, Turning Point questions (the usual clicker system) can’t be started unless the slide that will show the results is the current slide.

One more reason is that the system will be free for students to use, outside of whatever data charges they might incur. I will either cover the cost myself, or, if there is no Turning Point option, attempt to convince the school to do it. A plan exists where the students can pay to use the system, but I’d like to avoid that if possible. On the off chance that something goes horribly wrong and I can’t get it working again, I’d prefer to not have students wondering why they had to pay for a service that they can’t use.

Over all, I really like the idea of having a diagnostic tool for probing brains (also referred to as formative assessment, I think). I suppose my teaching process is similar to the one I use for debugging computer code: I perturb the system, observe the output, and use that to diagnose what the underlying problem might be. Poll Everywhere is not the only tool that can do this, but it is probably the one I will stick with.

Categories: Learning technologies

The Poll Everywhere experiment: After day 3 of 15

This month I am teaching an introductory physical geology course that could be called “All you ever wanted to know about geology in 15 days.” It is condensed into the first quarter of the Spring term, and so compressed into 15 classes in May.

I decided to use a classroom response system this time. I like the idea of being able to peer into the black box that is my students’ learning process, and fix problems as they arise. I also like that I can challenge them with complex questions. Students get points for answering the really hard ones regardless of whether they get the right answer or not (and sometimes there is more than one reasonable answer).

Classroom response systems often involve the use of clickers, but my classroom doesn’t have a receiver, and I didn’t want to spend $250 to buy a portable one. Instead I decided to try Poll Everywhere. It is an online polling tool that can be used to present questions to students, collect their responses, display the frequency of each response, and, for a fee, tell me who answered what correctly.  An advantage of Poll Everywhere is that students can use the devices they already have to answer questions, either from a web browser or by sending text messages.

The obvious snag, that someone didn’t have the requisite technology, didn’t occur, and setting up the students was far easier than I thought it would be.  I’ve noticed that many are now texting their answers rather than using their browsers, even though most planned to use their browsers initially. None have asked for my help with getting set up for text messaging, and that would be an endorsement for any classroom technology in my books.

My experience with the service has not been as smooth. It is easy to create poll questions, but the window that pops up to show the poll isn’t as easy to read as I would like it to be. The main problem, however, is that I can’t actually show students the polls. Aside from one instance involving random button pushing that I haven’t been able to reproduce, the polls show up on my computer, but are simply not projected onto the screen at the front of the classroom. I’ve looked around online for a solution, but the only problem that is addressed is polls not showing up on PowerPoint slides at all, which is not my issue.  On the advice of Poll Everywhere I have updated a requisite app, but to no avail.

The work-around I’ve come up with is to make my own slides with poll questions and the possible responses. Normally, advancing to the slide on which the poll appears would trigger the poll. Instead I trigger and close the poll from my Poll Everywhere account using an iPad.  I haven’t yet tried exiting PowerPoint and showing the poll using the app, then going back to PowerPoint, because after I connect to the projector, I can’t seem to control the display other than to advance slides.

As a classroom tool, I have found the poll results to be useful already, and I was able to make some clarifications that I wouldn’t otherwise have known were necessary. I would like to look at the results in more detail to check on how each student is doing, but with all the time I’ve been spending on troubleshooting and building additional slides, I haven’t got to it yet.

It is possible that my technical problems are not caused by Poll Everywhere. All aspects of the polling system that are directly under their control have worked great. I’m curious whether I can get the polls to show up if I use a different projector, or whether other objects like videos would show on the projector I’m using now, but I have limited time to invest in experiments. This is where I’m supposed to say that I’ve learned my lesson and will henceforth test-drive new technology from every conceivable angle before actually attempting to use it in a way that matters. Only, I thought I had tested it: I ran polls in PowerPoint multiple times on my own computer, doing my best to find out what would make them not work and how to fix it. I also answered poll questions from a separate account using my computer, an iPad, and by texting to find out what the students’ experience would be and what challenges it might involve… but I never thought to check whether the projector would selectively omit the poll from the slide.  Who would have thought?

Categories: Learning technologies
