Here’s an idea I’ve been mulling over for the past week. (I’ve mulled it over enough to have spent many hours in an abortive attempt to build a prototype; but even simple web application programming turns out to be a nontrivial problem that requires more time than I can muster right now.) I have a large bank of questions for some of the courses I teach. These are mostly essay questions (short and long) of varying degrees of complexity that ask students to make arguments, provide evidence, point to real-world examples, etc. I usually select the questions for final exams or other forms of assessment from this pool; a student who could answer all of them well would have essentially mastered the content of the course.
I would like to have a web application that draws on this pool of questions to do this:
- In “quiz mode,” a student would either select a question or be served a random question from the pool, and then answer it. They could then move on to a different question, for as long as they wished. (Perhaps the questions could be served in such a way that students can vote for the questions they most want answered, as in Google Moderator.) The software would also allow students to rate both the quality of their own answers (how good they think each answer is) and/or their level of confidence in their answer (how certain they are that they have a good answer), as well as the difficulty of the question. Their answers and ratings would go into a database; as these accumulate, the instructor could see which questions students rate as “hard,” or which reveal characteristic problems, and focus teaching efforts there.
- In “rating mode,” students would either select a question or be served a random question from the pool, which they would view along with any (anonymized) answers from themselves, other students in the course, or even the instructor. They could then vote on which answer is best (if there is more than one) or rate the quality of existing answers. Perhaps they could also comment on or edit existing answers if they want to add something, or feel a correction is in order. As these ratings accumulate, students would get a better sense of what counts as a good or a bad answer (assuming the “wisdom of crowds” works its magic; the courses I have in mind for this sort of application have around 100 students, which seems like enough).
- In “asking mode,” the application would allow students to submit questions, which then would go into the pool and could be answered by other students in the class. (Administrators could edit the questions for clarity or reject questions that are not sufficiently related to the topic of the course).
- The application could even have something like the reputation management features of the StackExchange family of sites. Students who answer questions would gain reputation “points,” so long as their answers are rated relatively highly (fewer points for unrated or lower-rated answers); asking questions or rating answers would also earn them some reputation points, though fewer. (For purely illustrative purposes, imagine that asking a question nets you 2 reputation points, so long as it is not rejected by the instructor; rating an answer nets you 1 point; and answering a question nets you between 5 and 10 points, depending on how highly your answers are rated.) Perhaps these reputation points could be translated into actual grade points at the end of the term in accordance with some appropriate formula, though that would depend on the design of the course.
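For what it’s worth, the illustrative point scheme above is simple enough to sketch in a few lines of Python. The point values (2 for an accepted question, 1 per rating given, 5–10 per answer) come straight from the example; everything else here — the function names, and the assumption that an answer’s average rating is normalized to a 0.0–1.0 scale — is hypothetical, just to show how little machinery the scoring itself would need:

```python
def answer_points(avg_rating, min_pts=5, max_pts=10):
    """Scale an answer's points linearly with its average rating (0.0-1.0).

    An unrated or bottom-rated answer earns min_pts (5); a top-rated
    answer earns max_pts (10), per the illustrative scheme above.
    """
    return round(min_pts + (max_pts - min_pts) * avg_rating)

def reputation(questions_asked, ratings_given, answer_avg_ratings):
    """Total reputation for one student under the illustrative scheme."""
    pts = 2 * questions_asked                # each accepted question: 2 points
    pts += 1 * ratings_given                 # each rating of an answer: 1 point
    pts += sum(answer_points(r) for r in answer_avg_ratings)
    return pts
```

The end-of-term translation into grade points would then just be one more function applied to these totals, which is where the course-specific design decisions would actually live.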
As I imagine it, an application like this would offer students extensive practice in writing, especially if combined with, say, a requirement that they answer at least one question every week (in fact, this system could displace one of the traditional two essays we ask students to complete in many courses). It would also help them review all the material covered in the course: since the questions for the final exam would be drawn from the pool (or be very similar to questions there), students who use the tool would essentially be studying for the final every time they use it. (“Quizzing” yourself is one of the most useful study techniques available, and the system would be designed to give relatively quick, and eventually accurate, feedback on your answers, without the instructor having to grade hundreds of essays.) And I (the instructor) would in turn get feedback on how well students understand the material, as well as on which aspects of the course they are having difficulty with.
What do people think? What problems would you foresee emerging with a system like this?
As far as I can tell, nothing quite like this exists, though in some ways it would be like a private version of StackExchange, seeded with a pool of questions on a specific course topic and open only to people taking the course. Google Moderator has some useful features, and I’ve been thinking about using it as a study tool for students in my course this term, but it would not be fully integrated with the rest of the assessment in the way I would want. Or is there something out there that I’m missing? How difficult would it be to develop the system I’ve described above?