Friday, May 9, 2014

Assessment in Blendkit

Week 3 this time, and I have to say I was completely underwhelmed by the reading on assessment strategies.

The biggest issue was the lack of any discussion at all for us physical scientists in the room.  How can you do any form of online assessment when you're dealing with highly technical material, often involving long derivations or problems where a single math error gets you a wrong multiple-choice answer even if you knew the entire process?  Many years ago I wrote a quiz question engine that would randomize the numbers in an equation (a rough sketch of the idea is below), which at least solves the "The answer for #4 is B" problem, but it still does a lousy job of helping those folks who just stumble instead of fall.

Back at Va. Tech, when I had classes of 180 intro chem students, I was forced to use MC in-class exams, but I always offered students the choice to take a "Write it all out" test with no prechosen answers, in which I'd offer partial credit.  5-6 would take me up on this, and they always did very well.  (Partly because they were always the best students...) For PChem I got some great advice from an older professor and just banned calculators during exams.  I don't care about the final number; I only care whether you can get there.
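
For what it's worth, that randomizing question engine boiled down to something like this.  This is a minimal Python sketch with an invented dilution question and a made-up 2% grading tolerance, not a reconstruction of the original code:

```python
# Minimal sketch of a number-randomizing quiz question; the dilution
# problem and the 2% grading tolerance are invented for illustration.
import random

def make_dilution_question():
    """Build a C1*V1 = C2*V2 dilution problem with randomized numbers."""
    c1 = round(random.uniform(0.5, 3.0), 2)   # stock concentration, M
    v1 = random.randint(5, 50)                # volume of stock taken, mL
    v2 = random.choice([100, 250, 500])       # final volume, mL
    answer = c1 * v1 / v2                     # final concentration, M
    text = ("You dilute {} mL of a {} M stock to a final volume of {} mL. "
            "What is the final concentration in M?").format(v1, c1, v2)
    return text, answer

def check(response, answer, rel_tol=0.02):
    """Accept any numeric response within a small relative tolerance."""
    return abs(response - answer) <= rel_tol * abs(answer)

text, answer = make_dilution_question()
print(text)
print(check(round(answer, 4), answer))   # a correctly worked answer passes
```

Every student gets different numbers, so copying a neighbor's letter choice is useless, but a sign error partway through still earns zero.  That's exactly the "stumble instead of fall" problem.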

The reading does point out that it's important for students to feel comfortable with the tools needed to respond to a question and to have access to the necessary technical resources.  This is a huge issue for chemistry, organic and physical alike.  Short of scanning pages of handwriting, the tools for writing structures or equations online are going to be new to students and are often very complex and difficult to master.

Things like one-sentence summaries and online discussions are helpful to an extent, but when the students themselves don't know the material, it's almost impossible for them to judge the accuracy of the responses.  I saw this quite a bit in the original Stanford AI MOOC: there were a ton of folks who knew the material very well and could explain it, but there were also folks who tried and failed, misleading other students.  Generally those answers got downvoted, but not always.

The Coursera Game Programming in Python course is the closest thing to an actual, functioning method I've seen yet.  Each assignment has a *very* detailed rubric with specific points awarded for specific tasks.  Peers grade using this rubric and can add additional comments.  It's clear the instructors spent a ton of time developing these rubrics, and they are *far* more detailed than anything discussed in the readings.  As an alternative, the optional project way back in the AI MOOC also worked well: you had to decrypt a chunk of text and enter the resulting sentence.  Self-checking: a minor error is obvious, and a major problem doesn't let you get anything at all.
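
A toy version of that self-checking idea, assuming a simple Caesar shift and a sentence of my own choosing (I don't remember the actual AI-class cipher or text, so none of this is theirs):

```python
# Toy self-checking assignment: decrypt the ciphertext and submit the
# sentence.  The Caesar shift, sentence, and hash check are my inventions.
import hashlib

def caesar_shift(text, shift):
    """Shift alphabetic characters by `shift` places; leave everything else alone."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

PLAINTEXT = "assessment is really really hard"
CIPHERTEXT = caesar_shift(PLAINTEXT, 3)       # what the student is handed
ANSWER_HASH = hashlib.sha256(PLAINTEXT.encode()).hexdigest()

def grade(submission):
    """The grader stores only a hash of the expected sentence, not the answer itself."""
    cleaned = submission.strip().lower()
    return hashlib.sha256(cleaned.encode()).hexdigest() == ANSWER_HASH

print(CIPHERTEXT)                                   # dvvhvvphqw lv uhdoob uhdoob kdug
print(grade("assessment is really really hard"))    # True
print(grade("assessment is realy really hard"))     # False
```

The nice part is that the student knows whether their decryption worked before they ever submit: a correct shift produces readable English, and a broken one produces gibberish.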

Then again, perhaps I expect too much.  Assessment is really, really hard.  I always get an ulcer trying to deal with it.
