Friday, May 30, 2014

Week 5 - Blendkit conclusion


The one bit that hit me this week was the comment that the biggest impact on course quality comes from good faculty training programs, much of them built on peer mentoring and evaluation.  For those of us at Small Liberal Arts Colleges, we don't have that (yet).  We're in the process of building up a group of faculty fellows who will be required to mentor the next batch, but in terms of peer mentoring we're currently lacking.  We have a couple of faculty who have done a lot with flipping, so we have some experience, plus a small store of best-practices knowledge within IT, but we're having to bootstrap most of it.  The Blendkit materials are mostly designed for the partly-online courses that we won't be running, and we've had limited success getting detailed evaluation plans from faculty beyond the always mostly useless student evals.

The faculty fellows will help to some degree, but they are few and the various departments are many.  The classes proposed by faculty in Spanish, Education, Middle-East/Islamic Studies and Chemistry, for example, span a huge range of techniques, course types and learning styles.    I just hope we'll be able to figure out a way to measure how well their various ideas worked out...

Monday, May 19, 2014

Blendkit week 4

Time to be a curmudgeon and summarize this week's reading:
  • Your online and in-class components should be well integrated.
  • There are a number of different ways/technologies that will work.
Ok, class dismissed.

Seriously, that's about the entirety of the chapter.  I'm honestly unsure what I'm supposed to gain from it, other than the fun of actually seeing Second Life mentioned multiple times in a table from 2007 and getting to read sentences like "However, it is important to remember that it is the manner of implementation, rather than any affordance of the modality itself, that will bring about this perceived consistency."

This is one of my major annoyances with much of the educational literature: without really good ways to assess results, it tends to degenerate into statements of the blindingly obvious, tables of activity types matched to somewhat nebulous learning types (often changing from article to article), all of it covered in florid writing.  Assessment is incredibly hard since you can't pry open someone's brain to see what they've actually learned, so very often you end up with feedback based on student evaluations, such as the comments from the Aycock et al. reference where the students complained that the online and F2F portions weren't integrated enough.  How so?  Reading the original article sheds little extra light, other than noting that the instructors' inexperience was more of a problem than the blending.  Even given the students' complaints, do we know whether they did any better in the poorly integrated blended class than previously?  Or whether they did worse than in better-integrated later courses?  The article claims that the faculty saw that "Student interactivity increased", "Student performance improved" and "[students] could accomplish course goals that they couldn't in their traditional course".  Data for any of this?  The article provides none- if I write a paper for a chemistry journal claiming my new synthesis improves yield and enantiomeric purity, I'd better have some numbers backing me up.

Ok, grumping over.  Given that I have nothing to add to actually *solving* the problem I can't hammer at those who are trying for very long.

Friday, May 9, 2014

Assessment in Blendkit

Week 3 this time, and I have to say I was completely underwhelmed by the reading on assessment strategies.

The biggest issue was the complete lack of discussion for those of us physical scientists in the room.  How can you do any form of online assessment when you're dealing with highly technical material, often involving long derivations or problems where a single math error gets you a wrong multiple-choice answer even if you knew the entire process?  Many years ago I wrote a quiz question engine that would randomize the numbers in an equation, which at least solves the "The answer for #4 is B" problem, but it still does a lousy job of helping those folks who just stumble instead of fall.  Back at Va. Tech, when I had classes of 180 intro chem students, I was forced to use multiple-choice in-class exams, but I always offered students the option of a "write it all out" test with no prechosen answers, on which I'd give partial credit.  Five or six would take me up on this, and they always did very well.  (Partly because they were always the best students...)  For PChem I got some great advice from an older professor and just banned calculators during exams.  I don't care about the final number; I only care whether you can get there.
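That old engine is long gone, but the idea is simple enough to sketch.  Here's a minimal, hypothetical version in Python (the question wording, value ranges, and function name are mine, not the original engine's): each student gets their own numbers for the same ideal-gas-law problem, so sharing letter answers is useless while the method still transfers.

```python
import random

def make_ideal_gas_question(seed=None):
    """Generate one randomized ideal-gas-law question.

    Every student sees the same problem type with different numbers,
    so "the answer for #4 is B" stops working; only the method helps.
    """
    rng = random.Random(seed)
    R = 0.08206                            # L·atm/(mol·K)
    n = round(rng.uniform(0.5, 3.0), 2)    # moles of gas
    T = rng.randrange(273, 373)            # temperature in kelvin
    V = round(rng.uniform(5.0, 25.0), 1)   # volume in liters
    answer = round(n * R * T / V, 2)       # pressure in atm, P = nRT/V

    prompt = (f"What is the pressure (in atm) of {n} mol of an ideal "
              f"gas in a {V} L container at {T} K?")
    return prompt, answer

prompt, answer = make_ideal_gas_question(seed=42)
```

Seeding by student ID would make each student's version reproducible for grading, which is about all the bookkeeping the original needed.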

The reading does point out that it's important for students to feel comfortable with the tools needed to respond to a question, and that they have access to the technical resources required.  This is a huge issue for chemistry, organic and physical alike.  Short of scanning pages of handwriting, the tools for writing structures or equations online will be new to students, and they're often very complex and difficult to master.

Things like one-sentence summaries and online discussions are helpful to an extent, but when the students themselves don't know the material it's almost impossible for them to judge the accuracy of the responses.  I saw this quite a bit in the original Stanford AI MOOC: there were a ton of folks who knew the material very well and could explain it, but there were also folks who tried and failed, misleading other students.  Generally they got downvoted, but not always.

The Coursera Game Programming in Python course is the closest to an actual, functioning method I've seen yet.  Each assignment has a *very* detailed rubric with specific points for specific tasks.  Peers grade using this rubric and can add additional comments.  It's clear the instructors spent a ton of time developing these rubrics, and they are *far* more detailed than anything discussed in the readings.  As an alternative, the optional project way back in the AI MOOC also worked well: you had to decrypt a chunk of text and enter the resulting sentence.  It's self-checking- a minor error is obvious, and a major problem doesn't let you get anything at all.
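A rubric that detailed is essentially a data structure, and once it exists the grading step is mechanical.  Here's a small sketch in Python to show the shape of the thing; the criteria and point values are invented for illustration and aren't taken from the actual Coursera rubrics.

```python
# A hypothetical rubric: (criterion, maximum points).  These items
# are made up for illustration, not copied from any real course.
RUBRIC = [
    ("Game window opens and is correctly sized", 1),
    ("Ball moves and bounces off the walls", 3),
    ("Paddles respond to keyboard input", 3),
    ("Score updates when a ball is missed", 2),
    ("Code is commented and readable", 1),
]

def score_submission(marks):
    """Total a peer grader's marks against the rubric.

    `marks` maps criterion -> points awarded; each is clamped to the
    rubric maximum, and missing criteria count as zero.
    Returns (points earned, points possible).
    """
    earned = 0
    for criterion, max_pts in RUBRIC:
        earned += min(marks.get(criterion, 0), max_pts)
    possible = sum(max_pts for _, max_pts in RUBRIC)
    return earned, possible

earned, possible = score_submission({
    "Game window opens and is correctly sized": 1,
    "Ball moves and bounces off the walls": 2,
})
# earned is 3 out of a possible 10
```

The clamping matters with peer graders: nobody can award more than the rubric allows, which is part of why such detailed rubrics keep peer grades consistent.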

Then again, perhaps I expect too much.  Assessment is really, really hard.  I always get an ulcer trying to deal with it.

Thursday, May 1, 2014

Dusting off the blog for BlendKIT

A couple of years between blog posts never hurt anyone, right?

It's now week two of the Blendkit MOOC, and as usual we're being asked to post our reflections on this week's reading.  Rather than thinking about it in the context of my job, while reading I found myself wondering how my learning guitar fits into the structure that was discussed.

In many ways, I'm sort of creating my own "Blended" course- most of the content is already online in the form of detailed websites and videos that cover learning guitar from step one to playing Classical Gas fluently.  Periodically, I sign up for in-person lessons with a nearby instructor.  This method lacks a central "professor", relying instead on a variety of experts such as Justin Sandercoe, who all cover slightly different areas, and on the local instructor to tell me if I'm doing something stupid.

This is probably closest to the Concierge model: the online teachers point out the things I don't know, guide me through some exercises to learn the technique, then give me some songs to practice with.  Once you understand the idea, it's possible to go out and find good exemplar songs to learn- "This solo is basically a set of c-minor arpeggios", and so forth.  Periodically the in-person instructor can point out specific flaws in my technique and suggest additional areas to work on.  Typically I'll go through a set of designed lessons, then spend a while simply learning whatever music I feel like, then return to a set of lessons to work on another specific area.  The combination of online and in-person work fits the Concierge model reasonably closely.

The biggest problem with my current learning is that I lack a good structure for conversation with other students at roughly my level.  Forum postings are possible as an asynchronous method, but for music, instant feedback from others would be valuable.  It's difficult to find folks who are where I am, though- one of my coworkers invited me to play with him, but he's a semi-professional with decades of experience playing out.  I'd merely hold him back.  It's a powerful reminder of how important face-to-face, peer-to-peer interactions are in learning, something I need to remember when I'm the guy in front of the class.