Musings on Instructional Technology
So you've got the enhanced classroom, the laptop, the software, the students... Now what?
Friday, May 30, 2014
Week 5 - Blendkit conclusion
The one bit that hit me this week was the comment that the biggest impact on course quality comes from good faculty training programs, much of it based on peer mentoring and evaluation. For those of us at Small Liberal Arts Colleges, we don't have that (yet). We're in the process of building up a group of faculty fellows who will be required to mentor the next batch, but in terms of peer mentoring we're currently lacking. We have a couple of faculty who have done a lot with flipping, so we have some experience, as well as a small bit of knowledge within IT about best practices, but we're having to bootstrap most of it. The materials within Blendkit are mostly designed for the partly-online courses that we won't be running, and we've had limited success developing detailed evaluation plans with faculty beyond the always mostly useless student evals.
The faculty fellows will help to some degree, but they are few and the various departments are many. The classes proposed by faculty in Spanish, Education, Middle-East/Islamic Studies and Chemistry, for example, span a huge range of techniques, course types and learning styles. I just hope we'll be able to figure out a way to measure how well their various ideas worked out...
Monday, May 19, 2014
Blendkit week 4
Time to be a curmudgeon and summarize this week's reading:
- Your online and in-class components should be well integrated.
- There are a number of different ways/technologies that will work.
Seriously, that's about the entirety of the chapter. I'm honestly unsure what I'm supposed to gain from it, other than the fun of actually seeing Second Life mentioned multiple times in a table from 2007 and getting to read sentences like "However, it is important to remember that it is the manner of implementation, rather than any affordance of the modality itself, that will bring about this perceived consistency."
This is one of my major annoyances with much of the educational literature: without really good ways to assess the results, it tends to degenerate into statements of the blindingly obvious, tables of activity types matched to somewhat nebulous learning types (often changing between articles), all of it covered up with florid writing. Assessment is incredibly hard since you can't pry open someone's brain to see what they actually learned, so very often you end up with feedback based on student evaluations, such as the comments from the Aycock et al. reference, where the students complained that the online and F2F portions weren't integrated enough. How so? Reading the original article sheds little extra light, other than commenting that the instructors' inexperience was more of a problem than the blending. Even given the students' complaints, do we know if they did any better in the poorly integrated blended class than previously? Or did they do worse than in better-integrated, later courses? The article claims that the faculty saw that "Student interactivity increased", "Student performance improved" and "[students] could accomplish course goals that they couldn't in their traditional course". Data for any of this? If so, the article provides none of it. If I write a paper for a chemistry journal claiming my new synthesis improves yield and enantiomeric purity, I'd better have some numbers backing me up.
Ok, grumping over. Given that I have nothing to add toward actually *solving* the problem, I can't hammer at those who are trying for very long.
Friday, May 9, 2014
Assessment in Blendkit
Week 3 this time, and I have to say I was completely underwhelmed by the reading on assessment strategies.
The biggest issue was the complete lack of discussion aimed at us physical scientists in the room. How can you do any form of online assessment when you're dealing with highly technical material, often involving long derivations or problems where a single math error gets you a wrong multiple-choice answer even if you knew the entire process? Many years ago I wrote a quiz question engine that would randomize the numbers in an equation, which at least solves the "The answer for #4 is B" problem, but it still does a lousy job of helping those folks who just stumble instead of fall. Back at Va. Tech, when I had classes of 180 intro chem students, I was forced to use multiple-choice in-class exams, but I always offered students the choice of a "write it all out" test with no prechosen answers, on which I'd offer partial credit. Five or six would take me up on this, and they always did very well. (Partly because they were always the best students...) For PChem I got some great advice from an older professor and just banned calculators during exams. I don't care about the final number; I only care whether you can get there.
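For what it's worth, the "randomize the numbers" idea looks roughly like the sketch below. This is a minimal illustration in Python, not the engine I actually wrote; the ideal gas law question, the per-student seed, and the 2% grading tolerance are all just placeholder assumptions.

```python
import random

R = 0.08206  # ideal gas constant, L*atm/(mol*K)

def make_question(seed):
    """Generate a per-student version of an ideal gas law problem."""
    rng = random.Random(seed)              # seed per student so grading is reproducible
    n = round(rng.uniform(0.5, 3.0), 2)    # moles
    T = rng.randint(273, 373)              # temperature in K
    V = round(rng.uniform(5.0, 25.0), 1)   # volume in L
    answer = n * R * T / V                 # pressure in atm
    prompt = (f"What is the pressure (in atm) of {n} mol of an ideal gas "
              f"at {T} K in a {V} L container?")
    return prompt, answer

def check(student_value, answer, tolerance=0.02):
    """Accept anything within 2% of the true value to allow for rounding."""
    return abs(student_value - answer) <= tolerance * abs(answer)

prompt, answer = make_question(seed="student42")
print(prompt)
print(round(answer, 3), check(round(answer, 2), answer))
```

Since every student gets different numbers, answer sharing dies off, but the underlying problem remains: a student who sets up the problem correctly and slips on the arithmetic still gets zero credit.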
The reading does point out that it's important that students feel comfortable with the tools needed to respond to a question and that they have access to the necessary technical resources. This is a huge issue for chemistry, organic and physical alike. Short of scanning pages of handwriting, the tools for writing structures or equations online are going to be new to students and are often complex and difficult to master.
Things like one-sentence summaries and online discussions are helpful to an extent, but when the students themselves don't know the material, it's almost impossible for them to judge the accuracy of the responses. I saw this quite a bit in the original Stanford AI MOOC: there were a ton of folks who knew the material very well and could explain it, but there were also folks who tried and failed, misleading other students. Generally they got downvoted, but not always.
The Coursera Game Programming in Python course is the closest to an actual, functioning method I've seen yet. Each assignment has a *very* detailed rubric with specific points for specific tasks. Peers grade using this rubric and can add additional comments. It's clear the instructors spent a ton of time developing these rubrics, and they are *far* more detailed than anything discussed in the readings. As an alternative, the optional project way back in the AI MOOC also worked well- you had to decrypt a chunk of text and enter the sentence. It's self-checking: a minor error is obvious, and a major problem doesn't let you get anything at all.
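I don't remember exactly how the AI MOOC implemented that project, but the self-checking shape of the assignment is easy to sketch. Assuming a simple shift (Caesar) cipher purely for illustration:

```python
import string

SECRET_SHIFT = 7  # hidden from students; chosen arbitrarily for this sketch

def shift_cipher(text, shift):
    """Encrypt lowercase letters with a simple Caesar shift; leave everything else alone."""
    letters = string.ascii_lowercase
    table = str.maketrans(letters, letters[shift:] + letters[:shift])
    return text.lower().translate(table)

CIPHERTEXT = shift_cipher("attack the castle at dawn", SECRET_SHIFT)

def submit(plaintext):
    """Grade by re-encrypting the student's answer and comparing it to the puzzle."""
    return shift_cipher(plaintext, SECRET_SHIFT) == CIPHERTEXT

print(CIPHERTEXT)                            # the puzzle the student sees
print(submit("attack the castle at dawn"))   # True: correct decryption
print(submit("attack the castle at noon"))   # False: close, but not right
```

The appeal is what I liked in the original assignment: the grading burden is tiny, and the student can tell on their own whether their approach is basically working or completely off the rails.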
Then again, perhaps I expect too much. Assessment is really, really hard. I always get an ulcer trying to deal with it.
Thursday, May 1, 2014
Dusting off the blog for BlendKIT
A couple of years between blog posts never hurt anyone, right?
It's now week two of the Blendkit MOOC, and as usual we're being asked for a post with our reflections on this week's reading. Rather than thinking about it in the context of my job, while reading I found myself wondering how my learning guitar fits into the structure that was discussed.
In many ways, I'm sort of creating my own "Blended" course- most of the content is already online in the form of detailed websites and videos that cover learning guitar from step one to playing Classical Gas fluently. Periodically, I sign up for in-person lessons with a nearby instructor. This method lacks a central "professor", relying instead on a variety of experts, such as Justin Sandercoe, who all cover slightly different areas, plus the local instructor to tell me if I'm doing something stupid.
This is probably closest to the Concierge model- the online teachers point out the things I don't know, guide me through some exercises to learn the technique, then give me some songs to practice with. Once you understand the idea, it's possible to go out and find good exemplar songs to learn- "This solo is basically a set of C minor arpeggios", and so forth. Periodically the in-person instructor can point out specific flaws in my technique and suggest additional areas to work on. Typically I'll go through a set of designed lessons, spend a while simply learning whatever music I feel like, then return to a set of lessons to work on another specific area. The combination of online and in-person pieces fits the Concierge model reasonably closely.
The biggest problem with my current learning is that I lack a good structure for conversation with other students at roughly my level. Forum postings are possible as an asynchronous method, but for music, instant feedback from others would be valuable. It's difficult to find folks who are where I am- one of my coworkers invited me to play with him, but he's a semi-professional with decades of experience playing out, and I'd merely hold him back. It's a powerful reminder of how important the face-to-face, peer-to-peer interactions in learning are- something I need to remember when I'm the guy in front of the class.
Wednesday, September 8, 2010
Rant time 2; SMART
So we have a new professor who's interested in doing neat things in Music Ed. He'd like to use a SMART board or some similar interface to be able to mark up music and take notes. Great.
The physical layout of the room makes a touch screen or standalone unit not really viable, but SMART markets a wireless slate that he's really excited about using. We order one and install it and get ready for interactive goodness.
The pen works, the highlighter works, the camera does the neat little snapshot animation. Hmm, the Notebook icon doesn't seem to be working. That's a problem- without Notebook you can't save your files, edit them, put up backgrounds (like musical staves) and so forth; the device is close to useless without it. So I go to the SMART site and try to download the software.
"This product is not eligible"
I spent a while going around in circles with tech support yesterday- the Notebook software is *not* included because, and I quote, "People are using the Notebook software with non-SMART hardware."
Hunh? I just bought a $400 piece of SMART hardware and I want to actually, you know, use it. Instead, I'm being told that I can only use the software if I purchase yet another piece of SMART hardware that does come with a license. If you read the web page carefully you'll find hints of that, but given that every other piece of SMART hardware comes with the Notebook software, it's not exactly what you might expect.
Anyone know another good vendor? Because I'm not so sure these folks are very SMART.
Rant time 1; Microsoft
What is it with companies and interesting interfaces?
A couple of professors here are interested in using Microsoft Surface. It looks like a neat idea. Check the price: the commercial version is $12,500. Ouch. But they're developers- we should get a discount, right? MS will sell developers a table for $2,500 *more* than the commercial version. I'm not sure I can publish the academic pricing discount, but let's just say it's pretty skimpy. As in, I spend more on a trip to the grocery store than the academic discount on a $15,000 item. Do they care about this product at all? I remember when NT 4 came out, seeing a student package of Visual Studio for every language plus a full version of NT 4.0 for $90- that's the day I knew OS/2 was dead, since the equivalent boxes of OS/2 software sat next to it and cost over $1,000.
I thought the chant was "Developers! Developers! Developers!"
Thursday, March 25, 2010
On failure
One thing I'm realizing from the NITLE summit so far is that the talks focused on failure are generally much more effective than those about successes. First, they tend to be funny- it's easier to poke fun at yourself when you look foolish. Second, it's easier to draw parallels with failures- the major successes, with all sorts of structure, collaboration, planning and assessment attached to them, can seem awfully daunting when you haven't started down the path, especially when you know the stars won't align as well in your situation. But everyone has some terrible story to tell, and learning how to work around the issues ends up being the really interesting information. Finally, it reinforces the idea that failure is sometimes an option- when you try new things it doesn't always work, but that's not necessarily a bad thing. Experiments often fail- it's part of life, even if we don't want to admit that we're the ones failing, and it shows ways to best cut your losses and move on to something new.
I love failure...