April 21, 2009
What makes good qualitative research?
The debate between "qualitative" and... non-qualitative (it's not all "quantitative"!) research has been going on for many years. Qualitative research includes ethnography and various methods based on interviews and detailed field observation, often of a relatively small number of cases. Typically, qualitative research eschews the more traditional approach to scientific research, described by King, Keohane and Verba (Designing Social Inquiry (1994)):
start out with clear, theoretically anchored hypotheses, pick a sample that will let you test those ideas, and use a pre-specified method of systematic analysis to see if they are right.
Quals claim their work is underappreciated and underfunded; non-quals criticize qualitative work as "unrigorous, unreplicable, unfalsifiable" (John Comaroff, quoted in Michèle Lamont and Patricia White, Workshop on Interdisciplinary Standards for Systematic Qualitative Research (Washington: National Science Foundation, 2009), available at http://www.nsf.gov/sbe/ses/soc/ISSQR_workshop_rpt.pdf, p. 37).
Howard S. Becker, one of the leading qualitative sociologists, recently wrote an essay elucidating this debate and offering some criteria for good qualitative research. He bases it on a review of two NSF reports (one released in March 2009) on the use of qualitative methods. (This is the same Becker known to many of us as the author of Writing for Social Scientists.)
I enjoyed reading this, as someone who has long struggled to understand what criteria are useful for judging whether qualitative research is "good" or not. What constitutes a contribution to knowledge? While Becker's criteria are, unavoidably, a bit, well, qualitative, he offers specific characteristics to look for, and I find his list convincing, at least as a set of necessary conditions, if not sufficient ones.
My main beef comes down to this: qualitative scholars often describe their work as "exploratory", and sometimes say that its purpose is to generate "grounded theory". I'm all for creative insights and hypothesizing. But how much of a contribution to knowledge is it -- especially if the hypotheses can't even stand alone as rigorously true logical deductions (which may be surprising and enlightening on their own) -- if no one ever follows up the exploratory hypothesis generation to actually test, with reliable methods, whether those hypotheses are supported by sufficient, and sufficiently controlled, evidence to change our priors?
April 07, 2009
I've been running into some discussions about "connectivist teaching". The term apparently was coined by George Siemens. Siemens and others refer to it as a "learning" theory, but Plon Verhagen points out that it is not so much a theory about how people learn as it is a method of pedagogy for the digital age.
The central idea seems to be to teach through a process of having the learner build a network of nodes and connections, drawing together various resources and various ideas. In practice, the focus seems to be on "know-where" instead of "know-how" or "know-what": learning where to find information, and how to move through a network of varied sources, assessing quality and reliability as you go.
Here is a simple class project presented as a YouTube video that illustrates the practice.
George Siemens, "Connectivism: A Learning Theory for the Digital Age", International Journal of Instructional Technology and Distance Learning, Vol. 2, No. 1, Jan 2005.