Tagged: research methods

4. On Choosing Methods; Statistics are giving me a mild Kurtosis

During our last class meeting, John mentioned that the excuse most people give for their inability to work with numbers is that they just kind of don’t think in that special, uniquely logical way that lends itself to crunching numbers. Never have. Never will. John noted that this is a pure fallacy; if you can set up a microwave, you too can calculate. But would you believe that just last week I tried to set up a newly Best-Bought microwave that came with the most unnerving, bite-your-fingernails-in-frustration kind of directions? (I am also utterly unable to perform any kind of math that transcends algebra, so go figure.)

BUT: Our readings this week (viz. Charney) have done an exquisite job unpacking the bad rap empirical methods have received in academia, especially as they are treated in the explicitly unscientific realms. While this demystification of empirical studies was elucidating, I found Sullivan and Porter’s attempt to give a phronetic and heuristic treatment to theory, practice, and method a little less thoroughgoing, especially when it came to their discussion of method.

Prior to this post I was really only (consciously) aware of two research methods: ethnography and census. Sullivan spends a lot of time writing about the lofty multimodality method, method-driven research, and problem-driven research, which, of course, also involves the opportunity to appropriate a kind of method, but she spends little time discussing the actual method of choosing a method. She writes, “…Methods are given procedures, well-established and trustworthy bases for observing practice, and that properly applying method to practice can help us verify or generate models and theories” (307). Yes, methods complicate theories, test their limits, and help us gain easier access to knowledge that would otherwise be unavailable through theory alone, but how do we know exactly which method to apply? In other words, is there a theory that helps us deduce which method to choose, or is the choice really only a matter of sheer common sense?

If I had to take a stab at guessing, I’d say the first question to help refine the choice is the qualitative vs. quantitative question. Common sense reigns here: if there are numbers involved, quantitative; if you’re interested in researching non-numeric data, qualitative. (This all sounds basic, but I’m trying to think/write through this.) After this first line of questioning, I’m a bit hazy. I think that for students with substantial backgrounds in all things literary, qualitative research seems completely innocuous; quantitative methodology certainly seems a bit more intimidating. I found here and here an aesthetically amateurish but informative glossary of research methods that has helped clarify, for me at least, what options are available. (And with our research proposals due early next month, it might not be a bad idea to brush up on some of these methods.)
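Since this post’s title threatens a mild kurtosis, here is what the quantitative side can look like at its most basic: a handful of descriptive statistics computed over made-up scores. This is only a sketch (the numbers are invented, and numpy/scipy are just one common toolset), but it suggests John is right that calculating is closer to microwave setup than to some uniquely logical cast of mind:

    # Descriptive statistics over made-up scores; purely illustrative.
    import numpy as np
    from scipy import stats

    scores = np.array([72, 85, 91, 68, 77, 83, 95, 74, 88, 79, 81, 70])

    print("mean:    ", round(np.mean(scores), 2))
    print("std dev: ", round(np.std(scores, ddof=1), 2))  # sample standard deviation
    print("skewness:", round(stats.skew(scores), 2))
    print("kurtosis:", round(stats.kurtosis(scores), 2))  # Fisher's definition: normal curve = 0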

Maybe it’s unfair to critique this essay by griping that Sullivan, in an effort to bring some heuristic humanity to research, doesn’t closely examine which methods lend themselves to phronetic study; maybe wishing her to do so distastefully expects the essay to transcend the scope to which it has been re(de)fined. But a decision to include choice, as a heuristic element, as a serious determiner of how to establish and exercise a successful praxis via research should also address what choices we should make as researchers when choosing a method. To make this even more complicated, Sullivan quotes E. W. Eisner and A. Peshkin: “What constitutes a problem is not independent of the methods one knows how to use. Few of us seek problems we have no skill in addressing. What we know how to do is what we usually try to do” (308). Does this mean that the more methods we’re aware of, the more problems we’d be likely not only to address but to recognize in the first place? If so, then in the meantime, until our proposals are due, I’ll be brushing up on the quality and quantity of my methods.

Sullivan, Patricia, and James E. Porter. “On Theory, Practice, and Method: Toward a Heuristic Research Methodology for Professional Writing.” Central Works in Technical Communication. Eds. Johndan Johnson-Eilola and Stuart A. Selber. New York: Oxford University Press, 2004. 300-313. Print.

Separated at Birth: Theoretical Sampling Methods and Methodology as Explicitly Stated Praxis

I am quite the novice when it comes to research methods. I never took a research methods course as an undergraduate, and I have not yet taken one in graduate school (and judging by one of Eric’s comments on his blog post, I may not have the time or funding to take such a class at WVU). Thus it pleases me to see some introductory texts on qualitative and quantitative methodological concepts amongst this week’s readings, like Koerber and McMichael’s “primer” on “Qualitative Sampling Methods” (454) and Slavin’s “A Practical Guide to Statistics.” It seems particularly important that technical communicators, or at least those of us wishing to become technical communicators, learn more about research methods, since the traditional view of the field, as Lay might argue, is in a similar stratosphere to science due to its affiliation “with the quantitative and objective scientific method” (151).

I also noticed a couple of similarities between two of the “methods” discussed in this week’s readings. The theoretical sampling method that the Koerber and McMichael paper introduced to me (465) appeared much like the “method as explicitly argued praxis” from the Sullivan and Porter work (310). “Praxis,” according to Sullivan and Porter, is a “practical rhetoric” (305). When praxis is applied to research (and I’m assuming that Sullivan and Porter mean both qualitative and quantitative research), researchers acknowledge that multiple community frameworks/ideologies, and not just the one implied by a single methodology, are applicable to their studies. How researchers actually succeed not only in “accept[ing] that the methods [they] normally choose to use provide powerful filters through which [they] view the world” (Sullivan and Porter 310) but also in “becom[ing] critical of methodology” (311) and applying praxis comes about in two ways, as far as I can tell: researchers alter their methodologies according to the context of their situations, and they explicitly tell their readers that they veered from a single methodological structure when composing their research.

Koerber and McMichael’s theoretical sampling method, though discussed by those authors as a sampling practice for qualitative research, seems to rely on the context of a study: “sampling emerge[s] along with the [researcher’s] study itself” (465). Sampling methods change as a study moves along, just as they should in “method as explicitly argued praxis.”

In general, the two ideas matched up fairly well. However, when I took a second look at the theoretical sampling method, I realized that it did not resemble methodology as praxis as closely as I had imagined; the main difference I noticed was that the two ideas’ critical stances are much more dissimilar than I had initially thought.

Koerber and McMichael take a relatively casual stance towards critical research. During research, when a general direction for the data starts to take shape, those who employ theoretical sampling methods will look for data that disproves any early assumptions (Koerber and McMichael 465). As I already mentioned, Sullivan and Porter hope that researchers become “critical of methodology,” and researchers can do this in a few ways. One of them, as posited by Sullivan and Porter, is simply to refuse to follow a method strictly by the numbers. That is not to say that researchers can be critical just by haphazardly changing the courses of their studies; the contexts of their situations must demand, as they usually do, that researchers stray from the methodological norm (Sullivan and Porter 308). Stated less simply, a researcher can be critical of methodology by “see[ing] the activity [of following the generally accepted framework for a methodology] as at least in part ‘constructing methodology’” (Sullivan and Porter 311).
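To make that disconfirming move concrete, here is a minimal, hypothetical sketch of the theoretical-sampling loop as I understand it from Koerber and McMichael: the sample grows alongside the analysis, and once a hunch forms, the researcher deliberately recruits cases that might disprove it. Everything below (the participant pool, the crude tally standing in for real analysis) is invented for illustration, not drawn from the readings:

    # Hypothetical sketch of theoretical sampling (all data and names invented).
    # The sample grows alongside the analysis; once a pattern emerges, the
    # researcher deliberately seeks out cases that could disconfirm it.
    pool = [
        {"id": 1, "role": "engineer", "prefers_templates": True},
        {"id": 2, "role": "engineer", "prefers_templates": True},
        {"id": 3, "role": "writer",   "prefers_templates": True},
        {"id": 4, "role": "writer",   "prefers_templates": False},
        {"id": 5, "role": "manager",  "prefers_templates": False},
    ]

    sample, hunch = [], "no clear pattern"
    while pool:
        sample.append(pool.pop(0))
        # "Analysis": a crude running tally standing in for real qualitative coding.
        yes = sum(p["prefers_templates"] for p in sample)
        if yes > len(sample) / 2:
            hunch = "most prefer templates"
            # The theoretical move: hunt for a case that could disprove the hunch.
            challenger = next((p for p in pool if not p["prefers_templates"]), None)
            if challenger is not None:
                pool.remove(challenger)
                sample.append(challenger)

    print(hunch, "- sampled in order:", [p["id"] for p in sample])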

That last quote, though, shows that the two ideas/methods/theories, while not quite identical twins, are still closely related. By using a phrase like “constructing methodology,” Sullivan and Porter stress that all researchers are complicit in the social construction of the conventions of methodologies (308). Furthermore, the socially constructed ideas implicit in the term “methodology” can be subsumed under the umbrella term of theory; Sullivan and Porter outright say that they view “research methodologies as theories.” Then again, Koerber and McMichael might think of things the same way: they write that sampling changes according to new directions in the data, but also that researchers “[adjust] the theory [they’re employing] according to trends that appear in the data” (465), treating the term “theory” like a synonym for “sampling method.”

Works Referenced

Koerber, Amy, and Lonie McMichael. “Qualitative Sampling Methods: A Primer for Technical Communicators.” Journal of Business and Technical Communication 22.4 (2008): 454-473. Web.

Lay, Mary M. “Feminist Theory and the Redefinition of Technical Communication.” Central Works in Technical Communication. Eds. Johndan Johnson-Eilola and Stuart A. Selber. New York: Oxford University Press, 2004. 146-159. Print.

Sullivan, Patricia, and James E. Porter. “On Theory, Practice, and Method: Toward a Heuristic Research Methodology for Professional Writing.” Central Works in Technical Communication. Eds. Johndan Johnson-Eilola and Stuart A. Selber. New York: Oxford University Press, 2004. 300-313. Print.

Slavin. “A Practical Guide to Statistics.”

4: Up On a Pedestal—Qualitative or Quantitative?

“The assumption underlying the treatment of qualitative sampling in many current technical communication texts seems to be that qualitative research can be interesting to readers in our own field but that it will never carry weight with larger audiences outside our field because its findings cannot be generalized in the same way that those of quantitative research can. We believe that this imbalance—an imbalance that seems to persist despite researchers’ calls for complementarity between quantitative and qualitative approaches—is at the heart of the gap that exists in regard to qualitative sampling.” (Koerber and McMichael 470)

Koerber and McMichael’s conclusion (470) struck me as particularly illustrative of at least one of this week’s themes: the binary between quantitative and qualitative. I feel the need to play Devil’s advocate, however, by arguing that their conclusion seems too general. Surely there are more people out there—scholars, theorists, researchers, scientists, writers—who believe there is more good in the complementarity of qualitative and quantitative studies than there is in forcing a gap between the two. Surely even the most practically minded scientist would admit that qualitative studies only add to their quantitative work, opening new avenues of research, or at the very least provoking new questions to be explored.

I was also struck, as I have been with some of our other readings, by these authors’ and their colleagues’ reiterations of how difficult some technical writing concepts are to teach: “But when they discussed qualitative research techniques such as interviews, focus groups, ethnography, and textual analysis, they did not describe procedures for sampling systematically or provide terms to help novice researchers become familiar with qualitative sampling. … Although our field has many examples of excellent qualitative research, this research lacks a consistent vocabulary for sampling methods, so teaching or discussing effective qualitative sampling in a systematic way was difficult” (Koerber and McMichael 457, my emphasis). I imagine that reason more than any other could explain, at least in part, why it is so necessary for those in technical writing to continue working toward a (re)definition of the field and of the technical writer’s role. I did think, though, that Koerber and McMichael did a fine job of describing the “three major categories of sampling…convenience, purposeful, and theoretical” (462).
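As a note to self for the research proposal, here is a small, hypothetical sketch contrasting those three categories. The participant pool and the selection criteria are invented; the point is only how differently each category decides who ends up in the sample:

    # Invented participant pool to contrast Koerber and McMichael's three
    # sampling categories (462): convenience, purposeful, and theoretical.
    import random

    random.seed(0)
    pool = [{"id": i,
             "role": random.choice(["writer", "engineer", "manager"]),
             "years": random.randint(1, 20)} for i in range(50)]

    # Convenience: whoever is easiest to reach (here, simply the first five).
    convenience = pool[:5]

    # Purposeful: chosen deliberately against criteria fixed in advance.
    purposeful = [p for p in pool if p["role"] == "writer" and p["years"] >= 10][:5]

    # Theoretical: selection driven by the emerging analysis; e.g., if early
    # interviews suggest experience matters, recruit the least experienced
    # cases next to test that hunch.
    theoretical = sorted(pool, key=lambda p: p["years"])[:5]

    print([p["id"] for p in convenience])
    print([p["id"] for p in purposeful])
    print([p["id"] for p in theoretical])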

Charney pulled me into her camp when she stated in her introductory paragraphs: “It seems absurd to assume that anyone conducting a qualitative analysis or ethnography must be compassionate, self-reflecting, creative, and committed to social justice and liberation. Or that anyone who conducts an experiment is rigid and unfeeling and automatically opposes liberatory, feminist, or postmodernist values. But such assumptions underlie the current critiques…” (283). Indeed, the tension between qualitative and quantitative cannot be so simple as this, so black and white. It is in fact “absurd” to me that researchers on both sides, or those outside of either camp, would generalize so drastically, drawing a line in the sand between two arguably complementary methods of research and study. But for those in the sciences, or those who favor quantitative methods, or those who are generally just more practically minded and less creatively minded, how do we persuade them to loosen their vise-like grip on the need for answers, on the need for evidence? How do we convince them that neither quantitative nor qualitative methods can necessarily “deliver up authority” (Charney 283)?

In an attempt (weak though it may be) to draw some sort of connection to, or offer some sort of insight into, Slavin’s “Practical Guide to Statistics,” I question Linda Flower’s position (as quoted by Charney) “that statistical evidence has meaning only as part of a cumulative, communally constructed argument” (287). To me, this sounds a lot like how we’ve been defining “knowledge” in our class discussions these past weeks, but it also makes me question the validity of something so seemingly definite as a statistic derived from cold, hard numerical facts. How can a statistic have meaning only through communal construction?
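One concrete way to see Flower’s point, borrowed from outside our readings: Anscombe’s well-known quartet of datasets was engineered so that several very different data shapes share nearly identical summary statistics. The sketch below uses the first two of the four sets; the correlation coefficient is the same “cold hard number” in both cases, yet without the surrounding argument about what the data actually look like, it tells us almost nothing:

    # Anscombe's quartet, first two datasets: identical summary statistics,
    # entirely different data. The statistic alone underdetermines the story.
    import numpy as np

    x  = np.array([10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5])
    y1 = np.array([8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68])
    y2 = np.array([9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74])

    for name, y in [("scattered, roughly linear", y1), ("a perfect parabola", y2)]:
        print(f"{name}: mean = {y.mean():.2f}, r = {np.corrcoef(x, y)[0, 1]:.3f}")
    # Both lines report mean 7.50 and r 0.816 despite utterly different data.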

Finally, I can’t help but observe that sometimes, ever so briefly in our readings, there seem to be flashes of cattiness between scientists and compositionists, or tech writers and engineers, or professionals and academics, or what have you. Charney writes: “Compositionists readily assume that disciplines that adopt scientific methods do so for reflected glory and access to institutional power” (288). If this is true, which I venture to say it is more often than not, who is at fault here—the two parties duking it out for that prized position atop the pedestal? Or the third party—i.e., the institution—that has set this competition in motion, placing more weight and more value on results garnered from the scientific method than on those gathered from qualitative research?


WORKS CITED

Charney, Davida. “Empiricism Is Not a Four-Letter Word.” Central Works in Technical Communication. Eds. Johndan Johnson-Eilola and Stuart A. Selber. New York: Oxford University Press, 2004. 281–299. Print.

Koerber, Amy, and Lonie McMichael. “Qualitative Sampling Methods: A Primer for Technical Communicators.” Journal of Business and Technical Communication 22.4 (2008): 454-473. Sage Publications. Web. 4 June 2012.