
4: Up On a Pedestal—Qualitative or Quantitative?

The assumption underlying the treatment of qualitative sampling in many current technical communication texts seems to be that qualitative research can be interesting to readers in our own field but that it will never carry weight with larger audiences outside our field because its findings cannot be generalized in the same way that those of quantitative research can. We believe that this imbalance—an imbalance that seems to persist despite researchers’ calls for complementarity between quantitative and qualitative approaches—is at the heart of the gap that exists in regard to qualitative sampling.

Koerber’s conclusion (470) struck me as particularly illustrative of at least one of this week’s themes: the binary between quantitative and qualitative. I feel the need to play Devil’s advocate, however, by arguing that her conclusion seems too general. Surely, there are more people out there—scholars, theorists, researchers, scientists, writers—who believe there is more good in the complementarity of qualitative and quantitative studies than there is in forcing a gap between the two. Surely, even the most practically minded scientist would admit that qualitative studies only add to their quantitative work, opening new avenues of research, or at the very least provoking new questions to be explored.

I was also struck, as I have been with some of our other readings, by these authors’ and their colleagues’ reiterations of how difficult some technical writing concepts are to teach: “But when they discussed qualitative research techniques such as interviews, focus groups, ethnography, and textual analysis, they did not describe procedures for sampling systematically or provide terms to help novice researchers become familiar with qualitative sampling. … Although our field has many examples of excellent qualitative research, this research lacks a consistent vocabulary for sampling methods, so teaching or discussing effective qualitative sampling in a systematic way was difficult” (Koerber 457, my emphasis). I imagine that reason more than any other could explain, at least in part, why it is so necessary for those in technical writing to continue working toward a (re)definition of the field and of the technical writer’s role. I did think, though, that Koerber did a fine job of describing the “three major categories of sampling…convenience, purposeful, and theoretical” (462).

Charney pulled me into her camp when she stated in her introductory paragraphs: “It seems absurd to assume that anyone conducting a qualitative analysis or ethnography must be compassionate, self-reflecting, creative, and committed to social justice and liberation. Or that anyone who conducts an experiment is rigid and unfeeling and automatically opposes liberatory, feminist, or postmodernist values. But such assumptions underlie the current critiques…” (283). Indeed, the tension between qualitative and quantitative cannot be as simple as this, so black and white. It is in fact “absurd” to me that researchers on both sides, or those outside of either camp, would generalize so drastically, drawing a line in the sand between two arguably complementary methods of research and study. But for those in the sciences, or who favor quantitative methods, or who are generally more practically than creatively minded, how do we persuade them to loosen their vise-like grip on the need for answers, on the need for evidence? How do we convince them that neither quantitative nor qualitative methods can necessarily “deliver up authority” (Charney 283)?

In an attempt (weak though it may be) to draw some sort of connection to, or offer some sort of insight into, Slavin’s “Practical Guide to Statistics,” I question Linda Flower’s position (as quoted by Charney) “that statistical evidence has meaning only as part of a cumulative, communally constructed argument” (287). To me, this sounds a lot like how we’ve been defining “knowledge” in our class discussions these past weeks, but it also makes me question the validity of something so seemingly definite as a statistic derived from cold, hard numerical facts. How can a statistic have meaning only through communal construction?

Finally, I can’t help but observe that sometimes, ever so briefly in our readings, there seem to be elements of such cattiness revealed between scientists and compositionists, or tech writers and engineers, or professionals and academics, or what have you. Charney writes: “Compositionists readily assume that disciplines that adopt scientific methods do so for reflected glory and access to institutional power” (288). If this is true, which I venture to say it is more often than not, who is at fault here—the two parties duking it out for that prized position atop the pedestal? Or the third party—i.e., the institution—that has set this competition in motion, placing more weight and more value on results garnered from the scientific method than on those gathered from qualitative research?



Charney, Davida. “Empiricism Is Not a Four-Letter Word.” Central Works in Technical Communication. Eds. Johndan Johnson-Eilola and Stuart A. Selber. New York: Oxford University Press, 2004. 281–299. Print.

Koerber, Amy, and Lonie McMichael. “Qualitative Sampling Methods: A Primer for Technical Communicators.” Journal of Business and Technical Communication 22.4 (2008): 454–473. Sage Publications. Web. 4 June 2012.