
Separated at Birth: Theoretical Sampling Methods and Methodology as Explicitly Stated Praxis

I am quite the novice when it comes to research methods. I never took a research methods course as an undergraduate, and I have not yet taken one in graduate school (and judging by one of Eric’s comments from his blog post, I may not have the time or funding to take such a class at WVU). Thus it pleases me to see some introductory texts on qualitative and quantitative methodological concepts amongst this week’s readings, like Koerber and McMichael’s “primer” on “Qualitative Sampling Methods” (454) and Slavin’s “A Practical Guide to Statistics.” It seems particularly important that technical communicators, or at least those people, like me, wishing to become technical communicators, learn more about research methods, since the traditional view, as Lay might argue, places the field in a similar stratosphere as science due to its affiliation “with the quantitative and objective scientific method” (151).

I also noticed a couple of similarities between two of the “methods” discussed in this week’s readings. The theoretical sampling method that the Koerber and McMichael paper introduced to me (465) appeared much like the “method as explicitly argued praxis” from the Sullivan and Porter work (310). “Praxis,” according to Sullivan and Porter, is a “practical rhetoric” (305). When praxis is applied to research (and I’m assuming that Sullivan and Porter mean both qualitative and quantitative research), researchers acknowledge that multiple community frameworks/ideologies, and not just the one implied by the use of a single methodology, are applicable to their studies. How researchers actually succeed in not only “accept[ing] that the methods [they] normally choose to use provide powerful filters through which [they] view the world” (Sullivan and Porter 310) but also “becom[ing] critical of methodology” (311) and applying praxis comes about in two ways, as far as I can tell: researchers alter their methodologies according to the context of their situations, and they explicitly tell their readers that they veered from a single methodological structure when composing their research.

Koerber and McMichael’s theoretical sampling method, though discussed by those authors as a sampling practice for qualitative research, seems to rely on the context of a study – “sampling emerge[s] along with the [researcher’s] study itself” (465). Sampling methods change as a study moves along, just as they should in “method as explicitly argued praxis.”

In general, the two ideas matched up fairly well. However, when I took a second look at the theoretical sampling method, I realized that it did not resemble methodology as praxis as closely as I had imagined. The main reason I noticed a difference between the two theories was that their critical stances were much more dissimilar than I had initially thought.

Koerber and McMichael take a relatively casual stance towards critical research. During research, when a general direction for the data starts to formalize, those who employ theoretical sampling methods will look for data that disproves any early assumptions (Koerber and McMichael 465). As I already mentioned, Sullivan and Porter hope that researchers become “critical of methodology,” but researchers can do this in a few ways. One of them, as posited by Sullivan and Porter, is to simply refuse to follow a method strictly by the numbers. That is not to say that researchers can be critical just by haphazardly changing the courses of their studies; the contexts of their situations must demand, as they usually do, that researchers stray from the methodological norm (Sullivan and Porter 308). Stated less simply, a researcher can be critical of methodology by “see[ing] the activity [of following the generally accepted framework for a methodology] as at least in part ‘constructing methodology’” (Sullivan and Porter 311).

That last quote, though, shows that the two ideas/methods/theories, while not quite identical twins, are still closely related. By using a phrase like “constructing methodology,” Sullivan and Porter stress that all researchers are complicit in the social construction of the conventions of methodologies (308). Furthermore, the socially constructed ideas implicit in the term “methodology” can be subsumed under the umbrella term of theory – Sullivan and Porter outright say that they view “research methodologies as theories.” Then again, Koerber and McMichael might think of things the same way: they write that sampling changes according to new directions in the data, but also that researchers “[adjust] the theory [they’re employing] according to trends that appear in the data” as well (465), treating the term “theory” like a synonym for “sampling method.”

Works Referenced

Koerber, Amy, and Lonie McMichael. “Qualitative Sampling Methods: A Primer for Technical Communicators.” Journal of Business and Technical Communication 22.4 (2008): 454–473. Sage Publications. Web. 4 June 2012.

Lay, Mary M. “Feminist Theory and the Redefinition of Technical Communication.” Central Works in Technical Communication. Eds. Johndan Johnson-Eilola and Stuart A. Selber. New York: Oxford University Press, 2004. 146–159. Print.

Porter, James E., and Patricia Sullivan. “On Theory, Practice, and Method: Toward a Heuristic Research Methodology for Professional Writing.” Central Works in Technical Communication. Eds. Johndan Johnson-Eilola and Stuart A. Selber. New York: Oxford University Press, 2004. 300–313. Print.

Slavin. “A Practical Guide to Statistics.”

4: Up On a Pedestal—Qualitative or Quantitative?

The assumption underlying the treatment of qualitative sampling in many current technical communication texts seems to be that qualitative research can be interesting to readers in our own field but that it will never carry weight with larger audiences outside our field because its findings cannot be generalized in the same way that those of quantitative research can. We believe that this imbalance—an imbalance that seems to persist despite researchers’ calls for complementarity between quantitative and qualitative approaches—is at the heart of the gap that exists in regard to qualitative sampling. (Koerber and McMichael 470)

Koerber and McMichael’s conclusion (470) struck me as particularly illustrative of at least one of this week’s themes: the binary between quantitative and qualitative. I feel the need to play devil’s advocate, however, by arguing that their conclusion seems too general. Surely, there are more people out there—scholars, theorists, researchers, scientists, writers—who believe there is more good in the complementarity of qualitative and quantitative studies than there is in forcing a gap between the two. Surely, even the most practically minded scientist would admit that qualitative studies only add to their quantitative work, opening new avenues of research, or at the very least provoking new questions to be explored.

I was also struck, as I have been with some of our other readings, by these authors’ and their colleagues’ reiterations of how difficult some technical writing concepts are to teach: “But when they discussed qualitative research techniques such as interviews, focus groups, ethnography, and textual analysis, they did not describe procedures for sampling systematically or provide terms to help novice researchers become familiar with qualitative sampling. … Although our field has many examples of excellent qualitative research, this research lacks a consistent vocabulary for sampling methods, so teaching or discussing effective qualitative sampling in a systematic way was difficult” (Koerber and McMichael 457, my emphasis). I imagine that reason more than any other could explain, at least in part, why it is so necessary for those in technical writing to continue working toward a (re)definition of the field and of the technical writer’s role. I did think, though, that Koerber and McMichael did a fine job of describing the “three major categories of sampling…convenience, purposeful, and theoretical” (462).

Charney pulled me into her camp when she stated in her introductory paragraphs: “It seems absurd to assume that anyone conducting a qualitative analysis or ethnography must be compassionate, self-reflecting, creative, and committed to social justice and liberation. Or that anyone who conducts an experiment is rigid and unfeeling and automatically opposes liberatory, feminist, or postmodernist values. But such assumptions underlie the current critiques…” (283). Indeed, the tension between qualitative and quantitative cannot be as simple as this, as black and white. It is in fact “absurd” to me that researchers on both sides, or those outside of either camp, would generalize so drastically, drawing a line in the sand between two arguably complementary methods of research and study. But for those in the sciences, those who favor quantitative methods, or those who are generally just more practically minded and less creatively minded, how do we persuade them to loosen their vice-like grip on the need for answers, on the need for evidence? How do we convince them that neither quantitative nor qualitative methods can necessarily “deliver up authority” (Charney 283)?

In an attempt (weak though it may be) to draw some sort of connection to, or offer some sort of insight into, Slavin’s “Practical Guide to Statistics,” I question Linda Flower’s position (as quoted by Charney) “that statistical evidence has meaning only as part of a cumulative, communally constructed argument” (287). To me, this sounds a lot like how we’ve been defining “knowledge” in our class discussions these past weeks, but it also makes me question the validity of something so seemingly definite as a statistic derived from cold, hard numerical facts. How can a statistic have meaning only through communal construction?

Finally, I can’t help but observe that sometimes, ever so briefly in our readings, there seem to be elements of cattiness revealed between scientists and compositionists, or tech writers and engineers, or professionals and academics, or what have you. Charney writes: “Compositionists readily assume that disciplines that adopt scientific methods do so for reflected glory and access to institutional power” (288). If this is true, which I venture to say it is more often than not, who is at fault here—the two parties duking it out for that prized position atop the pedestal? Or the third party—i.e., the institution—that has set this competition in motion, placing more weight and more value on results garnered from the scientific method than on those gathered from qualitative research?


WORKS CITED

Charney, Davida. “Empiricism Is Not a Four-Letter Word.” Central Works in Technical Communication. Eds. Johndan Johnson-Eilola and Stuart A. Selber. New York: Oxford University Press, 2004. 281–299. Print.

Koerber, Amy, and Lonie McMichael. “Qualitative Sampling Methods: A Primer for Technical Communicators.” Journal of Business and Technical Communication 22.4 (2008): 454–473. Sage Publications. Web. 4 June 2012.