POTTER 1: When using numerical evidence, qualitative researchers often rely on numerical properties; for example, they cite Gallup surveys or Nielsen ratings as secondary sources to make a point. If we follow the argument that translating concepts or attitudes into numbers decontextualizes meaning, using numerical properties is paradoxical. Why would purely qualitative researchers trust Gallup, Census, or NES data more than their own data collection? Also, some researchers avoid using numbers completely. For example, some say that using test scores to evaluate complex cognitive tasks is useless. I understand the point, but then how can we evaluate large policies? What do they suggest?
POTTER 2: Potter describes unknown collaboration as occurring when authors share credit but do not specify who did what (the division of labor). Should we state who did what in our collaborative research? What's the proper way to do it?
POTTER 3: In inductive research, Anderson (cited in Potter) asserts that qualitative researchers do not begin with a theory or hypotheses; instead, they begin with a natural curiosity to learn more about the who, what, and when of something. Is there such a novel, naïve curiosity? Something must have led the researcher to explore a certain phenomenon. He or she probably has some expectations. Feminist studies, for example, always begin with the expectation of unequal and gendered relations of power.
FLICK 1: Flick discusses comparative studies, but I think the author missed an important tenet of comparative research designs: the difference between most similar and most different systems designs. A most similar systems design refers to the choice of cases that are more or less the same in every aspect except the one of interest to the researcher. For example, studying the influence of East Coast and West Coast mentality on news coverage of women by comparing the New York Times with the Los Angeles Times (both are newspapers of record, influential, operate in big cities, and so forth, but are located in different geographical regions). A most different systems design, in contrast, refers to the study of cases that differ in most aspects but have one or two commonalities that explain a similar outcome. For instance, studying how pervasive racial stereotyping is in the coverage of crime stories by analyzing a big national newspaper, a local TV station, and a citizen journalism outlet (all three reach different markets and have different formats, resources, and so on, but share similar news values and thus cover crime in a racialized way).
FLICK 2: Flick explains biographical research as a retrospective research design in which past events are analyzed with respect to their meaning for individuals. But how do we apply this design in a communication environment? I'm particularly curious because people's memory is so fickle with regard to media experiences and communication patterns (perhaps because we are constantly communicating, and media are so pervasive that remembering a particular episode is hard). In other words, what is it about biographical research that could help us learn something unique in communication research?
FLICK 3: In the section on generalization goals and representational goals of qualitative research, Flick mentions that presentation goals should be considered when conducting qualitative research (e.g., will you write an essay based on your results, or a narrative account of your findings?). But I was wondering how consideration of the final product affects the data collection stage. I'd suppose that if I'm planning to write a book or a dissertation, I'm going to collect data differently than if I'm just planning to write an article for a conference and/or journal. At the least, the breadth of the data collection is going to be different. Or not?
MANNING & CULLUM-SWAN 1: Manning and Cullum-Swan's (p. 252) definition of a sign as "something that represents or stands for something else in the mind of someone" reminded me of the concept of schema and of network models of memory activation (one idea connected to another through networks of related concepts). It always surprises me when I find that different literatures refer to the same phenomena and define them in similar terms, yet neglect to acknowledge each other or see the connection between them. It's the same with discourse analysis: I couldn't help but think the whole time about framing analysis. Yet this connection was made explicit only once or twice.
VAN DIJK 1: van Dijk argues that the discourse approach in media research pays special attention to the ideological and political dimensions of media messages. Why is this? Is it because discourse analysis has been related to these approaches in the past? Because it should adopt a critical or political-economic stance? What about using discourse analysis for other purposes not related to ideology or politics?
VAN DIJK 2: van Dijk says that most of our social and political knowledge and beliefs about the world derive from news reports. What about other sources, particularly other people? Why not adopt discourse analysis to study people's communicative behaviors? Ever since the 1950s we have known that interpersonal communication is more influential and persuasive than media messages. Why is it that discourse analysis has not been employed as frequently to study people's everyday talk? To what degree is this due to the ease of access to transcribed news reports (e.g., via LexisNexis)?
VAN DIJK 3: "The analysis of the 'unsaid' is sometimes more revealing than the study of what is actually expressed in the text" (van Dijk, p. 11). True, and that's the main shortcoming of quantitative content analyses. But how can two researchers agree on what is unsaid? The world of the unsaid is vast and open, as opposed to the world of the said. The said is verifiable; the unsaid is not. Is it more an act of faith to state what the unsaid part of the text is? Try this simple exercise with the following sentence: "Obama is the _____ president," said Bill O'Reilly. How many of you think the missing word is negative (e.g., "worst")? And what if the missing words were just "United States"? In any case, I'm not saying that manifest content is or should be the only aspect of a text worth analyzing. But I'd be very careful about defining discourse analysis as a research tool for the analysis of unsaid content.
GROSSBERG 1: In the interview with Stuart Hall, he mentions that there is a difference between asserting that with postmodernism meaning does not exist and agreeing that postmodernism brings multiple meanings, an "endlessly sliding chain of significance." What are the implications of this controversy for the way we approach the study of meaning? Do we accept that ours is one way of "reading" a text, one among many others, or should we strive for a "consensus," that is, meaning as whatever a sizeable group of researchers or a body of accumulated evidence yields?