This infographic represents a love letter, from me to the educational research community at large. Over the last year, I have engaged in two large, systematic literature reviews, designed an educational research instrument, co-written an educational research grant, and completed and defended my dissertation research in educational research and evaluation. These projects have not been limited to one field; rather, they have spanned subdisciplines (e.g. library, social media, student engagement), sectors (e.g. K-12, adult, higher education), stances (e.g. policy, psychology, teaching and learning) and funding structures (e.g. academic, professional, and independent).
I have been brought onto several of these projects as a methodologist. My role was to describe and critique the approaches and methods used by researchers within these different educational research communities. In this capacity, I have reviewed approximately 800 educational research articles this summer alone. What I have found is systemically alarming, but it is a story that will unfold through a series of published articles, conference presentations, and blog posts over the next year. This blog post and downloadable infographic mark a starting point for that story.
In engaging with colleagues, authors, students, and casual readers of educational research literature, I have discovered that many people have a narrow perspective on how to talk about published research. Oftentimes, their discussion begins and ends with the terms “Qualitative,” “Quantitative,” or “Mixed Methods.” Alternatively, it will start with data collection techniques: “This is a survey” or “The authors did a bunch of interviews” are common ways to describe work.
The problem with this approach is that it ignores the fundamental truth that an author’s philosophical stance (critical, constructivist, postpositivist, etc.) will impact every other aspect of their work. As my friend Maha Bali has argued, positivists and constructivists can both produce qualitative research, but it will have a very different flavor. To call something “qualitative” is not enough to describe exactly what it is.
Furthermore, the pathway from constructivist to qualitative to data collection (think interviews) and analytic methods (think thematic coding) is not automatic and cannot be assumed. Some interviews are almost as prescriptive as questionnaires, while others are free-flowing. Observations can yield qualitative or quantitative data. Thematic analysis can be inductive or deductive, emergent or compliant with a pre-existing framework.
All of this is to say that we must think and talk bigger than data collection and analytic methods when we describe research. If we fail to identify the underlying philosophies and assumptions of the work, we fail to understand the shortcomings, biases, and strengths inherent to it. We fail to ask the right questions and grasp the bigger picture.
Every research design textbook I have ever read includes a single sentence: “Research questions drive the design, not the other way around.” This is true of good research. Therefore, when trying to describe the quality of a corpus of research literature, it behooves us to look to the nature of the questions being asked. Are they focused on describing the nature of what is? How things relate? What causes what?
Paying attention to these things is essential for critiquing the structure of an individual study, but there’s more: identifying trends in research questions can tell you a lot about the field itself. For example, young or rapidly changing fields need to establish what is before they can move to why things happen. Funders need to learn to value strong descriptive research in these areas. However, a continued reliance on descriptive designs in the presence of previously established understanding suggests something may be broken in the scholarship being performed: perhaps a systemic lack of funding or support, of vision or strategy, or of researcher understanding and training. Further investigation is warranted.
The point: Research questions matter. Pay attention.
Obviously, research methodology matters as well. Qualitative, quantitative, and mixed methodologies have definitions and criteria and support specific research frameworks or study designs. There are textbooks. There are articles. There are classes. There are teachers. There are methodologists who can help others do these things well when they don’t have an interest in learning it themselves.
I bring this up because of a Twitter conversation I had with my former dissertation advisor, Jon Becker, earlier today. He let me know (probably just to irk me, because this will always work) that he was peer reviewing an article that had self-identified as “mixed methods” because it had included one open-ended question at the end of a quantitative survey.
I doubt that Creswell and Plano Clark would approve any more than I did.
I trust Jon to be an impenetrable wall against this sort of behavior, but unfortunately Jon does not peer review every educational research article. Let me say this loudly:
The widespread sloppy or ignorant handling of the “mixed methods” or “qualitative” labels within the educational research literature is shocking. When critiquing articles or a body of literature as a whole, it is essential for all of us to question how authors are using these labels. Are they accurate? And if they aren’t accurate, what can we surmise about why or how they are being used?
- Can you really perform an “in-depth qualitative case study” based on the findings of one quantitative survey? (true story; no)
- Can you report out a proper phenomenology or ethnography without a single quote or rich description of participant context? (true story; no and no)
- Should you be able to read over 100 self-identified qualitative studies without a single reference to member checking or auditing? (true story; no)
- Can you claim mixed methods with just one open-ended question at the end of a survey? (true story; no)
- Should you publish a paper on the basis of a single focus group conversation (30 minutes) with an n of three? (true story; no)
- Why did these examples get through peer- and editorial review? And what are we going to do about it?
Therefore, I offer up this infographic as a place to start a conversation: a conversation with students, researchers, publishers, peer reviewers, and ourselves. It is not comprehensive. It is not an expert piece (although it has gone through a form of peer review on Twitter, led by the marvelous Maha Bali herself). However, this infographic serves as a reminder of the following facts:
The misuse of methods is rampant and labels are not to be blindly trusted.
Research questions often tell a story bigger than the article itself.
Data collection and analytic tools can be used in different ways by different research traditions to produce very different outcomes.
Philosophy shapes everything.