Tomorrow, Laura Pasquini, Ron Hannaford, and I will be presenting at the WCET Annual Meeting on methods, practices, and implications of Social Media Research. This came about because Ron, Director of Distance Learning at Biola University, was inspired to coordinate a panel to address the fact that higher education institutions need empirical data to direct and support faculty use of social media in classrooms. He and Laura Pasquini had already worked together in the context of educational technology; Ron and I shared several heated discussions on the principles and practice of digital pedagogy at Online Learning Consortium’s Innovate Conference in April.
Although Laura, Ron, and I champion online learning, we bring very different perspectives to the table – perspectives that are fueled by our respective professional positions, educational backgrounds, and philosophical approaches. I can’t speak for them here, but I can provide you with some of my own ideas on the subject.
In titling our presentation “Social Media Research: Methods, Practices, & Implications,” I think we are dangerously close to missing the entire point. We need to step back from a conversation about tools and technology to consider the big picture of the pedagogies we aim to develop and the nature of the story we hope to tell.
Let me back up. Over the summer, I worked with Laura Pasquini and Jessica Knott on a systematic literature review on the educational use of social media in higher education. Laura P. and Jessica have been working on this for a little over a year, collecting and examining different aspects of the literature; I was brought in this summer specifically to help examine the research methods and study designs found within. It was a massive project, and as I’ve written previously I read over 500 articles, many of them twice.
Based on a preliminary analysis, two types of study dominate the edusocmedia research landscape: the quantitative survey and the action research/single site case study (represented here as a single entity because they almost always consisted of the same thing: descriptive pieces written by a practitioner who had developed and implemented something in their own classroom). Regardless of the study design, a large number of studies focus on either use or perceptions of social media (in general or for learning) by students, faculty, and/or administration.
I find this unfortunate, or, at the very least, unfinished.
In reviewing the edusocmedia literature at the depth that I did, I found a striking absence of experimental designs. I believe this is the point some of my colleagues would like to drive home, and then change. In the U.S., we tend to privilege the wisdom of experiments, or at least well-designed quasi-experiments, over other forms of research. Many of my colleagues are looking for more (quasi)experiments involving structured cohorts and systematic approaches to interventions – at the tool level. What impact does classroom Twitter have on the test scores of group A versus group B? How does use of Facebook in that class increase student engagement (via validated instrument) in this group versus that? Does Twitter impact critical thinking more than Facebook when implemented sequentially or simultaneously?
This is where I tend to veer away from some of my colleagues. I like (quasi)experimental designs – I really do – and I find the lack of (quasi)experimental designs in the literature problematic. However, I do not consider them the gold standard, because I believe in a balanced relationship between (quality) experimental and (quality) nonexperimental, (quality) qualitative and (quality) quantitative work.
I believe in a richly woven (and quality) story. I believe it is a mistake to hold a discussion about the need for more (quasi)experimental designs in edusocmedia literature without situating it in a larger critique of the literature, including the questions we ask, the outcomes we seek, and the quality of the work we produce.
This is why I’m on the panel.
Let’s talk about quality of the literature for a moment. Systematically speaking, the edusocmedia research has a quality problem. To be clear, I don’t think it’s isolated to edusocmedia or even educational technology. Furthermore, there are some beautiful articles out there. I am speaking at a general level.
Sampling. Surveys are frequently given to convenience sample populations. Qualitative work often involves extremely small sample sizes. I am trained in qualitative research; I understand and live by terms such as snowball sampling, saturation, and in-depth case studies. However, a one-time “focus group” of fewer than five students (without additional data) does not seem like the basis for a publication.
Research questions. As previously mentioned, we tend to focus on perceptions and current use of social media among students, faculty, and administrators. These are important questions; their answers lay a foundation for work on access, systems-level change, and considerations for pedagogical scaffolding. However, I believe the literature is saturated in these areas and offers a relatively solid picture over the last decade on a fairly international scale (one exception is South America, but I’m unsure whether this is related to a lack of published research or a lack of research published in English). I am not suggesting you should not do your own needs analysis when necessary; I’m suggesting that you do not publish it unless it is remarkably different from what has already been published.
Unfortunately, given the consistently low number of article citations noted via Google Scholar, I wonder if we are reading the literature at any depth prior to publishing our work. #Edusocmedia literature seems to be the thing that everyone writes but nobody reads…or cites.
Study designs and collection. We tend to fall back on a nonexperimental descriptive design by default. How many designs could be strengthened by the use of a pre-/post-intervention survey rather than an end-of-course evaluation? In terms of qualitative research – this is where my heart just plain hurts. Qualitative research requires knowledge and skills, just as quantitative work does. There are definitions and standards to uphold. There are indicators of rigor to follow. As a group of scholars, we need to raise our understanding and skill level in qualitative research.
Analysis and Reporting. Transparency and replicability. Reported research is not actually research unless enough information is provided that a reader can repeat the process. Methods sections. Explanations of how data were analyzed – qualitative and quantitative. This is essential. As a group of scholars, we need to take our own research seriously. We need to identify our study designs correctly and report our data analysis methods (not just our course designs) with enough detail that others can replicate our work.
The Big Picture
I am concerned about improving the quality of what we do. However, I am also concerned about building a compelling, cohesive body of literature that is as comprehensive as possible. In other words, we – as a body of scholars – not only need to have a talk about how we do research, but why we do research.
We need to rethink our research questions, moving beyond use and perception of social media; however, we also need to move beyond social media.
Social media is a collection of tools. It is not a pedagogy. To show you what I mean, I’ve pulled a graphic from one of my other posts to serve as an illustration. Worldview. Pedagogy. Practice. Tool. Assessments. Worldview and pedagogical assumptions drive the use of tools, such as social media. To research and discuss social media without the broader context of philosophy and pedagogy is – at best – a waste of time and – at worst – harmful to collective knowledge. If we limit our research to the level of the tool (e.g., do students perceive a specific social media tool to be an effective learning tool?), we risk turning that tool into a pedagogical “black box”: we will have failed to analyze what makes that particular tool effective…and why.
When we situate ourselves entirely in specific forms of social media, we risk irrelevance, particularly if we are publishing in traditional journals, where an article might take a year or more to see the light of publication.
Tools change quickly – when faculty choose them, they are often doing so at the whim of administration (I’m thinking about learning management systems or clickers in particular) or the social media industry (Twitter and Facebook have a history of making significant changes, and not once did they consult me about how it might impact my students or classroom).
I applaud the #edusocmedia field for exploring how students perceive the use of Pinterest (for example) in different disciplines and contexts. However, it is just as important to ask what pedagogical actions Pinterest facilitates (collaborative resource curation…maybe?) and how those actions impact student learning (enhancing students’ ability to critique information sources…maybe?). If we can elevate the discussion of tools to how they function (e.g. curation, collaboration, hyperlinks, multimodal creativity) rather than what they are, then our work will likely stay more relevant for longer periods of time.