For the last two months, I have been working with Laura Pasquini and Jessica Knott on a systematic review of the educational research on the use of social media in the higher education classroom (#edusocmedia). The history of this review significantly pre-dates my two-month involvement. In fact, I was brought in for only one part of this project: to help identify and comment on the study designs and data collection/analysis techniques employed in the articles.
By the end of this week, I will have personally sifted through more than 500 articles, over half of them twice. I found that many articles failed to include a complete methodology section, requiring me to read the entire article to find what I needed. Those of you who follow me on Twitter know that I have not always enjoyed this process, and I appreciate your patience as you cheerfully (or quietly) dealt with my snark.
If yall keep giving me crap methods sections in your articles, I may need to purchase another glass of wine.
— Laura Gogia (@GoogleGuacamole) August 12, 2016
I won’t go into too many details about the findings here, particularly since the formal analysis is not complete, but I am a little haunted by what I have found so far. Hundreds of the articles I have read focus on student and/or faculty perceptions of using specific digital tools (e.g. Facebook, YouTube, blogging) in the higher education classroom, while failing to adequately address the underlying pedagogical theory or design context in use.
Hundreds. Not dozens.
Furthermore, I have yet to find an article that challenges the traditional constructs of student assessment in higher education. The articles that look beyond perception or self-reporting tend to incorporate grades or tests (of content acquisition) as the indicators of learning.
You could argue that my observations are related to the framing of the initial article search. This is a review of the #edusocmedia research; social media is a tool, not a pedagogy. Furthermore, the review focuses on articles that incorporate empirical data; it’s possible that all the theory is locked up in the think pieces, many of which I’ve read but not studied in any systematic sort of way. Even so, I find my observations problematic. Among other things, I see a tendency to stay inside a traditional box of assumptions about teaching, learning, and education.
That being said, I also see myself as being part of the problem.
It’s easy to conflate pedagogy and tools, because pedagogy is expressed through the use of pedagogical tools. Let’s take blogging, for example. Blogging is a tool, but an instructor’s pedagogy significantly shapes how it is used. An open educator who believes in the educational power of public expression and audience will have students blog on the open web. Others who prioritize creating safe spaces will have them blog in a closed network. Those are two VERY DIFFERENT blogging experiences, with different learning outcomes to consider.
However, tools are not free of their own values, either. Let’s do a comparison at the tool level: blogging versus paper-based essays. Regardless of their pedagogical context (open or closed, if we carry over from the example in the previous paragraph), blog posts have certain affordances that paper-based content does not: multimodal composition, hypertext, ease of sharing and access to audience, and commenting capacity, just to name a few. It’s possible for an instructor’s preferences to limit the impact of these affordances, but the point is that they are present.
What I see in this massive body of literature are studies that focus their discussion and research design on the level of the tool. For example, they compare the grades of students who had access to a Facebook study group versus those who did not. They review the quality of publicly curated videos on YouTube versus an assumed gold standard of instructor-curated materials. They ask students for their perceptions of microblogging and video conferencing versus face-to-face discussions.
Tool comparisons are useful, but only when contextualized by a larger understanding of their pedagogical affordances and the pedagogies at work behind their use. Why?
- Tools change quickly, but pedagogical approaches…not so much. One of the challenges to working with digital tools in the classroom is that specific tools change quickly. When we focus on the tool (e.g. Facebook) rather than the affordances that the tool offers (e.g. social interaction and opportunities to provide feedback, engage in multimodal expression, and curate information resources), we risk limiting the usefulness of our research. The tool we spent so much time and so many resources studying could be gone or out of style within a couple of years (MySpace, anyone?).
- We may be missing the bigger picture. When we draw conclusions related to the tools rather than an examination of the (supposed) pedagogical affordances, we risk missing a bigger picture. I applaud the #edusocmedia field for exploring how students perceive the use of Pinterest, for example, in different disciplines and contexts. However, it is just as important to ask what pedagogical actions Pinterest facilitates (collaborative resource curation…maybe?) and how those actions impact student learning (enhancing students’ ability to critique information sources…maybe?). I, for one, want some evidence to suggest resource curation has a positive impact on student agency, critical thinking, and capacity for lifelong learning. If we limit our research to the level of the tool (e.g. do students perceive Pinterest to be an effective learning tool?), we risk turning that tool into a pedagogical “black box”: we will have failed to analyze what makes that particular tool effective…and why.
- We may fail to effect real change. If we accept that (higher) education needs to be reframed to be more relevant, inspirational, and impactful, then we need to move beyond a discussion of tools – digital or otherwise. Instead, we need to function at the level of the pedagogy itself. For example, when we assume traditional learning outcomes of content acquisition, we are assuming a traditional pedagogical approach regardless of our tool choice. Furthermore, when we limit our indicators of success to research methods that depend on student perception or self-reporting, we limit our ability to draw diverse types of conclusions. We need to rethink how we do this research.
I feel like I am representative of part of the problem.
For the past three years, my entire professional focus has been the theory behind and practice of integrating digital participatory culture into adult and higher education settings. In reflecting on my own writing and practice, I discover that I move fluidly – but not necessarily explicitly – between theory, design, and tools of digital education.
This is a problem. In fact, sometimes I stop and worry about how others – particularly those who are new to incorporating digital participatory culture in the classroom – may be interpreting my movement between theory and tools, as if blogging were actually a pedagogical theory rather than a tool. People might not be making the implicit leaps with me.
I need to be more explicit in my thinking, all the time, regardless of my perceived audience. To this end, I’ve compiled a summarized version of my approach to connected learning, which is my preferred approach to integrated digital pedagogy. What follows is not comprehensive, although it does provide some additional resources here and there for anyone who wants access to more information. The purpose of this is to demonstrate the different levels of pedagogical practice (theory, design, tools, and outcomes) that should be addressed explicitly in the educational research literature.
Connected learning is a progressive educational approach, steeped in experiential and social constructivist theory. It is Dewey, Montessori, Wenger, Bruner, and other like-minded individuals contextualized for the digital age. I have written extensively on the pedagogical underpinnings of connected learning. Julian Sefton-Green is a brilliant resource. However, a summary might go like this:
There are probably many ways to trigger students to engage in active, participatory, and situated (aka personally relevant) learning. Harnessing the open web is one approach. As a learning designer, I attempt to create opportunities for connected learning through:
- Openly networked learning spaces
- Networked participatory activities
- Multimodal composition
- Student agency
In describing design above, I narrowed my focus to one approach to connected learning – specifically one that harnesses the open web. Therefore, my description of tools is also limited to those found on the open web. Even after applying this filter, I cannot begin to describe all the pedagogical tools available, but two of the most popular include blogging and microblogging (aka Twitter). Digital annotation through resources such as Hypothes.is is also steadily gaining ground in higher education contexts.
The educational assessment literature tells us that learning outcomes must be considered in the context of a unified educational approach – theory, design, tools, and outcome. If connected learning is grounded in the pragmatic and social perspectives of Dewey, Wenger, and friends, then we must look to them for outcomes:
- Student engagement and agency.
- Participation, cooperation, and collaboration.
- Knowledge application, transfer, and communication.
- Reflective capacity and self-awareness.