This is a (slightly) more polished version of my previous post.
My dissertation research is focused on creating an assessment protocol for connected learning courses in higher education environments. My work begins with the premise that connected learning is an inherently participatory act. Connected learning emerged in the context of the digital age, which, according to Henry Jenkins and the MacArthur Foundation, supports a participatory culture. This speaks to the fact that people don’t just consume information on the Internet; they curate, annotate, critique, like, mash up, comment on, tweet, link to, remix, repurpose, and otherwise transform digital information. In other words, they participate in how they and others understand the world. Since connected learning empowers students to be impactful and intentional citizens of this digital world, it makes sense that significant student engagement and participation are required for connected learning to occur.
The idea of learning as participatory is not a new concept; many have called learning an active process. In the modern age, first there were Dewey and Montessori, then Lindeman, and later Lave and Wenger. In the 1960s Jerome Bruner suggested that the work of learning is an active dialogue: one of constructing and reconstructing hypotheses. Freire and Mezirow said similar things. In short, there is little revolution in suggesting that active participation and social interaction are acts of deeper, meaningful learning. However, assessing participation, which is generally considered a transient action rather than a tangible product, can be difficult. And as we tend to build what we can assess, the difficulty of assessing participation can limit the emphasis we place on it within the classroom.
The assessment literature suggests that we attempt to assess or document learning for three reasons:
• To evaluate curriculum and instructional practice
• To provide formative feedback for students and instructors, and
• To assess student performance for the purpose of making decisions around student advancement or promotion.
Therefore, as I work to create an assessment protocol for connected learning, I am hoping to create something that can serve all three purposes by:
• Generating potentially comparable and generalizable metrics
• Providing real-time data useful for instructor, peer, or self-assessment
• Offering a more sophisticated and elegant approach to assessing classroom participation
I find Salomon’s (1993) work on collaborative knowledge creation useful because he describes three forms of participation that are particularly relevant to connected learning environments: contributing pieces to the whole; acting as nodes or connectors within a network; and interpreting shared meaning. But how do we begin to think about these forms of participation in terms that can be operationalized for the purpose of assessment? A deep reading of the connected learning literature suggests that participation – or contribution, connection, and interpretation – might be more fully and concretely defined through the constructs of group cognition, associative trailblazing, and creative capacity.
Group cognition is the ability to locate and then move oneself around within class discourse and the course content. In the context of class discourse, students with high levels of group cognition are able to make their points efficiently and effectively, moving across digital platforms and using a variety of media formats as needed. In the context of course content, these students understand how all the pieces of the course curriculum fit together. This enables what Carl Rogers called the “freedom to learn” where students individualize their navigation through a body of information to suit their own interests and needs.
As a concept, group cognition borrows from many established constructs such as Donald Schön’s reflection-in-action and reflection-on-action. But in practice, it closely resembles Doug Engelbart’s concept of bootstrapping the collective IQ, in which individuals focus on high-level issues of process to reframe inquiry in a way that significantly advances individual and group positions. Group cognition directly relates to quantity of contribution, for example, the number of posts and tweets a student contributes or their degree of network centrality in a social network analysis. But it also relates to how students choose to contribute, for example, their fluency of movement across digital platforms, which can be captured in ratios of their activity as well as in basic descriptive statistics.
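To make the measurement concrete, here is a minimal sketch of how degree centrality and activity ratios might be computed from class interaction data; the student names, reply network, and post counts are invented for the example, and the networkx library is one of several that could do this:

```python
import networkx as nx

# Hypothetical reply network for a small class: an edge A -> B
# means student A replied to a post by student B.
replies = [("ana", "ben"), ("ana", "caro"), ("ben", "ana"),
           ("caro", "ana"), ("dev", "ana"), ("dev", "ben")]
G = nx.DiGraph(replies)

# Degree centrality as a rough proxy for quantity of contribution.
centrality = nx.degree_centrality(G)

# Basic descriptive statistics: each student's share of total posts.
posts = {"ana": 12, "ben": 7, "caro": 3, "dev": 5}
total = sum(posts.values())
activity_ratio = {s: n / total for s, n in posts.items()}

for student in sorted(G.nodes):
    print(student, round(centrality[student], 2), round(activity_ratio[student], 2))
```

The same pattern extends to per-platform counts, so a ratio of, say, blog comments to tweets could serve as one crude indicator of movement across platforms.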
Associative trailblazing is about making visible the connections between concepts and between concepts and people. The term arises from the work of Vannevar Bush, who first described the associative trail in his 1945 essay As We May Think. He was deeply concerned that traditional indexing systems, which store information on one library shelf or in one disciplinary journal, create a barrier to the right people gaining access to the right information at the right time. He argued that information storage should resemble the way we think, which he called associative trails. We all think in associative trails, but there is power in making them explicit; Doug Engelbart described this method, the one I call associative trailblazing, as one in which you “externalize your thoughts in the concept structures that are meaningful outside; moving around flexibly, manipulating them and viewing them.”
Connected Learning encourages students to participate by documenting the connections they make between different types of information in different contexts. The use of hyperlinks is extremely helpful in this process because, as Tim Berners-Lee said, hyperlinks can “point to anything be it personal, local or global, be it draft or highly polished.” The purpose of associative trailblazing is to make connections visible. It is often accomplished through the practices of replying, mentioning, retweeting, and linking which are all acts captured by the digital platforms upon which connected learning takes place. Betweenness centrality in a social network analysis documents the practice of bridging people and subgroups within a community. The analysis of the content of hyperlinks is also possible.
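As an illustration of how betweenness centrality flags bridging, here is a minimal sketch using an invented mention/link network with two discussion subgroups joined by a single tie (names and structure are hypothetical):

```python
import networkx as nx

# Hypothetical mention/link network: two discussion subgroups
# (ana-ben-caro and dev-eli-fay) joined only through caro and dev.
G = nx.Graph([
    ("ana", "ben"), ("ben", "caro"), ("caro", "ana"),   # subgroup 1
    ("dev", "eli"), ("eli", "fay"), ("fay", "dev"),     # subgroup 2
    ("caro", "dev"),                                    # the bridge
])

# High betweenness centrality flags students who bridge subgroups:
# every shortest path between the subgroups runs through caro and dev.
bridging = nx.betweenness_centrality(G)
for student, score in sorted(bridging.items(), key=lambda kv: -kv[1]):
    print(f"{student}: {score:.2f}")
```

In this toy network, caro and dev score highest while the purely within-group students score near zero, which is exactly the pattern an instructor would look for when assessing connecting behavior.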
Creative capacity transcends the ability to acquire course content and involves the process of interpretation in two ways. First, creative capacity is the ability to transform aggregated information, or stored, static data, into a repurposed, remixed, or otherwise adapted knowledge product. Students must synthesize or find meaning in all of the information they previously connected. Jerome Bruner described it as going beyond the information given. He wrote: “One of the greatest triumphs of learning (and of teaching) is to get things organised in your head in a way that permits you to know more than you ‘ought’ to. And this takes reflection, brooding about what it is that you know. The enemy of reflection is the breakneck pace.” Second, students must interpret or create meaning around the collaborative knowledge product for their individual context. This relates to transfer of knowledge and the building of personal experience, as well as personal reflection. Creative capacity is a form of interpretation that typically results in a tangible product that can be assessed by an instructor, peers, or the student herself. Keyword and content analysis may also play a role in assessing creative capacity.
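As one sketch of how keyword analysis might be automated at scale, the snippet below counts course keywords in student posts; the posts, keyword list, and prefix-matching shortcut are all invented for the example and stand in for a real content-analysis scheme:

```python
from collections import Counter
import re

# Hypothetical student blog posts.
posts = {
    "ana": "Remixing the readings helped me connect participation to assessment.",
    "ben": "I repurposed last week's data visualization for my final remix project.",
}

# Course keywords an instructor might track (an assumed, not canonical, list).
keywords = {"remix", "participation", "assessment", "repurpose"}

def keyword_counts(text):
    # Crude stemming: count a keyword when it is a prefix of a token,
    # so "remixing" and "repurposed" still match.
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(k for t in tokens for k in keywords if t.startswith(k))

for student, text in posts.items():
    print(student, dict(keyword_counts(text)))
```

A real protocol would replace the prefix match with proper stemming or a coded dictionary, but even this crude version shows how keyword evidence can be gathered automatically from the platforms students already use.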
In assessing connected learning, we must find a way to capture these qualities of participation, but the challenge does not stop there. Davies (2010) argues that digital age assessments should also be flexible: appropriate across disciplines and educational formats, and adaptable for instructor-, peer-, and self-assessment, with an emphasis on peer- and self-assessment. They should also be scalable, or appropriate for all class sizes. This last criterion of scalability triggers the question: “How do we blend qualitative and quantitative, automated and non-automated data collection into practical as well as meaningful forms of student assessment?” This brings me to the third assumption of my study: if we are to create meaningful, flexible, scalable assessments of connected learning, we will need to take advantage of the digital environments in which connected learning takes place.
In other words we need connected assessments for connected learning.
Based on preliminary research using participant data from two connected learning courses, I have proposed a framework for connected learning assessment. It involves capturing contribution, connection, and interpretation – the three forms of participation – through the digital actions associated with connected learning. These acts are automatically captured by the social media learning platforms routinely used in connected learning environments, such as WordPress and Twitter.
What you are looking at here (Slide 10) is a part of an assessment blueprint, aligning common connected learning activities with the form of participation and the proposed units of assessment. These operationalizations span several forms of analyses and we’ll get into that in just a minute. You can also see that there is a place for assessing content of student work as well as students’ digital practices. And I’m experimenting with a variety of ways to get at content analysis by truly meaningful yet scalable means.
Here (Slide 11) I’ve shifted over several columns in my blueprint to show a variety of ways in which these data might be analyzed for the purpose of student assessment, as well as the instruments that I can use to perform the analyses. Briefly, KBDex is novel open-source software that combines keyword-driven content analysis with social network analysis, allowing instructors to look at relationships between keywords, and between keywords and specific students, within the context of blog posts and in a longitudinal fashion. There are documented cases of its use in formative as well as summative assessment performed by instructors and students. I’m currently working with the software developers in Japan to see if KBDex is a good fit for my project.
And so really I’m at the beginning of my work, but the next steps are fairly clear: First, confirm that these units of assessment are as easily accessible as I think they are; I’ve already done some preliminary work to suggest that they are. Second, confirm that these analyses provide meaningful information about student performance, which I’ll do by triangulating my results with a traditional content analysis of a portion of the same data. Finally, streamline the analysis process so that it can be performed in the context of formative student assessment. I am confident that this can be done because of what has been documented about the use of content and discourse analysis and social network analysis in Japan, the Netherlands, Australia, and the UK.