Stitching the Diss: Understanding the Assessment of Participation in Connected Learning

If you are interested in this post, consider also reading the update with slides.

By way of introduction, I am working to create an assessment protocol for connected learning in higher education environments.  

The assessment literature suggests that we document learning processes for three reasons: (1) to evaluate curriculum and instructional practice; (2) to provide formative feedback for students, instructors, and the learning community in its entirety; and (3) to assess student performance for the purpose of making decisions about student advancement or promotion. And so, as I work on this assessment protocol (first a “tool,” then a “toolkit,” now a “protocol”), I am hoping to create something that can serve all three purposes by:

  • Generating potentially comparable and generalizable metrics.*
  • Providing real-time data useful for instructor, peer, or self-assessment.
  • Offering a more sophisticated and elegant approach to assessing classroom participation.

I focus on participation in this post because connected learning is an inherently participatory act. People who engage in connected learning curate, annotate, critique, comment on, tweet out, link to, mash up, remix, repurpose, imitate, support, criticize, and transform information as they learn. In describing the distributed cognition and collective discourse of connected learning environments, Salomon (1993) characterizes active student participation in three ways: as contributing to the whole; as nodes (connectors) in the network; and as interpreters of shared meaning.

    When done well, these forms of participation (contributing, connecting, and interpreting) have unique, assessable qualities, which I describe as group cognition, associative trailmaking, and creative capacity. This post introduces those qualities. I recognize that nothing I am writing here is new or revolutionary; rather, it is a synthesis of previously described knowledge found across disciplines such as cognitive psychology, information technology, and educational research.


    Understanding Participation


    Group cognition: Contributing to the whole. Group cognition is the individual’s understanding of his or her current position and role within networked communities, contexts, or worlds (Akkerman et al., 2007). Reflective practice lies at the heart of group cognition; it helps students become mindful and (hopefully) responsible participants in the group. However, group cognition enables more than locating and naming; it activates the individual’s ability to move intentionally from one place to another within the learning community and the curriculum (Campbell, 2014). Group cognition emphasizes individual effort, and agency, within a social stream of activity. Doug Engelbart called this “bootstrapping”: the ability to change one’s position using existing resources to boost collective IQ. I call it understanding the big picture for the purpose of increasing individual and group impact.
    Associative trails: Connecting within the network. In a global network that invites (requires) participation, innovation is more widely distributed across new places and actors (David & Foray, 2002). The amount of knowledge generated in this environment is unfathomable and unprecedented. Even before the digital age, Vannevar Bush (1945) was deeply concerned about how knowledge would be organized so that the right individual might discover it at the right time. In his seminal article “As We May Think,” Bush argues that indexing, by which information is stored on one library shelf, in one journal, or (anachronistically) in one computer folder, is the major cause of our inability to synthesize information into knowledge across disciplines. He described an alternative form of information storage: associative trails. Building associative trails involves the explicit practice of linking information sources based on an individual’s thought process rather than on artificial disciplines, media, or information formats. The effective use of tags and hyperlinks facilitates access to information from any number of directions simultaneously and, as Tim Berners-Lee suggests, links can “point to anything, be it personal, local, or global, be it draft or highly polished.” Associative trails, facilitated through links and tagging, encourage students to make intellectual connections. Students in connected learning environments learn how to be explicit in their associative trails, connecting information across disciplines and spheres of learning (personal, peer, and academic) and fostering the intellectual flexibility required in this era of distributed cognition (a short sketch of how tagging supports this kind of multi-directional access appears at the end of this section).
    Creative capacity: Interpreters of shared meaning. Scholars have written that innovation is a dominant activity in our society, as evidenced by the fact that the economic status of a nation now has less to do with its natural resources and more to do with its ability to augment productivity (David & Foray, 2002). Creative capacity is the ability to innovate: to transform aggregated information (static, stored data sets) into a repurposed, remixed, or otherwise adapted knowledge product appropriate for the task or situation at hand. The ability to create knowledge transcends course content acquisition and is a practice better suited to our continuously shifting reality (Paavola et al., 2004). Jerome Bruner described the capacity for knowledge creation as “going beyond the information given.” It is, he writes:

    …One of the few untarnishable joys of life. One of the great triumphs of learning (and of teaching) is to get things organised in your head in a way that permits you to know more than you “ought” to. And this takes reflection, brooding about what it is that you know. The enemy of reflection is the breakneck pace – the thousand pictures.

     Even the revised Bloom’s taxonomy places “create” above “evaluate” at the top of the pyramid (Anderson et al., 2001). Students in connected learning environments use the information they connect to take the leap toward creation; we need evidence of a growing capacity to learn, think, and innovate.
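
As a concrete illustration of the associative trails described above, here is a minimal sketch (a hypothetical example of my own, in Python) of how tagging lets the same artifact be reached from several directions at once instead of from a single disciplinary shelf. The artifacts, tags, and variable names are invented for illustration only.

```python
# Hypothetical illustration of an associative trail built from tags:
# the same student artifact becomes reachable from multiple "directions"
# (personal, peer, and academic spheres) rather than from one index entry.
from collections import defaultdict

# Invented student artifacts, each tagged across spheres of learning.
artifacts = [
    {"id": "blog-post-1", "tags": ["statistics", "baseball", "personal"]},
    {"id": "tweet-42", "tags": ["statistics", "coursework"]},
    {"id": "blog-post-2", "tags": ["baseball", "coursework"]},
]

# Index each tag to the artifacts that carry it.
trail_index = defaultdict(list)
for artifact in artifacts:
    for tag in artifact["tags"]:
        trail_index[tag].append(artifact["id"])

# The same post can now be discovered from several starting points:
print(trail_index["statistics"])   # ['blog-post-1', 'tweet-42']
print(trail_index["coursework"])   # ['tweet-42', 'blog-post-2']
```

Hyperlinks work the same way: every link a student makes becomes another path by which a reader, a classmate, or an assessor can arrive at the source.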

    Assessing Participation

    It has been decided (thus far only by me) that my assessment protocol must be not only pedagogically aligned but also flexible (across disciplines, educational contexts, and instructor, peer, or self use) and scalable. Moreover, a connected assessment protocol, like connected learning, should use the affordances of the web. Why? Because we are operating under Doug Engelbart’s principles of bootstrapping the collective IQ: we must solve this difficult task by using all of the resources available to us, not just by thinking about how to build a better standardized test.
    And so I have mapped out a framework, which is a starting place in this emergent assessment design. 
    Simply put:
| Learning Activity | Participation Principle | Operationalization (Blogs / Microblogs) | Measurement (Tool) |
|---|---|---|---|
| Establishing and maintaining a Personal Learning Network (PLN) | Contributor | Blogs: #Posts, Comments. Microblogs: #Tweets | Network Degree Centrality (Excel) |
| Curating and critiquing data and data sources | Contributor | Blogs and microblogs: Content; #Links | KBDex; Excel |
| Connecting or coordinating people and concepts over space, time, and spheres of learning | Connector | Ratio of Posts, Comments, and Tweets | Excel |
| | Connector | Blogs: #Links, Classmate Mentions. Microblogs: #Retweets, Mentions, Replies, Links | Network Betweenness Centrality (Excel) |
| | Connector | Blogs and microblogs: Content; Links (Content) | KBDex (Excel) |
| Transforming data into new products | Interpreter | Blogs and microblogs: Content | KBDex; Product Assessment |
| Sharing new products with the PLN | Contributor, Connector | Blogs: #Posts. Microblogs: #Tweets | Network Degree Centrality (Excel) |
| | Contributor, Connector | Blogs: Links (Content), Classmate Mentions. Microblogs: #Mentions, Replies, Links | Network Betweenness Centrality |
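
To make the network measures in the table concrete, here is a minimal sketch of how degree and betweenness centrality might be computed, assuming Python and the networkx library rather than the Excel workflow named above. The student names and interactions are invented, and this is an illustration of the measures, not the dissertation’s actual pipeline.

```python
# Minimal, hypothetical sketch of the network measures named in the table,
# using networkx on an invented edge list of classmate interactions
# (mentions, replies, and links harvested from blogs and tweets).
import networkx as nx

# Invented interactions: (source student, target student).
interactions = [
    ("ana", "ben"), ("ana", "cam"), ("ben", "cam"),
    ("cam", "dee"), ("dee", "ana"), ("eli", "dee"),
]

G = nx.DiGraph()
G.add_edges_from(interactions)

# Degree centrality: how widely a student interacts across the class network.
degree = nx.degree_centrality(G)

# Betweenness centrality: how often a student sits on the shortest paths
# between classmates, i.e., how much they bridge otherwise separate groups.
betweenness = nx.betweenness_centrality(G)

for student in sorted(G.nodes):
    print(f"{student}: degree={degree[student]:.2f}, "
          f"betweenness={betweenness[student]:.2f}")
```

In this framing, degree centrality speaks to the contributor principle (how widely a student interacts across the class network), while betweenness centrality speaks to the connector principle (how often a student bridges classmates who are not otherwise linked).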
    And so, once I get started on my dissertation, my tasks will be to (1) confirm that these analyses can be done; (2) confirm that they provide results approaching the meaningfulness of a full content analysis of the student work; (3) streamline the process so that it is flexible, scalable, and totally doable for the faculty. 

    Ok, that’s it for now.  This is very much in the early thinking stages, so all feedback is needed and very welcome.

    References

    Akkerman, S., Van den Bossche, P., Admiraal, W., Gijselaers, W., Segers, M., Simons, R. J., & Kirschner, P. (2007). Reconsidering group cognition: From conceptual confusion to a boundary area between cognitive and socio-cultural perspectives? Educational Research Review, 2(1), 39-63.
    Anderson, L. W., Krathwohl, D. R., Airasian, P. W., Cruikshank, K. A., Mayer, R. E., Pintrich, P. R., … & Wittrock, M. C. (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom’s taxonomy of educational objectives (abridged ed.). White Plains, NY: Longman.
    Bush, V. (1945). As we may think. The Atlantic Monthly, 176(1), 101-108.
    Campbell, G. (2014, May 26). Permission to wonder. Retrieved from http://www.gardnercampbell.net/blog1/?p=2285
    David, P. A., & Foray, D. (2002). An introduction to the economy of the knowledge society. International Social Science Journal, 54(171), 9-23.
    Paavola, S., Lipponen, L., & Hakkarainen, K. (2004). Models of innovative knowledge communities and three metaphors of learning. Review of Educational Research, 74(4), 557-576.
    Salomon, G. (Ed.). (1993). Distributed cognitions: Psychological and educational considerations. Cambridge, UK: Cambridge University Press.


    *For those of you who are cringing, please understand that I come at this with mixed epistemological and methodological sensitivity.  I’m not a positivist.

    2 Comments

    1. Is envisioning how a teacher might use these tools in the classroom domain, rough and ready, going to be part of your analysis? It seems to me that all the effort and results on the back end need to be plowed back in like an investment in the learner, i.e. if it ain't useful it ain't usable (or sustainable for that matter).

      It is a joy to see someone thrashing around with the question of making darkness visible. A joy.


    2. Laura Gogia says:

      Thanks, Terry. Facile application is always in the back of my mind. Early on in the process, my advisor and I had briefly discussed the possibility of coding variables, using SPSS on TAGS data…I sat down in front of my trusty and beloved SPSS and started transforming nominal data…and stopped. No teacher in their right mind – even those who love SPSS as much as I do – would transform data in real time for student assessment, nor could it be used for peer and self assessment. And so the SPSS got thrown out the window. Once I figure out what CAN be done, it will be time to take it to the teachers to find out what WILL be done, but the WILL is always in the back of my mind. Thanks for your support.

