Stitching the Diss: Research on Assessment in Higher Ed Online Discussion Forums

I’m sure this will become a mere paragraph in my dissertation, but for now I think it has use in its long form for the #ccresearch team as we orient ourselves to the literature.  All of these articles are organized in my personal Zotero.  Bear with me – I will make that library public as soon as possible and link it (somehow) to this blog so you have the full references.  Until then, if there are specific references you want, comment and I will pull them over (this may get me to make my Zotero public sooner rather than later…).  The examples given are just examples – not all articles are represented here, but all of the categories/themes are.

What types of learning processes and products are currently being captured in online, discussion-based higher education settings? And how are they being captured?
I searched ERIC, Academic Search Complete, and Education Research Complete with the search terms “online discussion,” “Universities OR Higher Education,” and “Grading OR Assessment,” which returned a total of 159 articles.  Once duplicates and irrelevant articles were removed, 87 remained.
The studies represented could be divided into those that:
·         Aim to design assessment tools for online discussion groups.
o   (Wei-Ying, Chen, & Hu, 2013)
o   (Tomkinson & Hutt, 2012)
o   (Whatley & Ahmad, 2007)
o   (Wu & Chen, 2008)
o   (McConatha, Praul, & Lynch, 2008)
o   (Miniaoui & Kaur, 2014)
·         Aim to evaluate online discussion groups as a format for learning. (In other words, the assessment tool is not the focus of the article) 
o   (Kemm & Dantas, 2007)
o   (Fox & Medhekar, 2010)
o   (Anderson, Mitchell, & Osgood, 2008)
The assessment tools described in these studies are meant to measure the following indicators:

·         Content Acquisition 
o   (Burrow, Evdorides, Hallam, & Freer-Hewish, 2005)
o   (Carroll, Booth, Papaioannou, Sutton, & Wong, 2009)
o   (Kemm & Dantas, 2007)
o   (Kerr et al., 2013)
o   (Miers et al., 2007)
o   (Nevgi, Virtanen, & Niemi, 2006)
o   (Porter, Pitterle, & Hayney, 2014)
o   (Shephard, Warburton, Maier, & Warren, 2006)
o   (Whatley & Ahmad, 2007)
o   (El-Mowafy, Kuhn, & Snow, 2013)
o   (Fox & Medhekar, 2010)
o   (Mafa & Gudhlanga, 2012)
o   (Lai, 2012)
o   (Choudhury & Gouldsborough, 2012)
o   (Moni, Moni, Poronnik, & Lluka, 2007)
·         Discussion Process
o   (Caballe, Daradoumis, Xhafa, & Juan, 2011) – *qualitative analysis*
o   (Luebeck & Bice, 2005) – *qualitative analysis*
o   (Moni, Moni, Poronnik, & Lluka, 2007) – *qualitative analysis*
o   (Rovai, 2007) – *rubric driven qualitative analysis*
o   (Choudhury & Gouldsborough, 2012) – *quantitative analysis* for participation
o   (McNamara & Burton, 2009) – *quantitative analysis* for critical thinking skills
·         Emotional State of the Student Participants
o   (Hughes, Ventura, & Dando, 2007)
·         Critical Thinking
o   (Matheson, Wilkinson, & Gilhooly, 2012)
o   (Wei-Ying, Chen, & Hu, 2013)
o   (McNamara & Burton, 2009)
Control of the assessment tools took the following forms:

·         Self-assessment
o   (Damico & Quay, 2006)
o   (Matheson, Wilkinson, & Gilhooly, 2012)
o   (De Wever, Van Keer, Schellens, & Valcke, 2009)
·         Peer-assessment
o   (Davies, 2009)
o   (McLuckie & Topping, 2004)
o   (El-Mowafy, Kuhn, & Snow, 2013)
·         Combinations of self- and peer-assessment
o   (Beres & Turcsani-Szabo, 2010)
o   (Biblia, 2010)
·         Instructor-driven (the majority of studies)
o   (Kerr et al., 2013)
o   (Miers et al., 2007)
o   (Porter, Pitterle, & Hayney, 2014)
o   (Rooij, 2009)
o   (Shephard, Warburton, Maier, & Warren, 2006)
o   (Smith et al., 2013)
The assessment tools took the following formats:
·         Content Test or Quiz
o   (Burrow, Evdorides, Hallam, & Freer-Hewish, 2005)
o   (Carroll, Booth, Papaioannou, Sutton, & Wong, 2009)
o   (Miers et al., 2007)
o   (Porter, Pitterle, & Hayney, 2014)
o   (Shephard, Warburton, Maier, & Warren, 2006)
o   (Smith et al., 2013)
o   (Hardy et al., 2014) – *student-generated multiple choice questions*
o   (Nevgi, Virtanen, & Niemi, 2006) – *student-generated multiple choice questions*
·         Reflective essays
o   (Damico & Quay, 2006)
o   (Estus, 2010)
·         Rubrics
o   (Rooij, 2009)
o   (Wei-Ying, Chen, & Hu, 2013)
o   (Whatley & Ahmad, 2007)
o   (Wyss, Freedman, & Siebert, 2014)
o   (Lai, 2012)
o   (Anderson, Mitchell, & Osgood, 2008)
o   (Rovai, 2007)
·         Authentic Performance Assessment
o   (Smith et al., 2013) – instructors observed students performing online interviews
·         Blog Scraping
o   (Chen, 2014)
Final thoughts: 
In 2004, MacDonald claimed that “We make no claim here that assessing e-learning is really radically different from assessing learning – the same principles apply” (p. 224).
However, by 2013, Liburd and Christensen wrote: “Web 2.0 is a radically different way of understanding and co-creating knowledge and learning, which has a range of implications.”  The literature is shifting to suggest that e-learning requires radically different assessment practices (Dalelio, 2013), because assessment is a fundamental driver of how and what students learn (Khare & Lam, 2008; Ross et al., 2006).  In other words, “the purpose, criteria, and intended outcomes of assessment must be established” (Gaytan & McEwen, 2007).

Unfortunately, there is a sense in the literature that “criteria for assessing discussion skills remain unclear” (Lai, 2012).  This is a clear gap.

4 Comments

  1. Thanks for sharing your micro lit review. As a fellow doc student/dissertator I appreciate the way you grouped & labeled these resources. You've sparked some thoughts I'll put to use in my own workflow.
    Karen


  2. Laura Gogia says:

    Thanks Karen!
    My doctoral program is research and evaluation, with an emphasis in research methodology and instrument development – this might help explain why my classification orientation is more focused on how people opted to frame their research rather than on the results of the studies themselves.


  3. Hi Laura,

    I keep meaning to tell you about Chris Dede's marvelous suggestion that we study Web 2.0 communities with an eye to how (whether?) they (might) conduce to wisdom. It's a piece in Educational Researcher devoted entirely to Web 2.0, as I recall.

    Open access FTW: http://dash.harvard.edu/bitstream/handle/1/4901642/Dede_response_Greenhow.pdf?sequence=1

    Chris's recent work resonates very strongly with me.


  4. Laura Gogia says:

    It's an interesting article, and one that I suspect will be very useful as I contemplate what I hope to assess in my dissertation. Right now I am thinking through ways to capture “connection” and “connectivity” through a study of how participants use links, mentions, replies, and retweets. The links, in particular, are very useful – are they linking to each other's work? The instructor? To outside sources? If so, what sort of outside sources? I'm trying to get at the deeper language of digital literacy as a valuable outcome of connected learning and online coursework. Thanks for the article and the idea of another search term – Web 2.0.
