Summarizing "Research Questions and (Better) Learning Analytics"

Today I joined Audrey Watters of Hack Education, Andrew Sliwinski of the Mozilla Foundation, Justin Reich of HarvardX and the Berkman Center, and Vanessa Gennarelli of P2PU for their lunchtime discussion of Research Questions and (Better) Learning Analytics. A lot of the discussion was applicable to educational assessment and social learning environments in general, sprinkled with commentary on learning analytics, ed tech, and MOOCs specifically.  It was more synthesis and less discovery for me – but it was validating to hear “experts” say many of the things I thought I was reading in their blogs and articles.  Even so, it was a reminder of just how tricky educational assessment really is. I’ve embedded the video below, but just in case you don’t have an hour, here are some of the salient points I wrote down between bites of edamame and basil patties and broccoli (tasty, vegan, and ridiculously high in vitamins A and C, calcium, and fiber – as all lunches should be).

  • The entities represented in this talk operate with the underlying assumption that learning is social: People learn better when they learn together. These organizations need to be able to test that hypothesis, and learning analytics may be one way to do that.
  • Instructional design and learning experiences are shaped by how we measure learning and evaluate programs.  The problem is that we tend to measure what is easily measured – which is almost never what we really should be measuring.  For example, it’s very difficult to measure disposition but very easy to measure recall.  So which do you think we measure – ALL THE TIME?
  • Also consider the impact of platform – if an instructor is using a platform that spits out learning analytics, and this becomes the basis of assessment…what does that do to student choice?  If the platform can’t measure participation in the form of SoundCloud or YouTube, I guess this means the student can’t use those?
  • Measurements of student engagement are often used as proxies for measurements of learning – and there is some data to support that conflation.  Maybe that’s ok.
  • We are still learning how to measure engagement.
  • The current educational environment – and society and government – assumes that the answers are always in the data…that if we have enough data, we will have the answer.  But is learning really a science?  Or is that just an assumption?  (As an aside, there seemed to be consensus that a killer pearl probably does exist somewhere in the really big, scary ocean that is big data.)
  • Not all analytics are the same.  Google Analytics and learning analytics are only one word apart but are very different.
  • Don’t conflate ed tech with AV tech.  The design should drive the tech, not the other way around.  Unfortunately, educational research is starting to go the same way…the “easy data” is starting to drive research designs.
  • “We build and teach what we can measure.”  (That’s uncomfortable to think about, isn’t it?)
  • One major point of the project-based learning movement is to break down disciplinary silos – using origami to teach geometry, for example.
  • But one problem with project-based learning is that its assessment is based on engagement metrics – a feedback loop that tends to optimize what is already being done rather than asking whether we should be building something that looks entirely different.
  • We need to be careful about how we present learning analytics to the individuals in classes.  What do you do with the information that you have only a 38% chance of finishing a class?  That information can impact people very differently.
  • “It’s just SO not romantic – the quantified self.” –Vanessa Gennarelli.  Best quote of the day.
  • ETHICS.  It’s the last twenty minutes of the video.  Educators have done a horrible job of explaining to students how their data is already being used to drive institutional decisions – how many times they eat in the dining hall, which buildings they enter after hours, how many classes they registered for last semester.
  • Users need to know what information is being collected and how that data might eventually be of value to the student.  For example, in medical contexts, people allow huge amounts of personal data to be collected – they do so because they have an underlying assumption that providing that data will ultimately lead to a more accurate diagnosis or better health in the future.  Educators don’t have enough information right now to be able to make the same claims – but they need to get there.

