#Rhizo15 Week two – numbers, and a semi-organised flow of thoughts.

The quantified self

This week’s #rhizo15 theme has made me wander with my thoughts, to the point that I didn’t really know what to write, or where to start. But this is probably the most exciting challenge of the rhizome, which not only connects you with people and different views, but also takes you down reflective paths that make you question what you thought was a formed opinion. However, here is part of what I’ve been thinking about in relation to learning measures, the facets of human experience we want to quantify and, of course, numbers…

Pedagogy was born as “applied philosophy” in Ancient Greece, so it was mostly a subjective, dialogic matter. During the 18th century, however, pedagogy acquired the status of a “science” through the tools of biology, psychology and sociology, using them to define its own aims and tools [1]. Then, through various reinterpretations over the centuries, by the late 19th century pedagogy had come to define itself on a strictly experimental and empirical basis, often taking a reductive and anti-humanistic turn [2].

We have now moved past that point, though. However, while in current academic contexts pedagogy sits between philosophical, scientific and critical paradigms, it seems that the scientific, measurable part still gets the upper hand. Especially with the use of emerging technologies in education, educators aim to “make learning visible” through these tools, which in part is absolutely great. I say in part because I have my own views on this matter, and these fall mostly in favour of the dialectic, qualitative domain rather than the quantitative one.

I’ve been reading a lot about “learning analytics” in the past few years. These have been defined as a

field associated with deciphering trends and patterns from educational big data, or huge sets of student-related data, to further the advancement of a personalized, supportive system of higher education. [3]

So what we are doing with these is essentially quantifying students’ learning and engagement by looking at their grades and at how many times they viewed or posted on the VLE, to then personalise the system of higher education and increase these numbers (???). The problem is that we are “personalising” something (often a VLE, or a curriculum) for someone else, which is per se a strange concept. For example, see this presentation from Stephen Downes, where he makes the distinction between personal and personalised learning. This post nicely defines the concept:

Personalized learning, while customized for the student, is still controlled by the system. A district, teacher, company, and/or computer program serve up the learning based on a formula of what the child ‘needs’.

Shouldn’t we be allowing and supporting learners to develop personal learning landscapes, instead?
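
(As an aside: to make the kind of “quantifying” I’m describing a bit more concrete, here is a minimal, purely hypothetical sketch in Python of what an engagement dashboard typically reduces to. The event types and weights are invented for illustration; they are not taken from any real VLE or analytics product.)

```python
# Hypothetical VLE activity log: each entry is one recorded event for a student.
vle_events = [
    {"student": "A", "action": "view_resource"},
    {"student": "A", "action": "forum_post"},
    {"student": "B", "action": "view_resource"},
    {"student": "B", "action": "view_resource"},
]

# Invented weights: posting is assumed to "count" more than viewing.
WEIGHTS = {"view_resource": 1, "forum_post": 3}

def engagement_scores(events):
    """Reduce a list of logged events to a single number per student."""
    scores = {}
    for event in events:
        student = event["student"]
        scores[student] = scores.get(student, 0) + WEIGHTS.get(event["action"], 0)
    return scores

print(engagement_scores(vle_events))  # {'A': 4, 'B': 2}
```

Whatever weights we pick, the output is still just one number per student: nothing in it tells us whether that forum post was a thoughtful contribution or a quick “me too”.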

I think it is far too easy to equate meaningful participation, or learning, with numbers coming from analytics. @e_hothersall, @nlafferty and I have recently written a conference paper on a Twitter experience with medical students. We used social network analysis (SNA) to look at students’ engagement; however, it was quite clear that the number of tweets or mentions doesn’t account for the deeper processes of learning. Those numbers can offer an initial evaluation (and beautiful, colourful charts!), but without careful content (or discourse) analysis the portrait, in my opinion, is rather incomplete.
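
For anyone curious what that kind of counting looks like in practice, here is a minimal sketch using the Python networkx library, with made-up mention data rather than our actual dataset; it shows the sort of degree-centrality numbers SNA tools produce (and from which those colourful charts are drawn).

```python
import networkx as nx  # assumes the networkx library is installed

# Hypothetical (author, mentioned_user) pairs extracted from tweets;
# invented purely to show the shape of the analysis.
mentions = [
    ("student1", "student2"),
    ("student1", "tutor"),
    ("student2", "tutor"),
    ("student3", "tutor"),
]

# Build a directed mention network and compute simple centrality measures.
G = nx.DiGraph()
G.add_edges_from(mentions)

print(nx.in_degree_centrality(G))   # who gets mentioned most
print(nx.out_degree_centrality(G))  # who does most of the mentioning
```

These centrality figures tell us who mentions whom and how often, but nothing about what was actually said, which is exactly why content or discourse analysis is still needed.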

In medical education, though I’m sure not only here, metrics seem to prevail as objective ways to evaluate students, their participation, depth of learning and engagement. Sometimes we count whether and how many boxes they have ticked in their online portfolios, which should provide evidence of an achievement. This happens even with things such as empathy or emotions. Not only do we aim to make them more explicit, but we want to do it in such a way that they can be measured. This is perhaps because doctors are increasingly held to account for qualities such as empathy and compassion. One consequence of this tendency has been, for example, the development of measurement scales; 38 different measurement scales for empathy, for instance, were described in a recent review [4]. The construct of Emotional Intelligence (EI), used within the medical academic environment to define a set of skills in which students are “trained” and then assessed, serves exactly the same purpose. Emotions are captured and measured through their instrumental use, which manifests itself in certain skills, behaviours and patterns of communication that can be learned, practised, observed and evaluated.

This is what the psychometric era brought to education: measures to objectively evaluate and quantify students’ performance. But where do the subjective and the collective fit?

This is an extract from a great paper by Brian Hodges:

The psychometric era brought not only the concept of reliability, but also other new concepts that gave credence to some practices and delegitimized others. The most important discursive shift was the negative connotation taken on by the word subjective. Framed in opposition to objective, the use of subjective in conjunction with assessment came to mean biased and biased came to mean unfair. [5]

I think we are slowly correcting this shift, and last week’s theme in #rhizo15 is proof of that. Also, hybrid, critical pedagogies (see, for example, @HybridPed) are surely highlighting the value of the dialogic, unfixed, complex and dynamic elements of education, which cannot be quantified.

_______

P.S.: As humans, though, we tend to quantify, even socially. Social media tools have exacerbated this tendency… Don’t we all get a sense of increased self-appreciation when we get many retweets, many favourites, new followers, “likes” or comments on a blog post? Even if more or less subconsciously, I think many of us look at these numbers, judging a person’s social media account, at least initially, by the number of followers. These are just numbers… but they acquire a (social) meaning.

References:
1 – Cambi, F. (2008). Introduzione alla filosofia dell’educazione. Editori Laterza.
2 – Striano, M. (2004). Introduzione alla pedagogia sociale. Editori Laterza.
3 – Horizon Report 2013
4 – Hemmerdinger, JM, Stoddart, SDR, Lilford, RJ. (2007). A systematic review of tests of empathy in medicine. BMC Medical Education, 7: 1-8.
5 – Hodges, B. (2013). Assessment in the post-psychometric era: Learning to love the subjective and the collective. Medical Teacher, 35(7): 564-568.


2 comments

  1. Lots of food for thought here Annalisa! It seems to me that there’s lots of jumping on bandwagons at the moment in education; learning analytics is a classic example, lots of hype but a lack of critical discourse, and it’s the same with other things that are being hyped. People use jargon without seeming to understand what it really means, as highlighted by your reference to Stephen Downes explaining the difference between personalised and personal learning.

    Love the quote from Brian Hodges; it reminds me of his chapter in ‘The Question of Competence’, which I think I need to read again. It also makes me wonder whether the GMC consider any of this as they think about introducing a UK national exam for medical students.


    1. Thanks for your comment, Natalie. I agree about the hype and the lack of practice based on theory and a critical approach.
      I think the competences approach is probably what is leading the GMC regarding the NLE… The question is: will they be assessing excellence or average standards?

