The Me2U project that’s investigating the use of Echo360 personal capture is – as far as data collection is concerned – entering its final phase.
We have been going along to the teaching sessions of the courses using Echo360 PCAP, both to distribute questionnaires to students and to invite them to participate in focus groups, so that we can find out more about their experiences of watching the screencasts. Given that this work has taken place within the space of a couple of weeks, we now have a pile of questionnaire data to input…then analyse…then interpret in relation to the other data we have collected.
I’m not complaining – without the data we couldn’t say very much about what the students think of the recordings that the lecturers have invested their valuable time in creating. It’s just that at this moment, staring at the tower of paper, it’s a little daunting to think how all the different pieces will fit together to form a coherent story about the extent to which the screencasts have helped students to put their learning into context.
A criticism of research into learning developments is that it often doesn’t go beyond a conclusion of ‘the students liked what the teachers did’. It is very difficult to gauge the effect of an intervention on student learning, given the multi-factorial influences on a student’s experience of a course. There have been numerous discussions, research articles and reviews about what constitutes ‘evidence’ in educational developments. It is very tricky, for example, to run control and experimental groups in this field, although there are rare examples of studies where this approach has been taken.
So what about the Me2U project? We are using four sources of data:
- Questionnaire data from students
- Focus group data from students
- VLE log data on student access to the screencasts
- Interviews with staff
The questionnaire data will answer the ‘did the students like what the teachers did?’ question (the answer to this will almost certainly be positive, as students like resources). However, it will also help us to understand more about which aspects of the students’ learning the screencasts supported, how often they viewed recordings and on which devices the recordings were viewed. It will also allow us to see if there is any relationship between a student’s technical confidence and whether they watched the screencasts. The focus group data will help expand on our findings from the questionnaire data and enable us to better categorise the main areas of learning that these short screencasts can support.
I have talked about the use of VLE logs in a previous posting. We could look at the extent to which individual students have accessed the recordings and their performance in assessment tasks (or compare a current cohort’s performance with a previous year’s). However, as I mentioned above, the many factors that affect these variables would swamp any potential effect of the screencasts.
Instead, I see the logs as contributing to our understanding of when it is best to release the screencasts to students and which types of screencast were viewed most often. This will help to promote the use of short screencasts with staff – given that their time is in such short supply, they will want to know how to make sure students get the most out of the recordings.
Finally, the interviews with staff are critical. We all know that academic staff are more likely to be persuaded by their peers than by those of us in learning development. If our participants discuss their use of Personal Capture and are willing to recommend it to their colleagues, then the barriers to the uptake of screencasts will be lowered.
What I hope is that the project will be able to provide some sensible recommendations – supported by evidence – to staff about the effective deployment of screencasts to support student learning. The next couple of months will reveal whether or not this can be achieved…