by Junko Winch
Module Evaluation Questionnaires (MEQs) are an important source of student feedback on teaching and learning. They are also often relied upon as evidence in cases for promotion and teaching. However, in their current form they suffer from low response rates, which reduces their usefulness and validity. Local practices have grown up to address the need for feedback, but these are inconsistent from year to year and across the university. Existing research on teaching evaluations indicates that they are subject to sources of bias and suggests that MEQs should be carefully designed.
The MEQ project
The MEQ project was undertaken to inform the University of Sussex’s policy and practice. The output was presented to the University’s Surveys Group to inform the Group’s strategic direction for the University.
Literature review
The literature review revealed tutors’ and students’ biases related to MEQs. Bias is a source of unreliability, which in turn threatens validity. Validity and reliability are defined in various ways, but for the purpose of this report, validity is defined as “the general term most often used by researchers to judge quality or merit” (Gliner et al., 2009, p. 102) and reliability as the “consistency with which we measure something” (Robson, 2002, p. 101).
The findings and recommendations
1. The purpose of MEQs
MEQs have three purposes: institutional, teaching and academic promotion. To help reduce the bias effects outlined in the literature, full MEQs and other teaching-related data should be provided to promotion panels to avoid the cherry-picking of comments or data by applicants. For example, quantitative data such as the class average attendance rate and the average, minimum and maximum marks, together with an analysis of qualitative responses, would help build a more accurate overall picture of the class.
2. Analysis of MEQs
Students’ biases mentioned in the literature may make it difficult to rely on MEQs as a sole instrument. Furthermore, the current MEQ statements may confuse students because of their content and wording.
The following points are suggested:
- The purpose and goal of the questionnaire should be clearly stated. Stakeholders’ purposes should be taken into account when designing MEQs to ensure that the intended MEQ purpose is achieved.
- Some statements ask two questions in one. Students may not necessarily answer both, which affects validity.
- Consideration should be given to words such as ‘satisfied’, which may carry different connotations depending on culture and individual.
Recommendations
Carefully developed MEQs have the potential to offer valuable insights to all stakeholders. The primary recommendation is to establish a staff-student partnership to agree the purpose of the MEQs and to co-design a revised instrument that meets that purpose.
Reflections
I engaged in this project as part of my CPD and appreciate the various opportunities it has given me. For example, I was given the opportunity to write this blog. Furthermore, giving a presentation to the University Surveys Group reminded me of my doctoral viva, as the Group included the Pro Vice Chancellor for Education and Students, the Associate Dean of the Business School and the Deputy Pro Vice Chancellor for Student Experience. When answering the Group’s questions, I learned how difficult it is to meet the needs of different perspectives and cultures. For example, I was asked a question from a quality assurance perspective, which was unexpected as I had written the report from a teaching staff perspective. The Group also included the Students’ Experience team, which made me consider yet another perspective on MEQs. Furthermore, working with my colleague from the Business School made me realise the cultural differences between that department and academic discipline and my own school (School of Media, Arts and Humanities). Looking back, this was a very valuable experience for me, and I would recommend that any colleagues who wish to join the DARE Scholarship programme undertake a similar project.
References
Carrell, S. E., & West, J. E. (2010). Does professor quality matter? Evidence from random assignment of students to professors. Journal of Political Economy, 118, 409–432.
Gliner, J. A., Morgan, G. A., & Leech, N. L. (2009). Research Methods in Applied Settings: An Integrated Approach to Design and Analysis. New York: Routledge.
Patrick, C. L. (2011). Student evaluations of teaching: effects of the Big Five personality traits, grades and the validity hypothesis. Assessment & Evaluation in Higher Education, 36(2), 239–249.
Robson, C. (2002). Real World Research (2nd ed.). Oxford: Blackwell.
Hi Junko,
Thank you for an engaging blog post. I read it with much interest, and I agree with your analysis of how MEQs are designed and whether or not they are methodologically valid. I am sure this is not an exhaustive description of the work you’ve done, but I have a couple of questions:
It’s not clear where you picked up the student bias from; is this part of your own research or from Patrick (2011)?
Can you draw any parallels with the NSS discussion and analysis by Bell and Brooks (2018) about what drives student satisfaction?
Hi Alexandre,
Thank you very much for your prompt comments and questions.
The student bias figure was summarised from various references; it is neither my own research nor taken from Patrick (2011). Unfortunately, I was advised to keep references to a minimum. If you are interested, I’m happy to send those references to you.
Yes, I agree that it would be very interesting to draw parallels with the NSS discussion and analysis by Bell and Brooks (2018). Thank you very much for your advice. I may develop this work following your recommendations.
Hi Junko,
Thank you for this blog. I am doing some research on how important MEQs are, and you said that “They are also often relied upon as evidence in cases for promotion and teaching.” Do you have any data around this? It would also be great if you could send over your full research on this, which would be of great help.
Thank you