Student surveys for course evaluation
This course evaluation questionnaire was developed as a tool for informing course teachers at the Department of Biological Sciences (BIO) in relation to learning design and instructional decisions.
Items 1-20 make up the main part of the student survey questionnaire and originate from the Constructive Alignment Learning Experience Questionnaire (CALEQ) (Fitzallen et al. 2017). These items are concerned with relationships between learning outcomes, teaching and learning activities, assessment and feedback. In developing the questionnaire, the wording of CALEQ items 5, 10 and 12 was slightly modified from the original version.
Additional items originate from subscales of the Course Experience Questionnaire (CEQ), namely the Appropriate Workload Scale (AWS) (Ramsden 1991), the Generic Skills Scale (Wilson et al. 1997) and an overall satisfaction item (Ramsden 1991). Some unvalidated items are included to provide additional information on student experiences (items 21, 36, 37) as well as student engagement in the course (items 33-35). Finally, an open-ended question gives students the opportunity to leave comments.
Distributing the questionnaire in electronic form as well as gathering responses can be done using a web-based survey tool. The University of Bergen has shared licences for SurveyXact (Ramboll, Denmark) and this service provider is also used to distribute and gather electronic survey forms at BIO.
The following participants are involved in surveying courses at BIO.
- Course students represent the informants in course surveys and participate on a voluntary basis.
- Course teachers are the intended primary users of the information gathered in the questionnaire.
- Local administrative staff upload, manage and distribute digital surveys. They also process and store gathered data. Initial data management includes screening of open-ended responses for any offensive comments to be removed.
- The Data Protection Officer plays a supporting role in ensuring that legal requirements for student and staff privacy are met.
Background - Student surveys
Course evaluations can serve a variety of purposes. This questionnaire was developed to inform course teachers about how students experience key course aspects, supporting inferences about learning quality. Implementing this questionnaire in other contexts should be accompanied by conversations among course teachers, students, and administrative staff to establish a shared vision of its purpose, since contrasting views can undermine its applicability for teaching and learning quality (Edström 2008). Furthermore, student surveys have limited value for quality enhancement unless they are coupled to specific measures and actions that allow course teachers to adapt course design and teaching using the gathered information (Ramsden 2003).
The student survey questionnaire developed at BIO consists mainly of items that are answered using ordered Likert-type scales. Although this is a common survey approach for course evaluations, it is also possible to employ qualitative methods such as open-ended survey items, group discussions or focus group interviews. Numerical and qualitative methods have dissimilar advantages and are not mutually exclusive. Typically, numerical methods are easier to employ for large numbers of students and for comparisons across years, while qualitative methods can provide educators with more nuanced and in-depth information on student experiences in the course. Although these types of methods can be applied together, combinations typically require larger efforts in planning, implementation, and interpretation in compliance with privacy regulations.
Formulating survey items
Generally, Likert-type items are more informative when they have undergone a validation process. Validation can indicate the extent to which survey items measure the properties they are intended to measure, and it typically benefits from multiple iterations in a variety of contexts. Although there are benefits to educators developing their own items for student surveys, our recommendation is to subject such items to a validation process that follows principles for psychometric scale development.
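As an illustration of one step in such a validation process, the sketch below computes Cronbach's alpha, a common measure of internal consistency for a set of Likert-type items intended to form one scale. The response matrix is fabricated for illustration only, and a full psychometric validation involves far more than a single reliability coefficient.

```python
import numpy as np

def cronbach_alpha(responses):
    """Cronbach's alpha for a matrix of Likert responses
    (rows = respondents, columns = items on the same scale)."""
    responses = np.asarray(responses, dtype=float)
    n_items = responses.shape[1]
    item_vars = responses.var(axis=0, ddof=1)      # variance of each item
    total_var = responses.sum(axis=1).var(ddof=1)  # variance of the sum score
    return (n_items / (n_items - 1)) * (1 - item_vars.sum() / total_var)

# Fabricated responses on a 1-5 scale for a hypothetical four-item subscale
scores = [[4, 5, 4, 4],
          [2, 2, 3, 2],
          [5, 4, 5, 5],
          [3, 3, 2, 3],
          [4, 4, 4, 5]]
print(round(cronbach_alpha(scores), 2))  # → 0.94
```

High alpha values indicate that items co-vary as expected of a single underlying construct, but acceptable thresholds and further validation steps (e.g. factor analysis) are matters for the psychometric literature rather than this sketch.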
Responses and reliability
Questionnaires are generally considered powerful tools for course evaluations due to the ease of distribution, data gathering and processing for large numbers of students. Despite these advantages, the method is also susceptible to diverse sources of error. While eliminating all potential sources of error is not possible, awareness of the factors yielding errors can still reduce their influence and thus enhance the reliability of gathered information.
Students typically attend multiple courses that run in parallel in the same semester. Thus, students may be less inclined to answer several surveys in the same period, leading to reduced response rates. This can be ameliorated by educators dedicating specific time for students to answer surveys within regular teaching hours.
The reliability of answers obtained from students can be challenged by same method bias, involving patterns in the way students reply to survey items. Such patterns can appear if survey items appear in a predictable order, for example when items concerning similar course aspects are grouped together in the questionnaire. The risk for same method bias can be reduced by randomising the order in which the survey items appear in the survey user interface. Typically, web-based survey tools feature this option.
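As an illustration of such randomisation, the sketch below shuffles a hypothetical list of item texts into a distinct but reproducible order per respondent. In practice the web-based survey tool itself (e.g. SurveyXact) would handle this; the respondent identifiers and item texts here are invented for illustration.

```python
import random

# Hypothetical item texts standing in for the questionnaire items
items = [f"Item {i}" for i in range(1, 21)]

def randomised_order(items, respondent_id):
    """Return the items in a per-respondent random order, seeded on the
    respondent so each respondent sees a stable but distinct ordering."""
    rng = random.Random(respondent_id)
    shuffled = items.copy()
    rng.shuffle(shuffled)
    return shuffled

order_a = randomised_order(items, respondent_id=1)
order_b = randomised_order(items, respondent_id=2)
```

Seeding on a respondent identifier keeps the order stable if a respondent reloads the survey, while still varying the order across respondents.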
Educators can choose among a variety of survey questionnaires from the available literature, including items that are directed at the performance of teachers and teaching assistants. In general, student responses to items concerning teaching staff are subject to factors that say very little about learning, including teacher gender, ethnicity or the weather conditions when surveys are taken (see Roxå et al. 2022 for an overview). If the given purpose of a course evaluation is to make inferences about student learning, our recommendation is to avoid items that are directed at teacher performance.
Course evaluations can yield extensive data series that can inform educational development at course and programme level. Although web-based survey tools offer cloud storage services, our main recommendation is that higher education institutions should have their own systems for data management, including long-term storage. Such systems should be able to accommodate:
- removal of personal information from raw data
- removal of offensive comments directed at individuals
- data storage in local archive
- established procedures for metadata
- clear policy for data use, including access and administrator permissions
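A minimal sketch of the first two points, assuming a hypothetical raw CSV export with an e-mail column and a free-text comment field (the column names, example rows and flagged row numbers are all invented for illustration; real pipelines must follow institutional and legal requirements):

```python
import csv
import io

# Hypothetical raw export: respondent identifier plus a free-text comment
raw_csv = """respondent_email,q1,q2,comment
student1@example.org,4,5,Great labs
student2@example.org,2,3,An offensive remark about a person
"""

IDENTIFYING_COLUMNS = {"respondent_email"}  # dropped before archiving
FLAGGED_ROWS = {2}  # row numbers flagged during manual screening

def anonymise(raw, identifying, flagged_rows):
    """Drop identifying columns and redact comments flagged as offensive."""
    reader = csv.DictReader(io.StringIO(raw))
    cleaned = []
    for i, row in enumerate(reader, start=1):
        for col in identifying:
            row.pop(col, None)
        if i in flagged_rows:
            row["comment"] = "[removed during screening]"
        cleaned.append(row)
    return cleaned

rows = anonymise(raw_csv, IDENTIFYING_COLUMNS, FLAGGED_ROWS)
```

The flagging itself is a manual screening step performed by administrative staff, as described above; the sketch only applies the outcome of that screening to the stored data.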
Edström K (2008) Doing course evaluation as if learning matters most. Higher Education Research & Development 27(2), 95-106
Fitzallen N, Brown N, Biggs JB, Tang C. (2017) Students’ perceptions of constructive alignment: validation of a data collection instrument. In: Proceedings of the International Conference on Teaching and Learning in Higher Education 2017, Kuala Terengganu
Ramsden P (1991) A performance indicator of teaching quality in higher education: the course experience questionnaire, Studies in Higher Education, 16 (2), 129–150
Ramsden P (2003) Learning to Teach in Higher Education. 1st edition, Routledge, London. p. 217
Roxå T, Ahmad A, Barrington J, van Maaren J, Cassidy R (2022) Reconceptualizing student ratings of teaching to support quality discourse on student learning: a systems perspective. Higher Education 83, 35–55
Uttl B, White CA, Gonzalez DW (2017) Meta-analysis of faculty's teaching effectiveness: Student evaluation of teaching ratings and student learning are not related. Studies in Educational Evaluation 54, 22-42
Wilson KL, Lizzio A, Ramsden P (1997) The Development, Validation and Application of the Course Experience Questionnaire. Studies in Higher Education 22, 33-53
- Christian Bianchi Strømme
- Lucas Matias Jeno
- Jorun Nyléhn