Evaluation of training in systemic awareness - study design

Our first research endeavour is to measure the effects of the training in systemic awareness that we developed for medical bachelor students. ‘Effect’ here has two meanings: whether the students liked the training and thought it was suitable and useful, and whether the training did what it was designed for: increasing systemic awareness and improving personal and team functioning. The first ‘effect’ is called reaction or process evaluation and the second is called outcome evaluation. As outcomes, we hypothesised that the training would increase systemic awareness, and thereby increase perceived personal and team functioning.

The best way to measure the effect of the training would be to set up a randomised study with a measurement before and after the training to register the difference. To be sure that it is the training that causes the effect and not some other factor, such as other trainings, an event, or the weather, you need a parallel group of students who do the training in systemic awareness (the intervention group) and another group (the control group) who get another training or no training at all. You only expect an effect in the intervention group. Preferably the groups are equal in size and in characteristics (equal distribution of female/male, age distribution, etc.), so you can be sure that the effect is not due to one group consisting of more favourable candidates. Preferably the participants and the trainers do not know the aim of the study, so they cannot influence the outcome of the study, consciously or unconsciously.
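To make the idea of random allocation concrete, here is a toy sketch in Python. The student IDs, group sizes, and seed are invented for illustration and are not part of our study; in a real trial you would also check afterwards that the groups ended up balanced on characteristics such as age and gender.

```python
import random

# Made-up student IDs; in practice this would be the real enrolment list.
students = [f"student_{i}" for i in range(20)]

random.seed(42)          # fixed seed so the split is reproducible
random.shuffle(students)  # random order removes any systematic pattern

half = len(students) // 2
intervention = students[:half]  # receives the systemic-awareness training
control = students[half:]       # receives another training, or none

print(len(intervention), len(control))  # equal group sizes
```

With equal group sizes and a purely random split, any remaining difference between the groups should be due to chance rather than to how the students were assigned.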

However, in an educational setting, where the training is part of the normal curriculum and compulsory for the students, this study set-up is not possible. We cannot run different trainings in parallel or withhold training from some of the students. Nor can we randomly allocate the students to different trainings. Also, for ethical reasons we need (and want!) to inform the trainers and the students about the research. So, we have to deal with these limitations and perform the research in a natural educational environment, so to speak.

To study the feasibility and effectiveness of the training, we used the following study design. An online questionnaire was sent out to all 3rd-year medical students; the majority of the students had to attend the workshop, while a small part was abroad for a project and could not attend. By including the students who were abroad, we had a control group of sorts: students who did not do the training. However, these students were, of course, not randomly assigned, and being abroad might itself have an effect on personal and team functioning. The questionnaire was sent out in four waves: before the training (pre-test), right after the training (post-test), and 2 and 6 weeks after the training (see figure below). In the post-test we assessed liking, reaction and feasibility, with statements like ‘the workshop gave me useful insights’ and ‘the workshop set-up was clear to me’. We expected that improvement of systemic awareness, personal functioning and team functioning would take some time. Therefore, we scheduled the follow-up questionnaires at 2 and 6 weeks after the workshop. Perhaps a longer period is necessary to measure an effect, but extending the period was not feasible this time, because the students would leave for summer break and we would not be able to reach them.
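As a preview of how such data could be compared, here is a minimal, hypothetical sketch: it contrasts the pre-test to follow-up change in an imaginary systemic-awareness score between the workshop group and the students abroad (a difference-in-differences style comparison). All scores and group sizes below are made up; this is not our actual analysis plan.

```python
# Each entry is (pre_score, followup_score) on an invented 1-5 scale.
workshop = [(3.1, 3.8), (2.9, 3.5), (3.4, 3.9), (3.0, 3.6)]  # did the training
abroad = [(3.2, 3.3), (3.0, 3.1), (3.3, 3.2)]                # no training

def mean(xs):
    return sum(xs) / len(xs)

def mean_change(group):
    """Average change from pre-test to follow-up within one group."""
    return mean([post - pre for pre, post in group])

# Difference-in-differences: change in the workshop group minus change in
# the comparison group, to subtract out changes that happen anyway.
effect = mean_change(workshop) - mean_change(abroad)

print(f"workshop change: {mean_change(workshop):.2f}")
print(f"abroad change:   {mean_change(abroad):.2f}")
print(f"difference-in-differences estimate: {effect:.2f}")
```

Subtracting the control group's change helps separate the effect of the workshop from changes that would have occurred over those weeks regardless, which is exactly why including the students abroad is useful even though they were not randomly assigned.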

We are currently collecting the last part of the data and will start the analysis soon. In my next blog I will say more about what we measured and about the content of the questionnaires.