
Project Assessment

The Erasmus+ AVATAR project evaluates the effectiveness of XR-based learning through interdisciplinary activities, adopting a culture of assessment grounded in active student involvement and authentic tasks. The project employs various data collection methods to support monitoring and evaluation of the learning process. The evaluation includes a questionnaire assessing perceived competencies, an assessment of student engagement, and feedback on expectations and satisfaction with the Joint Learning Lab (JLL).


Student evaluation

One of the objectives of the Erasmus+ AVATAR project is to evaluate the learning effectiveness of XR-based approaches. When organizing an interdisciplinary activity, monitoring and evaluating the learning process differs from a standard assessment based on quantitative tests and performance evaluation. Indeed, there has been a transition from the traditional culture of testing to a culture of assessment that fosters the integration of assessment, teaching, and learning through active student involvement and authentic assessment tasks. Assessment thus becomes an integral part of the project, as it positively affects student learning. Assessment is considered fair when, among other features, it relates to authentic tasks, encourages knowledge application in realistic contexts, provides adequate feedback on students' progress, and measures complex skills and qualities. Consequently, different data collection methods must be used in the different phases of the project to support the monitoring and evaluation of the students' learning process. A literature analysis was first carried out to identify which variables and tools to use to measure the effectiveness of the AVATAR approach as a learning method.

A recent review of project-based learning in higher education reports which student outcomes are usually evaluated and which tools are adopted (Pengyue, 2020). The authors defined four macro-categories: "knowledge" (students' content knowledge, conceptual understanding, and course achievement); "cognitive strategies" adopted by the students; "affective outcomes," meaning both the perceived usefulness of the project and how students perceived the learning experience; and finally what the authors call "behavioral results," i.e., the skills acquired, engagement, and evaluation of what the students produce during the project. Affective outcomes are the most frequently studied, and self-reported measures are mainly applied. Furthermore, it should be underlined that producing artifacts is an essential characteristic of project-based learning, although artifacts are rarely evaluated.

As previously reported, the AVATAR setting did not allow an assessment of the increase in the students' objective skills in the subjects covered during the project. Given the heterogeneity of the student groups, their different backgrounds, the different school settings and subjects studied at the universities, as well as other even less controllable variables, it was not possible to carry out an objective assessment of the improvement in students' skills and knowledge during the project, just as it was not possible to compare the group of AVATAR students with a control group. For this reason, we focused on the improvement of perceived hard and soft skills.


Questionnaire assessment

First, the perceived competencies of the students were measured. Previous studies show that participating in learning programs that use advanced techniques provides students with positive experiences and makes them feel more confident in their abilities. Perceived self-efficacy, or how well a person feels able to achieve a specific goal, appears to be related to better learning performance. Since a standard tool for evaluating these skills in an industrial engineering student does not exist, an ad-hoc tool was created. Students filled in the questionnaire twice: at the beginning of the project (T0), before the online lessons and group projects, and at the end of the entire project (T1). The first section identifies the student's academic background; in the other sections, the student had to indicate, for each of the skills listed, their degree of expertise from 1 (very poor) to 5 (very good). Two questions were adapted from Fieger (Fieger, 2012), while the others were developed by teachers who are experts in the subject.
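The T0/T1 comparison described above can be sketched as a simple per-skill mean change. The sketch below is only illustrative: the skill names and ratings are hypothetical, not AVATAR data, and the function name `mean_change` is our own.

```python
# Hypothetical sketch: per-skill mean change in perceived competency
# between T0 (before the project) and T1 (after the project).
# Skill names and ratings (1-5 scale) are made-up illustrative values.

def mean_change(t0, t1):
    """Return the mean change (T1 - T0) across students for each skill."""
    changes = {}
    for skill in t0:
        deltas = [after - before for before, after in zip(t0[skill], t1[skill])]
        changes[skill] = sum(deltas) / len(deltas)
    return changes

# Ratings for three students per skill, before and after the project.
t0 = {"CAD modelling": [2, 3, 2], "teamwork": [3, 3, 4]}
t1 = {"CAD modelling": [4, 4, 3], "teamwork": [4, 4, 5]}

print(mean_change(t0, t1))
```

A positive value for a skill indicates that, on average, students rated themselves higher at T1 than at T0.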

The students' engagement was assessed at the end of the learning activities through a self-report questionnaire. The engagement subscale of the Short Stress State Questionnaire (SSSQ; Helton, 2004), appropriately modified, was used to assess this aspect. Engagement is defined by items referring to energy-alertness, motivation, and self-efficacy. In the SSSQ, engagement is predominantly a motivational factor, referring to readiness or willingness to act; for this reason, it was useful for understanding whether the project kept the students highly motivated and interested in the subjects. Students answered the same questionnaire again at the end of the JLL to assess their engagement in the JLL's activities.
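A common way to score such a self-report subscale is to average a respondent's item ratings. The sketch below assumes that convention; the item labels are only placeholders, not the actual SSSQ items.

```python
# Hypothetical sketch: scoring an engagement subscale as the mean of a
# respondent's item ratings. Item names are illustrative placeholders,
# not the actual SSSQ wording.

def engagement_score(responses):
    """Mean of item ratings; responses maps item name -> rating."""
    return sum(responses.values()) / len(responses)

# One respondent's ratings on the three engagement-related facets.
answers = {"alertness": 4, "motivation": 5, "self_efficacy": 4}
print(round(engagement_score(answers), 2))  # → 4.33
```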


Other feedback from students: expectations and satisfaction

Another evaluation activity considered student satisfaction. In particular, we wanted to collect feedback from students on the AVATAR project and the Joint Learning Lab. The evaluation differed between JLL2 and JLL3; for this reason, the methods used in JLL2 and JLL3 are reported separately below.

JLL2

Questionnaires related to the JLL

Several aspects of the JLL were evaluated. First, before the week, each student answered an online questionnaire with open questions about their expectations for the JLL. The questions are listed below.

  • Which aspects do you think will be more positive about the experience?
  • Which aspects do you think will be less positive about the experience?
  • What will be the benefits of JLL for your knowledge?
  • What will be the benefits of JLL in general for you as a person?
  • Please choose 5 adjectives that you associate with the JLL experience

At the end of the JLL, students filled out another online questionnaire (open questions) to evaluate whether their expectations had been met and, more generally, the experience as a whole.

  • From the point of view of the knowledge of the subjects, you would say that - up to now - this experience has been… (3 adjectives)
  • Do you think some aspect of the experience contrasts with your learning of the subjects of the JLL2022?
  • Thinking more generally of JLL2022 as a life experience, you would say that - up to now - this experience has been… (choose three adjectives and try to justify the choice)

Individual interviews

Furthermore, students participated in an individual interview to collect feedback on their expectations, the difficulties encountered, the positive aspects, suggestions, and the skills learned. During this individual interview, we asked a specific question on the AVATAR project to collect opinions and advice for improving the learning experience for the following year's students.

JLL3

Individual interviews

Students participated in individual interviews on the last day of the JLL. The questions investigated the positive and negative aspects of the AVATAR project and the online lessons (time of day, duration of lessons, topics, methods of conducting lessons) and of the JLL (organisation, proposed activity, laboratory visits, social aspects, most appreciated moment). Furthermore, when possible, students were asked for suggestions to improve the experience in a possible future project.


