When newer PA faculty first receive their faculty and course evaluations, it’s common for them to feel discouraged by any negative comments, low scores, or an overall lower evaluation than expected. It’s important for them to understand how to interpret and utilize the feedback—good and bad—in a productive manner.
If you are new PA faculty, we recommend reading through our breakdown below regarding how to comprehensively understand and use your evaluation scores.
If you are a PA program director, we recommend sharing the following information with all of your preceptors, particularly newer ones receiving their first evaluations.
The purpose of assessments
When assessment is done right within a PA program, the result is multiple layers of feedback that map out where changes need to be made. Ultimately, assessments are intended to improve a course; they are never meant to beat down PA faculty.
Using assessments to improve courses
To illustrate how assessments can improve courses, we will walk through a comprehensive, holistic faculty self-analysis of the Clinical Medicine course within a typical PA program.
Here, student evaluations of faculty performance can be incorporated into a reflective analysis of course policies and procedures, as well as pedagogical delivery. Document the analysis by recording it on a template; the template can then serve as a record of the modifications that resulted from the process.
Look beyond the numbers
We understand what it’s like to put in the work and effort to make a course excellent, only to be disappointed by feedback within the evaluations. To combat feelings of despair, remember the numbers don’t always reflect your effort.
Scores and comments (both positive and negative) can reveal themes and indicate what needs to change or stay the same. Every PA professor has received at least one negative comment; if it's an outlier, take it with a grain of salt. If the same comment is repeated by five or more students, take it as a signal that something truly does need to be addressed and changed.
Student evaluation of course and faculty below benchmark
If you’ve scored below the benchmark on your student evaluations, first examine how student learning and outcomes may be affected. Less-than-optimal instruction can have a ripple effect on student performance in the long run. Considering how to change or improve your performance is a key part of conducting a holistic evaluation of course outcomes.
PACKRAT, EOR, and PANCE-related applicable data below benchmark
Since we’re focusing on clinical medicine, look at organ system performance on the nationally standardized examinations. For example, if the clinical medicine course covers cardiology, pulmonology, and gastroenterology, then review program performance in these specific systems. This will reveal whether any modifications need to be made to instruction.
Performance on nationally standardized exams does not always result from a specific class. Still, it is important to map the NCCPA blueprint and review the depth and breadth of the content.
Although the connection between a specific class and preceptor evaluations of students may seem tenuous, it still needs to be monitored. For example, regarding student preparedness within the preceptor evaluation of the student, consider performance below benchmark in the area of knowledge.
Or, for another example, if the course covers diagnostic methods and interpretation, and if the student’s performance on diagnostic test interpretation is rated below benchmark, there may be a connection worth pursuing.
Admissions variables and prerequisite admissions performance related to course performance
If a larger number of students than usual struggle in the clinical medicine class, look for relationships with the students’ admissions performance. For example, a closer examination of each student who received a grade of C or below should include an analysis of their science GPA and performance in basic science courses. On a macro level, these correlations have been less than reliable, but individual case studies of student performance may be merited when looking for possible and worthwhile trends.
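The case-study approach above can be sketched in a few lines: flag students whose course grade falls at or below C, then pull their admissions science GPA for individual review. This is a minimal illustration; the names, grades, and GPA values are fabricated, and a real program would draw from its own records.

```python
# Hypothetical sketch: flag students who earned a C or below in
# Clinical Medicine and pull their admissions science GPA for review.
# All names and values below are illustrative assumptions.

students = [
    {"name": "Student A", "clin_med_grade": "B", "science_gpa": 3.6},
    {"name": "Student B", "clin_med_grade": "C", "science_gpa": 3.1},
    {"name": "Student C", "clin_med_grade": "D", "science_gpa": 2.9},
]

LOW_GRADES = {"C", "D", "F"}  # grades that trigger an individual case review

flagged = [s for s in students if s["clin_med_grade"] in LOW_GRADES]
for s in flagged:
    print(f"{s['name']}: grade {s['clin_med_grade']}, science GPA {s['science_gpa']}")
```

Each flagged record then becomes the starting point for a closer look at that student's prerequisite coursework.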
End of didactic phase program learning outcome surveys
To find areas where instruction within specific courses needs to change, look at areas performing below benchmark. For example, if students perceive the basic clinical sciences to be below benchmark, the basic medical science instruction needs to be closely analyzed. In addition, look at specific task areas such as diagnostic tests, pharmaceuticals, and history and physical examination; these can be traced back to specific coursework for further evaluation. Student perceptions of specific competencies should likewise be traced back to instructional areas.
Remember, trend analysis is important. Persistent performance below benchmark should trigger an in-depth analysis of the didactic year courses directly attributable to that area of competency.
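The trend rule above can be made concrete: flag any task area that falls below the program benchmark for two or more consecutive cohorts. A minimal sketch, with an assumed benchmark of 75 percent and fabricated cohort means:

```python
# Hypothetical trend check: flag any competency area that falls below
# the program benchmark for two or more consecutive cohorts.
BENCHMARK = 75  # assumed program benchmark (percent); set per program policy

history = {
    "diagnostic tests":   [78, 72, 71],  # illustrative cohort means, oldest first
    "pharmaceuticals":    [80, 82, 79],
    "history & physical": [74, 77, 73],
}

def persistent_low(scores, benchmark, streak=2):
    """Return True if scores dip below benchmark for `streak` cohorts in a row."""
    run = 0
    for s in scores:
        run = run + 1 if s < benchmark else 0
        if run >= streak:
            return True
    return False

flagged_areas = [area for area, scores in history.items()
                 if persistent_low(scores, BENCHMARK)]
print(flagged_areas)
```

Only persistent dips trigger the in-depth course analysis; a single below-benchmark cohort (like the isolated dips in history & physical above) is noted but not acted on by itself.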
Measurement of program-defined competencies
The 5th Edition Standards set the reference for measuring program-defined competencies. Once the program competencies have been established, they must be benchmarked and mapped back to specific courses. Triangulation of perceived lower performance in specific competencies should trigger an analysis of the applicable curriculum.
Parametric data related to specific courses
Course performance can also be indicated by the strength of correlation between key foundational courses, such as clinical medicine, and subsequent nationally standardized examinations. This means comparing performance in the specific courses against performance on exams like PACKRAT and PANCE. A strong correlation suggests the assessment methods within the course discriminate effectively. It also identifies which courses are the most predictive of subsequent performance.
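The correlation described above is typically a Pearson coefficient between paired scores. A minimal sketch, using fabricated course percentages and PACKRAT scores purely for illustration:

```python
from statistics import mean

def pearson_r(xs, ys):
    """Pearson correlation coefficient between paired samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Illustrative (fabricated) data: course percentages vs. PACKRAT scores.
course_scores  = [78, 85, 92, 70, 88, 81]
packrat_scores = [145, 160, 175, 135, 168, 150]

r = pearson_r(course_scores, packrat_scores)
print(f"r = {r:.2f}")  # a value near 1 suggests the course discriminates well
```

An r near 1 for a course would support its assessments as predictive of standardized-exam performance; values near 0 would prompt a look at whether the course assessments measure what the exams measure.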
This overview of assessment methods related to course performance demonstrates the complexity of a dynamic assessment system. Student evaluation of faculty performance is important for many reasons. It is one of many data points that must be evaluated annually regarding course content and performance. On the other hand, less than desirable performance within the class may not be related to any of the areas covered here.
If you are a PA program director, we highly recommend discussing these points with any discouraged faculty members. It is beneficial for them to see these numbers in context, and it gives them a better understanding of how a comprehensive analysis determines the adequacy of instruction. At the very least, faculty development that builds pedagogical skills will improve instruction.