Self-assessment is a challenge for many PA program faculty. Faculty need clear goals and an understanding of the steps being taken to achieve them; even more, it is vital to assess how well those steps are working and whether any need revision. This ensures not only that the PA program is operating optimally but also that the ARC-PA's C standards are being met.
For many years, I believed that when a program saw very high first-time PANCE pass rates, the work was effectively complete: the program's methods were obviously working and therefore didn't need to be validated. Now I know nothing could be further from the truth.
The ARC-PA has been increasingly looking toward an integrated approach to validating a PA program’s compliance with the C standards. The requirements related to the 5th Edition of Standard C1.01 can be difficult to understand, however:
“The program must define its ongoing self-assessment process that is designed to document program effectiveness and foster program improvement. At a minimum, the process must address: e) PANCE performance.”
Self-Study Report, Appendix 13 F
Relatedly, the new Self-Study Report (SSR) Appendix 13 F (also from the 5th Edition standards) lists several guidelines spotlighting necessary data analysis:
“Data analysis related to PANCE outcomes is to include, but not limited to, correlation of PANCE outcomes and the following:
Admissions criteria as a predictor of success
Course and instructor evaluations by students
Program instructional objectives, learning outcomes, and breadth and depth of the curriculum
Student summative evaluation results
Student progress criteria and attrition data
Feedback from students who are unsuccessful passing PANCE
Preceptor and graduate feedback”
Understanding these standards is vital to carrying them out. To gain clarity on the requirements, let's break down each of these items and look at how to comply with them.
Admissions Criteria as a Predictor of Success
The Appendix 13 F guidelines direct programs to examine the correlation between admissions criteria and success on the PANCE. The relevant criteria are shaped by the PA program's minimum prerequisite requirements; for many programs, these include prerequisite course GPA, prerequisite science GPA, GRE scores, healthcare experience, and cumulative undergraduate GPA.
Using your program's most recent graduating class and their PANCE scores, you can stratify the cohort and work backward from PANCE performance to these admissions variables. For students who did not pass on the first attempt, I recommend conducting a mini analysis of each student, noting any differences on the variables between those who passed the PANCE and those who did not.
If no students failed, stratify the PANCE scores and look at the performance of students who scored below 400. These individuals were performing on the margin, so comparing their prerequisite performance against that of the entire cohort still yields useful insight.
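As a concrete sketch of this stratification, the snippet below splits a cohort at a score of 400 and compares mean admissions metrics between the two groups. All records and the `profile` helper are fabricated for illustration; a real analysis would pull these values from program records.

```python
# Hypothetical illustration: stratify a graduating class by first-attempt
# PANCE score and compare mean admissions metrics for the marginal group
# (scores below 400) against the rest of the cohort.
from statistics import mean

# Fabricated example records: (pance_score, prereq_science_gpa, cum_gpa)
cohort = [
    (520, 3.8, 3.7), (480, 3.6, 3.5), (455, 3.5, 3.6),
    (430, 3.4, 3.3), (395, 3.1, 3.2), (380, 3.0, 3.0),
]

marginal = [r for r in cohort if r[0] < 400]   # the "on the margin" group
rest     = [r for r in cohort if r[0] >= 400]

def profile(group):
    """Mean science GPA and mean cumulative GPA for a group of records."""
    return (round(mean(r[1] for r in group), 2),
            round(mean(r[2] for r in group), 2))

print("marginal:", profile(marginal))
print("rest:    ", profile(rest))
```

With real data, a gap between the two profiles points to which admissions variables deserve a closer look.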
In addition, some simple parametric statistics can be applied. For example, the Pearson correlation between each prerequisite element and the PANCE scores would provide helpful insight, though it only speaks to the strength of the relationship. Ideally, you might find a linear relationship for the variables that strongly correlate with PANCE. This process can also be applied retrospectively to previous graduating classes. The resulting data might provide evidence that the points assigned to specific prerequisite requirements are overinflated.
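A minimal, self-contained way to compute those correlations is sketched below. The cohort data are fabricated, and the `pearson` helper is written out by hand so no statistics package is required.

```python
import math

def pearson(xs, ys):
    """Sample Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Fabricated cohort data: one value per graduate, in the same order
pance   = [520, 480, 455, 430, 395, 380]
sci_gpa = [3.8, 3.6, 3.5, 3.4, 3.1, 3.0]
gre_pct = [60, 75, 55, 70, 50, 65]   # deliberately noisy for contrast

print(f"science GPA vs PANCE: r = {pearson(sci_gpa, pance):.2f}")
print(f"GRE pct     vs PANCE: r = {pearson(gre_pct, pance):.2f}")
```

In this fabricated example, science GPA tracks PANCE closely while the GRE percentile does not; with real cohorts, a persistently weak correlation for a heavily weighted admissions variable is exactly the kind of evidence the standard asks for.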
Course Outcomes and PANCE Performance
To understand how course outcomes relate to PANCE performance, look at more granular information, such as organ-system or task-area performance on the PANCE. One place to start is the overall grade profile of students who did not pass the PANCE on their first attempt, looking for trends across each student's academic record in the program. Was there a preponderance of lower grades among these individuals? What was their performance in specific basic medical science or clinical science courses?
For a macro approach, begin by identifying organ-system areas where the program performs below the national level, then identify which blueprint topics are covered in each respective course. Low performance tied to a course might result from a change within it, such as a new instructor. Or perhaps the program has not mapped its curriculum to the NCCPA blueprint for some time, and topics have been dropped from individual courses.
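One way to sketch that macro comparison is to flag the organ-system areas where the program's mean performance sits below a national benchmark, ordered by the size of the gap. All figures below are fabricated for demonstration; real values would come from the program's NCCPA performance report.

```python
# Hypothetical illustration: flag NCCPA organ-system areas where the
# program's mean performance falls below a national benchmark.
national = {"Cardiovascular": 72, "Pulmonary": 70, "Endocrine": 68,
            "Musculoskeletal": 71, "Neurologic": 69}
program  = {"Cardiovascular": 74, "Pulmonary": 66, "Endocrine": 69,
            "Musculoskeletal": 65, "Neurologic": 70}

# Systems performing below the national level, largest deficit first
below = sorted(
    ((area, program[area] - national[area]) for area in national
     if program[area] < national[area]),
    key=lambda pair: pair[1],
)
for area, gap in below:
    print(f"{area}: {gap:+d} vs national")
```

The flagged areas then become the starting point for the blueprint-mapping review described above.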
A few simple parametric statistics can be employed here as well, such as the Pearson correlation between each of the program's courses and the PANCE scores. Again, this only speaks to the strength of the relationship. Ideally, you would find a linear relationship between PANCE scores and performance in a specific course; if a course is believed to be a good predictor of future PANCE success, you can expect its aggregate performance to correlate strongly with PANCE performance.
Program faculty must take an investigative approach to each of these elements in relation to the PANCE. Bringing Clarity to the SSR: Part 2 presents a method for analyzing summative exam results as they relate to PANCE performance.