
Plan: Performance Criteria

For each measure, a performance criterion will be used to determine the level of performance necessary to ascertain whether student performance on the measure indicates that the program outcome has been achieved. Not all students in a program will perform perfectly on every measure, so program faculty must identify a threshold above which they will be satisfied that, on the whole, students who graduate from the program possess the knowledge or skill specified in the outcome.

Performance criteria must be identified before assessment data are collected and analyzed. When setting performance criteria, it can be tempting to set unreasonably high “nothing but the best” standards or unreasonably low “guaranteed to show success” standards. Both practices are self-defeating. Over time, it is far more beneficial to a program and its students to set reasonable expectations and work toward meeting them.

Avoid setting a performance criterion stating that 100% of students will achieve at a high level. When tempted to set the threshold at 100%, consider the following scenario: if even a single student in a large program did not meet high expectations on the measure, would you conclude that your program graduates do not possess the knowledge or skill of the outcome? Probably not. Think of a reasonable standard, and set the threshold at that level.

There are cases when a program needs all students to demonstrate skills and knowledge at a minimum high standard. In these instances, a criterion of 100% of students is appropriate. An example would be a nursing program requiring all graduates to demonstrate minimum competence with safety and health standards.

When setting a criterion, consider (a) the proportion of students a program can reasonably expect to succeed and (b) the level of mastery necessary to demonstrate that the intended skills and knowledge have been attained. Below are two examples that can be considered reasonable criteria given their contexts:

  • An engineering program expects 80% of students to score a four or higher (on a scale of 1-5) on rubrics focused on electrical engineering skills and knowledge. In this situation, past student performance can help determine what is reasonable to expect and can be used to measure changes over time.
  • An aviation program requires 100% of students to meet a minimum score of four or higher (on a scale of 1-5) on rubrics focused on piloting skills and knowledge. In this situation, the guiding question is whether student performance will lead to passing national exams and performing as a safe, competent pilot.

Programs that set performance criteria so low that they are assured of meeting their outcomes present a number of issues. Unreasonably low standards deprive faculty in those programs of the opportunity to identify strengths and weaknesses in their students’ performance, thus depriving present and future students of the benefits of program improvements that might otherwise occur. The low standards communicate to current and potential students that the faculty have low expectations for them. A program that establishes low expectations for student performance may not push students to perform at their maximum potential and may not attract the most qualified applicants.

A performance criterion is written as a statement indicating that some percentage of students will perform at or above a certain level on the measure. Examples: 

  • 80% or more of students will earn a grade of B or higher on the final exam.
  • 75% or more of students will earn a rating of “Meets Expectations” or better on the research paper.
  • 90% or more of student papers will be evaluated at a level 3 or higher using the VALUE rubric for Ethical Reasoning.
  • 85% of alumni survey respondents will report that they are currently employed in a field that is “related” or “closely related” to their degree program.
  • 80% of exit survey respondents will report that the BS JPS program contributed “Quite a Bit” or “Very Much” to the development of their critical thinking skills.
  • 75% of sampled papers reviewed will be evaluated at a level of “Satisfactory” or higher using a faculty-developed rubric.
  • 80% of doctoral dissertations will receive a rating of “Very Good” or “Outstanding” for methods using the Lovitts (2007) rubric for [academic discipline].
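Each of these statements reduces to the same check: compute the percentage of students at or above the threshold and compare it to the target. A minimal sketch in Python (the scores, threshold, and target below are hypothetical, not drawn from any actual program):

```python
def criterion_met(scores, threshold, target_pct):
    """Return True if at least target_pct percent of scores are at or above threshold."""
    at_or_above = sum(1 for s in scores if s >= threshold)
    return 100 * at_or_above / len(scores) >= target_pct

# Hypothetical rubric scores (1-5 scale) for a cohort of ten students.
rubric_scores = [5, 4, 4, 3, 5, 4, 2, 4, 5, 3]

# Criterion: 80% or more of students score a four or higher.
print(criterion_met(rubric_scores, threshold=4, target_pct=80))
# → False (7 of 10 students, or 70%, scored at or above 4)
```

The same function evaluates any of the criterion statements above once the scale and threshold are substituted in.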

Course grades and course completion are not appropriate for use in performance criteria. 


Employing Rubrics: We recommend using the rubrics presented in Lovitts’ (2007) work on the assessment of doctoral dissertations. Her work with doctoral faculty at institutions from across the U.S. yielded rubrics for a variety of graduate disciplines that describe the characteristics of the elements of a dissertation (e.g., literature review, methods, analysis) at four levels: Outstanding, Very Good, Acceptable, and Unacceptable. The rubrics can also serve as models for rubrics used to evaluate master’s theses, applied or performance projects, or work in other disciplines. Such a rubric-based review is distinct from the traditional defense process, and faculty may or may not choose to share the results of individual reviews with their students. Some programs have found it useful to share rubrics with entering graduate students as a means to inform them at an early stage about expectations regarding the quality of their graduate work. For large programs, it is not necessary to review and evaluate every thesis, dissertation, or project; it is acceptable to review a representative sample of student work. Programs that use rubrics to evaluate the quality of theses or dissertations will write a performance criterion indicating that a percentage of students will earn a rating of Acceptable or better on the element that relates directly to the outcome.
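For large programs, drawing the representative sample described above can be as simple as a random selection of student work. A brief sketch, assuming work is tracked by identifier (the identifiers and sample size here are hypothetical; fixing the seed lets the same sample be re-drawn later for audit):

```python
import random

# Hypothetical identifiers for 60 dissertations completed this cycle.
dissertation_ids = [f"diss-{n:03d}" for n in range(1, 61)]

# Fix the seed so the identical sample can be reproduced on demand.
random.seed(2024)

# Review a 25% sample (15 of 60) rather than every dissertation.
sample = random.sample(dissertation_ids, k=15)
print(len(sample))  # → 15
```

Programs may prefer stratified selection (e.g., by subfield or cohort year) when a simple random draw could miss important subgroups.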

There are several important guidelines to consider when identifying appropriate performance criteria for your outcomes:

  1. The performance criterion must be directly related to the measure. If the measure is an exam, the performance criterion will be a threshold of performance on the exam. If the measure is a survey item, the performance criterion will be a threshold of respondents’ ratings on that particular item.
  2. Write performance criteria in this format: “XX% of students will earn a grade/rating of YY or higher on the [name of exam/project],” or “XX% of students will perform at or above expectations on the [licensure exam, dissertation] based upon the faculty-developed rubric,” or “XX% of respondents will report that [use scale points from survey item].”
  3. Course grades and course completion are not appropriate for use with performance criteria. As with measures, it is important to focus on the specific exam, project, etc., that will be used to measure student learning on the outcome of interest.
  4. Performance criteria related to the thesis or dissertation must reflect a standard other than passing on the first attempt. The master’s thesis and doctoral dissertation are excellent measures of student learning, but they can present a challenge for faculty writing performance criteria. Many programs set criteria stating that a percentage of students will successfully defend the thesis or dissertation on the first attempt. On its face, this seems a suitable approach. However, most graduate faculty support and closely supervise their students’ thesis and dissertation work and do not schedule the defense until the work is satisfactory. When this is the case, a performance criterion based on the success rate of first-time defenses is an artificial threshold, and the program has guaranteed that it will meet the outcome. This practice also deprives programs of the opportunity to examine differences in their students’ performance and identify opportunities for improvement. Because these measures represent the culmination of a student’s program of study, they should be evaluated at specific levels of achievement, and faculty-developed rubrics are the best resource for setting performance criteria on them. Completion is not sufficient in itself.

Assessment Handbook

To assist units in the assessment planning process, we created a handbook: Effective Assessment Planning, Reporting, and Decision Making.  Please refer to this handbook as you create your assessment plans and reports. To access this handbook, please authenticate using your ASURITE.

Assessment Portal

The following link will open the UOEEE Assessment Portal where all assessment plan development and reporting activities take place.