
Measures & Methods


Note: Assessments must include only students in the specific program being assessed. Courses often enroll students from more than one program, and results can be skewed if program populations are not separated for analysis. Isolating the student roster for each program is possible for every program, though it may require assistance; it may be necessary to involve a UOEEE college delegate or designee. Because accurate results depend on it, all programs are required to work with whoever is necessary to report on program participants only.

To start, identify at least two measures for each outcome. The first measure must be a direct measure, and the second can be direct or indirect.

A direct measure is one in which students demonstrate their learning through a performance of some kind. Direct measures include student artifacts such as digital portfolios, exams, projects, and papers, in which the students themselves demonstrate their knowledge or skill. An indirect measure is one that provides information from which we can draw inferences about student learning. Indirect measures do not call on students to demonstrate their knowledge or skill; instead, they rely on reported information from which student skill can be inferred. Surveys and employment data are the most common indirect measures.

Note: The information collected in the Measures and Methods section of the UOEEE New Assessment Plan portal will be used to help complete the Assessment Methods and Measures sections of new program proposals submitted to the Office of the University Provost and subsequently to the Arizona Board of Regents for approval.

Examples of direct and indirect measures are shown below.

Direct measures:

  • Digital portfolios/capstones (project/paper)
  • Design projects
  • Standardized tests (ETS field tests, for example)
  • Practical clinical assessments
  • Presentations/oral defenses
  • Artistic creations or performances
  • Classroom exams or quizzes
  • Classroom discussions
  • Classroom/homework assignments
  • Online discussion threads
  • Course projects
  • Licensure/certification exams
  • Papers (research, term, creative, etc.)
  • Internships or practicums
  • Master’s theses or doctoral dissertations

Indirect measures:

  • Student surveys and focus groups
  • Exit surveys and interviews
  • Alumni surveys and interviews
  • Employer surveys and interviews
  • Job placement data
  • Admission to graduate/professional programs
  • Course evaluations


There are several important guidelines to consider when identifying appropriate measures for your outcomes:

  • Align the measure with the outcome. Ensure that the measure you are writing directly illuminates the outcome you are exploring. If the outcome intends to assess writing skills, for example, a direct measure such as a classroom writing sample or an indirect measure drawn from a survey can be used to assess students’ writing abilities.
  • Utilize rubric items when possible, as opposed to full grades in a course or completion of an assignment, course, or program. Rubrics created at the skill or knowledge level, when coupled with analytic scoring, allow programs to identify the instructional areas that challenge students most. The first matrix below is a simple example of using a matrix to measure quantitative literacy skill levels among students in the hypothetical BS JPS program. These results can then be fed into the second matrix below, “Analytic Scoring of Course Level Rubrics,” to create a course-level portrait of students’ strengths and weaknesses. This information is crucial to guiding the continuous improvement efforts that can have the greatest positive impact on student outcomes.





While using a course critical to students’ success in a program, important knowledge and skills can be evaluated using rubrics with analytic scoring. Through rubric analytic scoring at this detailed level, program faculty can separately evaluate students’ knowledge and skills as they relate to specific program outcomes. Such a rubric permits faculty to give feedback (and grades) for each of the separate components. The same approach can be used at the course level and easily aggregated to program-wide levels. We will see later that this approach can also yield rich assessment information that can be used to identify specific program strengths and weaknesses, guide continuous improvement efforts, and measure development over time. (Note: 3.8 was chosen in the examples above because it is greater than 75% of a 1–5 scale.)

Rubric with Analytic Scoring
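As an illustrative sketch only (the course, rubric criteria, and scores below are hypothetical), analytic scoring can be aggregated per criterion and compared against a target such as 3.8 on a 1–5 scale to spot the instructional areas that need attention:

```python
# Hypothetical analytic rubric scores (1-5 scale) for one course.
# Each row holds one student's scores on three rubric criteria.
scores = [
    {"interpretation": 4, "calculation": 3, "communication": 5},
    {"interpretation": 5, "calculation": 2, "communication": 4},
    {"interpretation": 4, "calculation": 3, "communication": 4},
]

TARGET = 3.8  # greater than 75% of a 1-5 scale, as in the examples above

def criterion_means(rows):
    """Average each rubric criterion separately (analytic scoring)."""
    criteria = rows[0].keys()
    return {c: sum(r[c] for r in rows) / len(rows) for c in criteria}

means = criterion_means(scores)
for criterion, mean in means.items():
    status = "meets target" if mean >= TARGET else "needs attention"
    print(f"{criterion}: {mean:.2f} ({status})")
```

Because each criterion is scored separately, the same per-criterion means can be rolled up from course level to program level simply by pooling the rows across course sections.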

Some specialized accreditors provide specific learning outcomes that institutions must measure. Although the language and format of those mandated outcomes may not adhere to our guidelines, you should use the specific language provided by the specialized accreditation agency. The only time you may need to restate an external standard would be to focus on the student if the standard is focused more on resources or program operations. Additionally, please make it clear when an outcome comes directly from an accreditor when designing your assessment plan.

  • Include at least one direct measure. Student skills must be directly evidenced at least once for each outcome, so that the assessment committee has a direct record of student performance. You will notice publications/presentations are listed as both direct and indirect. If the committee has reviewed the student publication or presentation itself, this counts as a direct measure; if the committee receives only a report of approval or acceptance by a journal or conference, this alone is an indirect measure. Some measures, such as certificates and licensures and internship performance, could constitute direct or indirect measures depending on how the demonstration of the skill is assessed: how much information does the committee have on the individual student’s performance or contribution (e.g., rubric, grade, supervisor evaluation)?
  • Avoid creating additional tests or other assessment activities simply to satisfy your assessment data collection needs. It should be possible to use rubrics with digital portfolios, projects, exams or other measures of student learning that already occur as part of your existing instruction and testing activities. If you have difficulty identifying appropriate measures for an outcome, you may want to consider whether students are being adequately tested on the outcome – or whether the outcome is an appropriate one for your program. If the outcome is an important one but is not adequately measured, program faculty will need to identify appropriate measures.
  • Course grades are not appropriate measures of student learning. It is appropriate to use the grade on a specific exam, project, etc. that specifically measures student learning on the outcome. Course grades are based on the overall satisfaction of course requirements rather than performance on a specific program-level outcome. Those course requirements typically include several course-level outcomes that are likely related to more than one program outcome. Course grades frequently include extra credit for attendance, class participation, or other things unrelated to program outcomes. Course grades alone do not provide specific information about the concepts mastered by students or those concepts that proved challenging – important information for faculty to consider if they want to improve student learning over time.
  • Course completion is not an appropriate measure of student learning. Avoid using the completion of a single course or block of courses as a measure. The issues are the same as with course grades. Completion of a capstone (thesis, dissertation, etc.) or other works are also not appropriate; these are layered assignments that require a deeper examination via a rubric or more precise measure on a particular section. We are looking to assess the specific skill listed in the outcome for each measure, such as the analysis or writing ability.
  • Identify a specific artifact and/or items within an artifact. Rather than saying “tests,” say, “Final exam in JPS 428, Senior Capstone.” Rather than “research papers,” say, “Research paper in JPS 393, Social Issues in Law Enforcement.” By identifying a specific exam or assignment in a specific course, the program can identify instructional areas that challenge students most and focus improvement efforts on these areas. For surveys, indicate the specific item(s) that will be used to measure the outcome. For example, “Exit survey item that asks the extent to which the BS JPS program helped students to develop their analytical thinking skills.” Otherwise, you may be leaving your data collection to chance and fail to collect important information about your students’ learning.
  • Don’t write a long description of the measure. It is not necessary to describe the content of an exam or assignment, a rationale for its inclusion in your assessment, or the scoring method you will use. This level of detail is appropriate to record in any program or departmental notes or minutes you will maintain. For your assessment plan, you only need to list the specific measure (final exam in [course ID, course name], senior capstone paper, oral presentation of JPS-301 [course ID, course name] project, dissertation, etc.).
  • Do not rewrite the outcome as a measure. The measure is meant to state the student work used as evidence for the assessment process. If you feel the need to restate an outcome in your measure, consider refining the scope of your outcome. Assessment plans can become vague or unaligned when skills listed in the measure do not correspond directly with the outcome skill.
  • Don’t combine multiple measures as one. Avoid saying, “exams and assignments in JPS-442.” You may decide to combine the scores for multiple quizzes or homework assignments, to identify a specific subset of test items that relate to the outcome, or to identify a specific subset of survey items that relate to the outcome. It is appropriate to do so, and you may want to describe your measure as an aggregate (e.g., mean score) on the quizzes or items used.
  • Sample only when necessary. When a large number of student artifacts requires additional faculty time and attention beyond regular classroom duties, as when a rubric is applied post hoc to assess outcomes, it is acceptable to randomly sample the artifacts. The minimum sample size in this instance can be debated, but if a program can assess at least 20, and preferably 40 or more, artifacts, patterns should begin to appear in the results. This approach, however, should only be used when the assessment is not part of every student’s classroom performance.

When data for all students are available for a program population, there is no reason to sample.  Include assessment results from all participating students whenever possible to maximize data reliability and quality decision making.  For guidance on small population sampling, contact UOEEE.
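The sampling guidance above can be sketched as follows; this is a minimal illustration (the function name, artifact labels, and the 40-artifact target are assumptions drawn from the guideline, not a prescribed procedure):

```python
import random

def sample_artifacts(artifact_ids, sample_size=40, seed=None):
    """Randomly sample artifacts for post-hoc rubric scoring.

    When the population is at or below the target sample size,
    every artifact is assessed -- there is no reason to sample.
    """
    artifacts = list(artifact_ids)
    if len(artifacts) <= sample_size:
        return artifacts
    return random.Random(seed).sample(artifacts, sample_size)

# Example: 120 hypothetical capstone papers; sample 40 for scoring.
papers = [f"JPS484-paper-{i:03d}" for i in range(120)]
chosen = sample_artifacts(papers, seed=42)
print(len(chosen))  # 40
```

Fixing a seed makes the sample reproducible for later review; with a small program population the function simply returns every artifact, consistent with the "no reason to sample" guidance.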


Assessment Measures and Resources

The most commonly used assessment tools are exams, portfolios, rubrics, and university data (e.g., surveys, course evaluations).

  • Rubrics: For any subjective assessment (portfolios, papers, capstones, dissertations, etc.), rubrics are the most common method for determining student attainment of outcomes. When designing a rubric, there are a few considerations. First, will the work be scored holistically or analytically? A holistic rubric results in a single score, so the criteria being assessed consist of related properties that are judged together. An analytic rubric consists of criteria that are assessed and scored separately, resulting in a composite score. The other element to consider is whether the rubric consists of checklists, ratings, or descriptions. A checklist rubric consists of checkboxes that indicate whether or not a criterion is present. A rating scale rubric determines the level to which a criterion is present in a work. A descriptive rubric keeps the ratings but replaces the checkboxes with spaces where brief descriptions can be written to explain each rating. For programs that want to include outcomes that may seem ambiguous or difficult to measure, consider using AAC&U’s Valid Assessment of Learning in Undergraduate Education (VALUE) rubrics. The rubrics were developed as part of a large FIPSE-funded project; more about the project can be found at http://www.aacu.org/value/ . The rubrics can be downloaded, free of charge, at https://www.aacu.org/value-rubrics . Although the rubrics were developed for undergraduate education, they can also be used to measure graduate work. Numerous examples of rubrics can also be found through the Association for the Assessment of Learning in Higher Education: AALHE Sample Rubrics.
  • Exams: Whether objective or subjective, exams can be used as outcome indicators at the completion of a course. When designing an exam, both for a course and for program assessment, it can be helpful to design a test blueprint. A blueprint helps ensure that all learning goals are represented and that a balance between conceptual understanding and thinking skills is struck. It also makes writing the exam questions easier, as it is clear what knowledge and which skills a student must demonstrate to meet each learning outcome. Additionally, the test blueprint makes it easier during review to map questions back to their appropriate outcomes, and it allows for an in-depth review of the skills demonstrated in each section of the test.
  • Portfolios: ASU has become a national leader in the use of digital portfolios, and they are an effective assessment tool because they allow students to display a wide variety of learning and skills. Portfolios can show the value added by a student’s education, as they can demonstrate development across the program. Additionally, portfolios require students to reflect on their work when selecting it for inclusion, allowing each student to choose how to document achievement of the learning outcomes. This process further involves the student in the assessment process and allows for a holistic review of learning by students and faculty.
  • University Data: Though indirect, it is important to consider the attitudes, dispositions, and values students assign to their education and learning outcomes. The best methods for collecting this information are the graduating and alumni surveys and the course evaluations. These data reflect students’ views of their education as a whole, in addition to students’ behaviors after attaining the program’s learning outcomes. They can provide new insight into growing fields and expanded learning opportunities to explore for current students.
  • Link to UOEEE Annual Survey: https://analytics.asu.edu/teams/uoeee/universitysurveys/Pages/Home.aspx
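The test blueprint described under Exams above can be sketched as a simple mapping; the outcome names and item numbers here are hypothetical, and the check simply confirms that every exam item maps back to an outcome:

```python
# Hypothetical test blueprint: map each program outcome to the exam
# items that measure it, then check coverage and balance.
blueprint = {
    "quantitative literacy": [1, 2, 7, 8],
    "ethical reasoning":     [3, 4],
    "written communication": [5, 6, 9, 10],
}

def coverage(bp, total_items):
    """Count items per outcome and list any exam items left unmapped."""
    mapped = {item for items in bp.values() for item in items}
    unmapped = sorted(set(range(1, total_items + 1)) - mapped)
    per_outcome = {outcome: len(items) for outcome, items in bp.items()}
    return per_outcome, unmapped

per_outcome, unmapped = coverage(blueprint, total_items=10)
print(per_outcome)
print(unmapped)  # [] -> every item maps back to an outcome
```

The per-outcome counts make imbalances visible at a glance (here, only two items measure ethical reasoning), which is exactly the review the blueprint is meant to support.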

Assessment Handbook

To assist units in the assessment planning process, we created a handbook: Program Assessment Handbook.  Please refer to this handbook as you create your assessment plans and reports. To access this handbook, please authenticate using your ASURITE.

Assessment Portal

The following link will open the UOEEE Assessment Portal, where all assessment plan development and reporting activities take place.