Interpret Results

Summarizing/Analyzing

Note: It is imperative that assessments include only students in the specific program being assessed. This is possible for all programs, though it may require seeking assistance. Courses often enroll students from more than one program, and results can be skewed if program populations are not separated for analysis. It may be necessary to involve a UOEEE college delegate or designee to help isolate student rosters for each program being assessed; because of its importance, all programs are required to work with whoever is necessary to report on program participants only.

Determine the best analysis for your data. The most helpful quantitative information will be tallies, percentages, overall/sub-scores, or averages. Your assessment plan should detail the type of analysis you will present to evaluate the year’s cohorts against the performance criteria; additional analysis may help explain your findings or allow a deeper look at issues of concern. Qualitative summaries, though more difficult to use for performance criteria, may also produce interesting findings when grouped by issues, themes, accomplishments, and other areas under investigation. A qualitative assessment may provide the most useful data, however, when an issue is detected or a performance criterion has not been met and you are interested in determining why students performed at unexpected levels.
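As an illustration only, a basic quantitative summary of this kind can be produced directly from a roster of scores. The sketch below uses a hypothetical list of scores, a hypothetical 75-point cutoff, and a hypothetical 80% performance criterion; none of these values come from any program's plan.

```python
# Minimal sketch (hypothetical data): tallies, percentages, and averages from scores.
scores = [88, 72, 95, 67, 81, 90, 78, 84]   # one score per student in the program
cutoff = 75                                  # hypothetical cutoff for "meets expectations"

tally = sum(1 for s in scores if s >= cutoff)   # students meeting the cutoff
percentage = 100 * tally / len(scores)          # percent of students meeting the cutoff
average = sum(scores) / len(scores)             # cohort average score

print(f"{tally} of {len(scores)} students ({percentage:.0f}%) met the cutoff; average {average:.1f}")
print("Criterion met" if percentage >= 80 else "Criterion not met")   # hypothetical 80% criterion
```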

If student performance met your expectations, consider components of the program (or of your assessment processes) that you believe contributed to this result. What does this tell you about student learning in this program? You may discuss a recent program change that you believe helped to improve student learning related to the measure. You might decide to focus on ongoing aspects of the program that are particularly strong and should be highlighted. You might also believe that the assessment measure(s) used were particularly well-suited to the outcome and provided high-quality information. Conversely, you might be less than satisfied with student performance even though the criterion was met, and conclude that one or more of your measures or performance criteria prevented you from identifying that weakness.

If your data indicated that student performance did not meet your expectations on a measure, consider components of the program (or of your assessment processes) that you believe contributed to this result. Are there foundational concepts or theories that students did not adequately apply near the end of their program? If so, at what point in the curriculum could that content have been more strongly emphasized? Was a standardized test used as one of your measures not sufficiently related to your curriculum to adequately measure your students’ knowledge? Are the admissions standards for your program too lenient? You might be pleased with your students’ performance on the measures used, and now realize that your performance criterion was set at an unrealistically high level. Program faculty, as the experts on the curriculum, are the best suited to judge why student learning on a measure - or for the outcome - did not meet expectations.

Link to Assessment Reports:  https://analytics.asu.edu/teams/uoeee/universitysurveys/Pages/Home.aspx

What about mixed results? If one performance criterion was met and the other was not, you will need to interpret the information available in order to determine whether graduates possess the knowledge or skill of the outcome. Consider the following scenario:

  • Measure 1 is supervisor evaluations from an internship experience that requires students to apply their skills in a real-world environment. The performance criterion states that 80% of students will earn an overall rating of ‘Meets Expectations’ or ‘Exceeds Expectations’ from their supervisors. Your data indicate that 85% of the students received overall ratings of ‘Meets Expectations’ or ‘Exceeds Expectations.’
  • Measure 2 is an exit survey that asks how well prepared students believe they are for employment in the profession. The performance criterion states that 85% of respondents will report that they believe they are “Well Prepared” or “Very Well Prepared” for employment in the field. Eighty percent of respondents reported that they felt “Well Prepared” or “Very Well Prepared” for employment in the field.
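To make the mixed result concrete, the two measures above can be checked against their criteria side by side. The percentages come from the scenario; the code itself is only an illustrative sketch, not part of any required reporting process.

```python
# Illustrative check of the scenario above: each measure's result vs. its criterion.
measures = {
    "Measure 1: internship supervisor evaluations": {"result": 85, "criterion": 80},
    "Measure 2: exit survey on preparedness":       {"result": 80, "criterion": 85},
}

for name, m in measures.items():
    status = "met" if m["result"] >= m["criterion"] else "not met"
    print(f"{name}: {m['result']}% vs. {m['criterion']}% criterion -> {status}")
# Measure 1 is met (85 >= 80) while Measure 2 is not (80 < 85), so faculty
# judgment is needed to decide whether the outcome as a whole was achieved.
```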

You might believe that the internship is strongly related to the professional skills needed for entry-level positions in the field, and good supervisor evaluations indicate that the students are well-prepared for employment. If so, you might decide to assign greater weight to the evaluations than to the survey responses and conclude that the outcome was met.

Or, you might know from previous experience that the internship supervisors give high ratings to everyone, even students that you know performed poorly. In this case, you might assign greater weight to the survey responses than to the internship evaluations and conclude that the outcome was not met.

These situations require your professional judgment as faculty. There is no ‘right’ answer. The important thing is for program faculty to interpret the data about student learning and determine whether students have satisfactorily demonstrated the knowledge or skill of the outcome.

If there are ever doubts, consider additional indirect measures. If a performance criterion is not met on a writing sample, for example, consider looking at performance in a prerequisite course. Is there a trend of lower performance, or is this an isolated incident? Looking for patterns in student performance may help identify additional issues for improvement, or may help you determine that the measure used in the given year is not the most precise indicator of outcome achievement. Wherever possible, especially if results are in question, corroborate your findings with related data points.

For your interpretations, it may also help to review other benchmarks and standards beyond the performance criteria. These may shed new light on issues, or may provide a more accurate criterion against which the outcome can be assessed. Some additional types of standards are listed below:

  1. Value-added benchmarks: comparing current scores with scores on the same or a similar assessment measure from previous cycles can show how learning gains or losses have developed over time (a brief sketch follows this list).
  2. Strengths and weaknesses standards: analyzing the sub-scores of an assessment against one another can help illuminate the areas of struggle and success for many students.
  3. Best practice standards: think about the best performance possible and determine what elements may be lacking between the recorded student abilities and the ideal student performance. This practice of benchmarking is also helpful when all performance criteria have been met and further learning achievements are sought.
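As a small, purely illustrative example of a value-added benchmark, the same (hypothetical) measure can be averaged for each cohort and compared across years. The cohort averages below are invented for the sketch.

```python
# Illustrative value-added comparison: average scores on the same hypothetical
# measure across several cohorts, to see how learning gains or losses develop.
cohort_averages = {2021: 71.4, 2022: 74.0, 2023: 78.2}   # made-up cohort averages

years = sorted(cohort_averages)
for prev, curr in zip(years, years[1:]):
    change = cohort_averages[curr] - cohort_averages[prev]
    print(f"{prev} -> {curr}: {change:+.1f} points")   # positive values indicate gains
```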

Who will use your findings?

When interpreting your results, successes, and strategies for improvement, consider how best to present and disseminate the information to your audience. What is the best way to present data to the faculty, staff, and administrators who will be focused on continuously improving the program but may also have their own likes and dislikes within it? Decisions about directions for improvement are best made in consensus with other members of the program, where multiple perspectives can be considered.

How about stakeholders of the program who may look to you for accountability? Students are the most direct stakeholders and will look to a program for its success. Consider how you will answer students’ concerns and criticisms, even as their own needs and priorities shift as they move through the program.

Many members of the university faculty find assessment reporting to be unsettling. Please be assured that the purpose of assessment is not to tally the number of programs that met (or did not meet) one or more of their outcomes.  The purpose of assessment is not to penalize programs that may not have met all their outcomes or to reward those who did. The purpose is to provide an honest and accurate look at where we believe our students fully meet our learning expectations, where we’ve identified room for improvement, and the strategies we’ve identified to improve student learning.

Make Changes

If the data indicate that program graduates do not possess the knowledge or skill of the outcome, program faculty should examine the factors they believe contributed to this result, and identify any corrective measures to be taken. Some examples are:

  • Addition of course content, tutorials, assignments, or other activities designed to reinforce learning on the knowledge or skill of the outcome;
  • Change in course sequence or prerequisite;
  • More stringent admissions standards; and
  • Others identified by program faculty.

If the assessment data indicate that program graduates possess the knowledge or skill of an outcome, program faculty may determine that they have nonetheless identified opportunities for improvement in course content, instructional methods, assessment processes, or other program components that will be implemented during the next assessment cycle.  Institutional and specialized accrediting bodies look for assessment results to be used to guide continuous improvement efforts.  In addition, a culture of continuous improvement supports University driven quality initiatives and allows programs to articulate the value they provide to students, employees and their communities. 

If assessment results are nearly 100% positive for all measures, then the measures used are not sensitive enough to identify where students struggle and would benefit from continuous improvement efforts. When this occurs, the development of new measures, particularly measures using rubrics and analytic scoring, is expected in order to build and maintain a culture of continuous improvement at Arizona State University.
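For instance, analytic scoring with a rubric yields a sub-score for each criterion, and those sub-scores can be averaged to see where students struggle most. The rubric criteria, the 1–4 scale, and the scores in the sketch below are all hypothetical.

```python
# Illustrative sketch: averaging analytic rubric sub-scores per criterion to see
# where students struggle most. Criteria names and scores are hypothetical.
rubric_scores = {
    "Thesis and argument": [4, 3, 4, 2, 3],
    "Use of evidence":     [3, 2, 2, 3, 2],
    "Organization":        [4, 4, 3, 4, 3],
    "Mechanics and style": [3, 3, 4, 3, 4],
}   # scores on a 1-4 scale, one entry per student

averages = {criterion: sum(s) / len(s) for criterion, s in rubric_scores.items()}
for criterion, avg in sorted(averages.items(), key=lambda kv: kv[1]):
    print(f"{criterion}: average {avg:.2f}")
# The lowest-scoring criterion is a natural candidate for improvement efforts.
```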

No matter what the proposed revision may be, there are a number of simple steps that will help ensure the intervention’s effectiveness:

  1. Develop rubrics and use analytic scoring in order to identify instructional areas where students struggle most and would benefit most from continuous improvement efforts.
  2. Create a timetable for implementation. An intervention may be a simple modification or a multi-tiered rollout of services and instruction. A timetable of revisions sets a clear objective and helps the program determine when improvements may begin to be detected.
  3. Inform all relevant parties, with clear and precise directions about their roles in the intervention. All members should be aware of the direction sought as well as the role they will play toward achieving the outcome.
  4. Include the outcome in the subsequent assessment cycle. This will allow faculty to reexamine the issues related to student learning on that outcome.

Assessment Handbook

To assist units in the assessment planning process, we created a handbook: Effective Assessment Planning, Reporting, and Decision Making.  Please refer to this handbook as you create your assessment plans and reports. To access this handbook, please authenticate using your ASURITE.

Assessment Portal

The following link will open the UOEEE Assessment Portal where all assessment plan development and reporting activities take place.