
Data Collection

Overview

Whenever possible, do not create separate assessment instruments to measure program outcomes. Rather, use the processes and instruments already being used in the classroom. It is important, however, that some measures identify the skill and knowledge areas where students struggle most during the program, in addition to the program outcome measures administered to program graduates. Having this full scope of assessment information is crucial for focusing continuous improvement efforts on strengthening student outcomes, and it can also aid program retention. To collect data at the skill and knowledge level, a rubric with analytic scoring is strongly recommended in most circumstances.
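As a concrete illustration of why analytic scoring is recommended, the short Python sketch below averages per-criterion rubric scores to surface the skill areas where students struggle most. The criterion names, score scale, and data are invented for the example:

```python
# Hypothetical analytic rubric scores: one row per student, one score
# per criterion on a 1-4 scale (all names and values are invented).
from statistics import mean

scores = [
    {"thesis": 3, "evidence": 2, "organization": 4, "citation": 2},
    {"thesis": 4, "evidence": 2, "organization": 3, "citation": 1},
    {"thesis": 3, "evidence": 1, "organization": 4, "citation": 2},
]

# Average each criterion across students; low averages flag the skill
# areas where continuous improvement efforts should focus.
criteria = scores[0].keys()
averages = {c: mean(s[c] for s in scores) for c in criteria}
for criterion, avg in sorted(averages.items(), key=lambda kv: kv[1]):
    print(f"{criterion}: {avg:.2f}")
```

A holistic score would hide these differences; analytic scoring is what makes the per-skill comparison possible.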

Assessment Plan Requirements

Academic program assessment should not be thought of as a periodic activity with a finite beginning and end. It is a continuous and ongoing process; each cycle provides information about the degree of success from the previous cycle and informs improvement decisions and activities in the subsequent cycle. 

  • Required Minimum Outcomes: Three outcomes with two measures each.
  • Maximum Outcomes: None, though no more than six are recommended.
  • Provide at least one direct measure (see the Assessment Plan Handbook for more direction).
  • Include and describe both program-specific and general education skills.
  • Include any specialized accreditation or certification requirements.

Program Exclusive Populations

Courses often enroll students from a variety of programs, particularly lower-division courses. When this occurs, students must be separated by program so that assessment findings are not inappropriately muddled. While it may require help from your college delegate and their resources, it is imperative to report findings for the program's own population only. If your college delegate is not able to help separate program students within a course, contact the UOEEE Assessment Team.
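As a minimal sketch of what this separation looks like in practice, the Python snippet below filters a mixed course roster down to one program's students before any findings are computed; the field names and program codes are hypothetical:

```python
# Hypothetical course roster mixing students from several programs;
# only the program of interest should contribute to assessment findings.
roster = [
    {"student_id": "A1", "program": "BS-BIO", "score": 85},
    {"student_id": "A2", "program": "BA-PSY", "score": 91},
    {"student_id": "A3", "program": "BS-BIO", "score": 78},
]

PROGRAM = "BS-BIO"  # the program being assessed (invented code)

# Keep only this program's students so other programs' results
# do not muddle the findings.
program_records = [r for r in roster if r["program"] == PROGRAM]
print(f"{len(program_records)} of {len(roster)} records belong to {PROGRAM}")
```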

Small Population Solutions

There are numerous programs at ASU with assessment populations below 20 students, the preferred minimum for assessment findings to be considered reliable. For these programs, two changes can increase assessment populations. First, add measures that assess students during the program when current outcomes measure only program graduates, as is the case for many ASU programs. For example, a program may have 30 students enrolled with only 10 graduating, making the graduating population too small for annual assessment even though more than enough students are enrolled to meet the 20-observation minimum. This formative data can also be more informative for identifying areas for instructional improvement, because it shows programs where to focus their efforts.

When assessment populations are still under 20 observations, it is possible to combine data from past years to draw findings. Using rolling three-year populations for assessment, many programs will be able to reach the observation minimum. For programs that continue to have fewer than 20 observations in their assessment data set, caution should be used in drawing conclusions; please contact the UOEEE Assessment Team for further guidance on processing program findings.
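A minimal sketch of the rolling three-year approach, with invented year labels and scores, pools the most recent years and checks the 20-observation minimum before findings are drawn:

```python
# Hypothetical per-year observation sets for one outcome measure.
observations_by_year = {
    2021: [3, 4, 2, 3, 4, 3],
    2022: [4, 3, 3, 2, 4, 3, 4],
    2023: [3, 3, 4, 2, 3, 4, 3, 3],
}

MINIMUM_N = 20  # preferred minimum for findings to be considered reliable

# Pool the three most recent years into one assessment population.
recent_years = sorted(observations_by_year)[-3:]
pooled = [obs for year in recent_years for obs in observations_by_year[year]]

if len(pooled) >= MINIMUM_N:
    print(f"n={len(pooled)}: meets the {MINIMUM_N}-observation minimum")
else:
    print(f"n={len(pooled)}: below minimum; interpret with caution "
          "and contact the UOEEE assessment team")
```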

Rubrics and Digital Portfolios

Rubrics are very effective tools for measuring skill and knowledge attainment in many instances, provided they are developed carefully and analytic scoring is utilized. When designing a rubric, there are a few considerations to make. First, will the work be scored holistically or analytically? A holistic rubric results in a single cumulative score, so the criteria being assessed consist of related properties that are judged as a whole. An analytic rubric consists of criteria that are assessed and scored separately, resulting in scores for specific skills and knowledge that can be aggregated to the course, program, department, college, and university levels. The other element to consider is whether the rubric consists of checklists, ratings, or descriptions. A checklist rubric consists of checkboxes that indicate whether or not a criterion was met.

A rating scale rubric determines the level to which a criterion exists; it is preferred for most program assessments because it creates data that assist continuous improvement efforts. A descriptive rubric keeps the ratings but replaces the checkboxes with spaces where brief descriptions can be written to explain each rating. For programs that want to include outcomes that may seem ambiguous or difficult to measure, consider using AAC&U's Valid Assessment of Learning in Undergraduate Education (VALUE) rubrics. The rubrics were developed as part of a large FIPSE-funded project; more about the project can be found at http://www.aacu.org/value/. The rubrics can be downloaded, free of charge, at https://www.aacu.org/value-rubrics. Although the rubrics were developed for undergraduate education, they can also be used to measure graduate work. Numerous examples of rubrics can also be found through the Association for the Assessment of Learning in Higher Education: AALHE Sample Rubrics.
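To make these distinctions concrete, the hypothetical Python sketch below represents the three cell types side by side; the criterion names and rating scale are invented. Note that analytic scoring keeps the per-criterion entries separate, whereas a holistic rubric would record only a single overall score:

```python
from dataclasses import dataclass

@dataclass
class DescriptiveRating:
    """One descriptive-rubric cell: a rating plus a brief justification."""
    criterion: str
    rating: int        # e.g., 1 (beginning) .. 4 (exemplary)
    description: str   # why this rating was assigned

# Checklist rubric: was each criterion met? (booleans only)
checklist = {"cites sources": True, "states thesis": False}

# Rating-scale rubric: to what level does each criterion exist?
ratings = {"cites sources": 3, "states thesis": 2}

# Descriptive rubric: ratings with space for a short explanation.
descriptive = [
    DescriptiveRating("cites sources", 3, "Consistent, minor format errors"),
    DescriptiveRating("states thesis", 2, "Thesis present but unfocused"),
]

# Analytic scoring reports per-criterion scores that can later be
# aggregated; a holistic rubric would record only the single total.
holistic_score = sum(ratings.values())
print(ratings, "-> holistic equivalent:", holistic_score)
```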

ASU has become a national leader in the use of digital portfolios, which are an effective assessment tool allowing students to display a wide variety of learning and skills. Portfolios can show the value added by a student's education, as they can demonstrate development across the program. Additionally, portfolios require students to reflect upon their work before including it, allowing them to choose how to document their achievement of learning outcomes. This process further involves students in the assessment process and allows for a very holistic review of learning for students and faculty.

Identifying General Education Skills

To meet new ABOR general education skill and habit expectations, all new programs, programs undergoing academic program review (APR), and programs choosing to upgrade their assessment plans to meet the new standards must now provide the following information on how general education-specific activities are to be assessed.

Table One: Areas of Knowledge

  • Plans must identify when in a program students will be exposed to each of the nine areas of knowledge: during their general studies courses, within the core curriculum, or during other periods.
  • Programs need to indicate whether an assessment measure will be used to report (most often a direct measure), a proxy indicator (most often an indirect measure), or, in the case of general studies courses, a narrative. A narrative can be used for general studies only if collecting the information is not practical or even possible.
  • It is expected that plans will develop to the point where most areas of knowledge are measured as part of the core curriculum, some by proxy, and few, if any, by narrative.

Table Two: General Education Skills and Intellectual Habits

  • Plans must contain information on the GE skills and intellectual habits, whether they are supported by the program curriculum, supported by an activity such as an internship, or covered in a general studies area.
  • Similarly, programs need to indicate whether an assessment measure will be used, a proxy is the best indicator, or a narrative is the only remaining option.
  • It is expected that plans will develop to the point where all GE skills and intellectual habits are supported by the core curriculum or other measures.

In addition to the information that must be considered in Tables One and Two, the following instructions, taken directly from ABOR Policy 2-210, must be kept in mind while developing undergraduate programs.

From ABOR Policy 2-210: https://public.azregents.edu/Policy%20Manual/2-210%20General%20Education.pdf

  • Evaluation of general education is also part and parcel of the review of the learning objectives of each degree program and those outcomes are reflected in the academic program reviews.
  • Effective assessment depends fundamentally upon measurement and does not rely exclusively on a single project or capstone course. It … will inform curricular refinements and allow faculty & administrators to reconsider programs that do not meet expectations in terms of learned concepts and competencies.
  • Each university will utilize rubrics, based on national standards or locally developed, to gauge whether students master the essential learning outcomes and intellectual qualities that are outlined in the policy. 

 

Areas of Knowledge: Where in the program are these areas addressed?

| Area of Knowledge                          | General Studies Courses | Core Curriculum | Other: IA, Internships, Graduate School, Publications |
|--------------------------------------------|-------------------------|-----------------|-------------------------------------------------------|
| Literature, Fine Arts & Humanities         |                         |                 |                                                       |
| Mathematics/Quantitative Reasoning         |                         |                 |                                                       |
| Social/Behavioral Sciences                 |                         |                 |                                                       |
| Natural Sciences                           |                         |                 |                                                       |
| American Institutions, Economics & History |                         |                 |                                                       |
| Composition, Communication & Rhetoric      |                         |                 |                                                       |
| Ethics and Ethical Reasoning               |                         |                 |                                                       |
| Civil Discourse/Civic Knowledge            |                         |                 |                                                       |
| Global Awareness, Diversity & Inclusion    |                         |                 |                                                       |

Options for each cell: Measure, Narrative, Proxy


Skills and Intellectual Habits: Where in the program are these areas addressed?

| Skill or Intellectual Habit     | General Studies Courses | Core Curriculum | Other: IA, Internships, Graduate School, Publications |
|---------------------------------|-------------------------|-----------------|-------------------------------------------------------|
| Written Communication           |                         |                 |                                                       |
| Verbal Communication            |                         |                 |                                                       |
| Intercultural Competency        |                         |                 |                                                       |
| Reasoning & Evidence            |                         |                 |                                                       |
| Critical Thinking               |                         |                 |                                                       |
| Ideas to Real-World Application |                         |                 |                                                       |
| Civic Engagement                |                         |                 |                                                       |
| Civil Discourse                 |                         |                 |                                                       |
| Lifelong Learning               |                         |                 |                                                       |

Options for each cell: Measure, Narrative, Proxy

Requesting University Data

Three-year reports at the program level are generated annually and loaded onto the UOEEE Assessment Analytics website. If you do not have permission to access these reports, please contact your assessment coordinator to request access to your department's report. Instructors and administrators should also have access to course evaluation data through the Course Evaluation interface. Additional analyses of course evaluation and university survey data can be requested from the UOEEE.

Assessment Tools and Resources

The most commonly used assessment tools are exams, portfolios, rubrics, and university data (e.g., surveys, course evaluations).

  • Rubrics: For any subjective assessment (portfolios, papers, capstones, dissertations, etc.), rubrics are the most common method for determining student attainment of outcomes. The design considerations described under Rubrics and Digital Portfolios above apply here as well: decide whether the work will be scored holistically or analytically, and whether the rubric will use checklists, ratings, or descriptions. For outcomes that seem ambiguous or difficult to measure, consider AAC&U's VALUE rubrics (https://www.aacu.org/value-rubrics); numerous additional examples are available through the Association for the Assessment of Learning in Higher Education: AALHE Sample Rubrics.
  • Exams: Either as an objective or subjective assessment, exams can be used as outcome indicators upon completion of a course. When designing an exam, both for a course and for program assessment, it can be helpful to design a blueprint for the exam (see the sketch after this list). A blueprint helps ensure that all learning goals are represented and that a balance between conceptual understanding and thinking skills is struck. It also makes writing the exam questions easier, as it is clear what knowledge and which skills a student must demonstrate to meet each learning outcome. Additionally, the test blueprint makes it easier during review to map questions back to their appropriate outcomes, and it allows for an in-depth review of the skills demonstrated in each section of the test.
  • Portfolios: As described under Rubrics and Digital Portfolios above, digital portfolios are an effective assessment tool that allows students to display a wide variety of learning and skills, demonstrate development across the program, and reflect on the work they choose to include as evidence of their learning outcomes.
  • Advantages of Using Digital Portfolios: 

    Arizona State University has a digital portfolio system with features that include artifact collection and rubric scoring that can be adapted to the course and program level. Programs are encouraged to utilize the digital portfolio system to help students build their academic repertoires as well as aid in program assessment and continuous improvement. Incorporating rubrics into digital portfolios makes course expectations transparent, allowing students to better understand how levels of performance are determined for a course or program. Furthermore, rubrics utilized within ASU’s digital portfolio system allow faculty, programs, departments, and colleges to create a history of assessment and continuous improvement efforts. 
  • University Data: Though indirect, it is important to consider the attitudes, dispositions, and values students assign to their education and learning outcomes. The best method for collecting this information is through the graduating and alumni surveys or the course evaluations. These data capture students' reflections on their education as a whole, as well as their behaviors after achieving the program's learning outcomes, and can provide new insight into growing fields and expanded learning opportunities for current students.
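As noted in the Exams entry above, a test blueprint maps each question to the outcome and skill it evidences. The short Python sketch below is a minimal illustration of that bookkeeping; the outcome labels, question IDs, and skill categories are invented for the example:

```python
from collections import Counter

# Hypothetical blueprint: each exam question is tagged with the
# learning outcome it evidences and the kind of thinking it demands.
blueprint = [
    {"question": "Q1", "outcome": "LO1", "skill": "conceptual"},
    {"question": "Q2", "outcome": "LO1", "skill": "application"},
    {"question": "Q3", "outcome": "LO2", "skill": "conceptual"},
    {"question": "Q4", "outcome": "LO3", "skill": "application"},
]

required_outcomes = {"LO1", "LO2", "LO3"}

# Check that every learning outcome is represented on the exam.
covered = {q["outcome"] for q in blueprint}
missing = required_outcomes - covered
print("Missing outcomes:", missing or "none")

# Check the balance between conceptual and application questions.
print("Skill balance:", Counter(q["skill"] for q in blueprint))

# During review, map each question back to its outcome.
for q in blueprint:
    print(f"{q['question']} -> {q['outcome']}")
```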

Assessment Handbook

To assist units in the assessment planning process, we created the Program Assessment Handbook. Please refer to this handbook as you create your assessment plans and reports; to access it, please authenticate using your ASURITE.

Assessment Portal

The UOEEE Assessment Portal is where all assessment plan development and reporting activities take place.