Assessment Vocabulary

  • Program: A work unit that contributes to the mission of the University. This could be an academic department, an academic support unit, or any other work unit as defined by the appropriate vice president. Academic departments must complete both Program Evaluation and Student Learning Assessment for their majors. Programs that do not have an instructional mission with students may need to complete only Program Evaluation, not Student Learning Assessment.
  • Program Evaluation: To be completed by ALL programs on campus. Evaluation looks at how well the program is achieving its desired outcomes as outlined in the unit’s Five-Year Plan and as reflected in the program’s mission statement.
  • Annual Program Review (EOY Report): A report to be completed each spring by every program on campus. The request for End-of-Year reports comes either from the Institutional Research Office or from the appropriate vice president. While the content of each annual program review may differ, all reports address how successfully the program met its desired outcomes during the past year. End-of-Year reports typically also include program objectives for the coming year.
  • Student Learning Assessment: Assessment is a means of collecting and evaluating data related to student performance on relevant student learning goals and outcomes. It does not measure an individual student’s progress; rather, it examines the success of a sample of students in demonstrating the skills or knowledge (as defined by the program) related to those goals and outcomes.
  • Student Learning Assessment Plan (SLAP): A coherent plan to measure student learning over a specified time period. The creation of an assessment plan should begin with the Five-Year Plan and may then be altered as needed in response to changes such as curriculum revisions or the results of previous assessments. A copy of any revised SLAP should be sent to the IR Office.
  • Student Learning Goals: A general statement describing what the program hopes students will achieve by completion, e.g., “Students will demonstrate college-level writing skills.”
  • Student Learning Outcomes: Statements elaborating on what specifically a student will do, say, or produce to demonstrate mastery of a learning goal, e.g., “Students will develop a paper that demonstrates understanding of correct word usage, punctuation, and theme development.” Individual courses in a curriculum may also list course-specific student learning outcomes not tied directly to a program’s student learning outcomes, e.g., “Students will demonstrate familiarity with using the XXX software package in completing assignments for this course.”
  • Course Map: A matrix listing either learning goals or learning outcomes on one axis and the courses for the major (or other required experiences) on the other axis. Program directors (e.g., chairs) indicate in which cells (courses) students will learn the required goals/outcomes. Programs may use terms such as “high,” “medium,” or “low” to indicate depth of coverage of the concept, or other categories they develop, such as “introduce, reinforce, advanced.”
  • Assessment Schedule: Similar to a course map, this matrix indicates, cell by cell, when (e.g., fall 2012) each goal or outcome is targeted for assessment. It is recommended that each goal/outcome be measured every 3-5 years.
  • Annual Report on Assessment Progress (ARAP): The ARAP summarizes the assessment measures used during the previous year, the results of student performance and the specific use of the assessment results in modifying the program of instruction.
  • Assessment Strategy: The assessment of each student learning outcome should be designed with a specific plan that answers the following questions:
    • Who is being measured (all majors, a sample of one class, etc.)?
    • Under what conditions will measurement occur?
    • What is being measured?
    • Who will be doing the measurement/evaluation?
    • How well must students perform for “success” to be achieved? (Sometimes called the “Criteria for Success.”)
  • Results: The outcome from measuring students’ performance (e.g., how many students scored at each level above and below the level of success).
  • Use of Results: Sometimes called “closing the loop,” this process involves program members reviewing assessment results and making decisions in two areas:
    • Program improvements or changes based on student performance. For example, if students do not perform well, a program may ask itself, “How can we supplement our instruction to ensure more students learn this?” If students perform too well, a program may ask itself, “How can we increase the challenge for students?” or “How can we strengthen the measures for demonstrating success?” Satisfactory student performance may result in no changes being implemented.
    • Changes to the Student Learning Assessment Plan (SLAP), e.g., adding a goal or student learning outcome.

Other Terms Used When Discussing Evaluation & Assessment

  • Alignment: Shows the consistency in the progression of goals and measures from one level of planning to the next. For instance, a program’s mission statement should be reflected in its Five-Year Plan, and the plan should be reflected in the overall Student Learning Assessment Plan, and so on. One should be able to clearly trace the flow and connection of ideas from the highest levels of planning to the most specific outcomes in an assessment plan.
  • Direct Measures: Measures of student performance in which the student’s own work or behavior is evaluated. Direct measures provide concrete, observable evidence of behavior change. These measures might include exams, papers, projects, interactions with others, or other activities that an external evaluator may observe and comment on. Direct measures may be administered at the program level (e.g., majors complete the ETS test) or at the course level (e.g., students are evaluated on their ability to compare two theories). Pre- and post-test measures can be used to tie student behavior change to learning in the program, although assessment of mastery does not require multiple tests.
  • Indirect Measures: Evidence (qualitative and/or quantitative) that supports the achievement of a learning goal, but cannot be tied specifically to students meeting an identified learning outcome. Indirect measures may include:
    • Perceptions of learning gained through inquiry methods such as surveys, interviews, focus groups and reflective essays.
    • Indicators of student success that support learning or program outcomes, such as retention and graduation rates, placement rates, success in student competitions, alumni surveys, use of program resources, etc.
  • Course-Embedded Assessment: Measures of course and program learning outcomes that are administered and evaluated within a classroom setting as part of a course. These activities may or may not contribute to the grade in the course. Course-embedded assessment may include direct measures (e.g., completion of a nationally normed test of content) and indirect measures (e.g., completion of survey of course satisfaction).