Assessment at Emerson

Assessment

A participatory, iterative process that:

  • provides data institutions need on their students’ learning.
  • engages the college and others in analyzing and using that information to confirm and improve teaching and learning.
  • produces evidence that students are learning the outcomes the institution intended.
  • guides colleges in making educational and institutional improvements.
  • evaluates whether changes made improve/impact student learning.
  • documents the learning and institutional efforts.

Source: The Higher Learning Commission (HLC) - https://www.hlcommission.org/

Authentic Assessment

An authentic assignment is one that requires application of what students have learned to a new situation, and that demands judgment to determine what information and skills are relevant and how they should be used. Authentic assignments often focus on messy, complex real-world situations and their accompanying constraints; they can involve a real-world audience of stakeholders or “clients” as well. According to Grant Wiggins (1998), an assignment is authentic if it:

  • is realistic.
  • requires judgment and innovation.
  • asks the student to “do” the subject.
  • replicates or simulates the contexts in which adults are “tested” in the workplace or in civic or personal life.
  • assesses the student’s ability to efficiently and effectively use a repertoire of knowledge and skills to negotiate a complex task.
  • allows appropriate opportunities to rehearse, practice, consult resources, and get feedback on and refine performances and products.

Source: https://citl.indiana.edu/teaching-resources/assessing-student-learning/authentic-assessment/index.html

Benchmark

A criterion-referenced objective performance data point that can be used for the purposes of internal or external comparison. A program can use its own data as a baseline benchmark against which to compare future performance. It can also use data from another program as a benchmark.

Source: https://case.edu/assessment/about/assessment-glossary

Bloom's Taxonomy

Bloom's taxonomy identifies a hierarchy of cognitive skills that can be developed through the process of learning. The classification is as follows:

  1. Knowledge: simple knowledge of facts, conceptual terms, theoretical models.
  2. Comprehension: an understanding of the meaning of knowledge.
  3. Application: the ability to apply knowledge to new situations or a changed context.
  4. Analysis: the ability to break material down into its constituent parts and identify the connections between them.
  5. Synthesis: the ability to reassemble the parts into a new and meaningful relationship.
  6. Evaluation: the ability to judge the value of material using explicit criteria, either developed by the learner or derived from other sources.

Source: https://www.oxfordreference.com/view/10.1093/oi/authority.20110803095513202

Capstone Courses and Projects

Whether they’re called “senior capstones” or some other name, these culminating experiences require students nearing the end of college to create a project that integrates and applies what they’ve learned. The project might be a research paper, a performance, a portfolio, or an exhibit of artwork. Capstones can be offered in departmental programs and in general education as well.

Source: https://www.learningoutcomesassessment.org/wp-content/uploads/2019/05/NILOA-Glossary.pdf

Closing the Loop

The purpose of assessment is to identify strengths and weaknesses in our practices and to implement changes that improve the program. This critical step in assessment is often referred to as “closing the loop.” After collecting and analyzing assessment data, decisions need to be made collectively to determine whether changes will be made and, if so, what they will be. If the data suggest that the outcome is met, the plan for the subsequent year could either be to continue monitoring the outcome to ensure consistency in quality, or to celebrate and move on to another set of outcome(s). ... If the data suggest that the outcome is not met, changes or improvement actions should be planned for the subsequent year. Keep in mind that the implemented changes need to be monitored as well to see whether they actually lead to improvement.

Source: https://www.fullerton.edu/data/assessment/sla_resources/closeloop.php

Concept Maps

Concept maps are graphical representations that can be used to reveal how students organize their knowledge about a concept or process. They include concepts, usually represented in enclosed circles or boxes, and relationships between concepts, indicated by a line connecting two concepts.

Source: https://www.cmu.edu/teaching/assessment/basics/glossary.html

Contract Grading

Contract grades essentially transform the grading process from teacher-developed criteria into an agreement between teacher and student, with considerable freedom for students to propose and assess work on their own initiative. Like the related concepts of point systems, achievement grading (Adkison and Tchudi), total quality assessment (McDonnell), and outcomes-based grading (Pribyl), contracts eliminate highly subjective and pseudoscientific gradations (O'Hagan) and link grades to the quantity of high-quality work completed.

Source: https://wac.colostate.edu/books/tchudi/chapter22.pdf

Criteria

Guidelines, rules, characteristics, or dimensions that are used to judge the quality of student performance. Criteria indicate what we value in student responses, products or performances. They may be holistic, analytic, general, or specific. Scoring rubrics are based on criteria and define what the criteria mean and how they are used.

Source: https://www.alamo.edu/siteassets/pac/about-pac/academic-assessment/pac-glossary-of-assessment-terms.pdf

Curriculum Alignment

Curriculum alignment is essential to the development and improvement of a program of study and “can be broadly defined as the degree to which the components of an education system—such as standards, curricula, assessments, and instruction—work together to achieve desired goals” (Case, Jorgenson, & Zucker, 2004, p. 2). Alignment activities provide partners with the opportunity to work together to identify when, where, and how extensively the standards and curricular content associated with a program of study will be addressed.

Source: https://occrl.illinois.edu/docs/librariesprovider2/ptr/curriculum-alignment-module.pdf?sfvrsn=9

Curriculum Maps

Curriculum Maps are matrices that document the alignment of course student learning outcomes to program student learning outcomes and institutional outcomes. These matrices provide evidence that students have an opportunity to learn program student learning outcomes and institutional general education competencies throughout the curriculum. The process of creating them helps faculty to identify gaps in the curriculum. They also help faculty to design assessments.

Source: https://www.alamo.edu/siteassets/pac/about-pac/academic-assessment/pac-glossary-of-assessment-terms.pdf

Direct Assessment

Direct assessment is when measures of learning are based on student performance or demonstrations of the learning itself. Scoring performance on tests, term papers, or the execution of lab skills would all be examples of direct assessment of learning. Direct assessment of learning can occur within a course (e.g., performance on a series of tests) or across courses or years (e.g., comparing writing scores from sophomore to senior year).

Source: https://www.cmu.edu/teaching/assessment/basics/glossary.html

Embedded Assessment

A means of gathering information about student learning that is integrated into the teaching-learning process. Results can be used to assess individual student performance or they can be aggregated to provide information about the course or program. These can be formative or summative, quantitative or qualitative. Example: as part of a course, expecting each senior to complete a research paper that is graded for content and style, but is also assessed for advanced ability to locate and evaluate Web-based information (as part of a college-wide outcome to demonstrate information literacy).

Source: https://www.cmu.edu/teaching/assessment/basics/glossary.html

Equity-minded Assessment

Equity-minded assessment entails the following actions:

  1. Check biases and ask reflective questions throughout the assessment process to address assumptions and positions of privilege.
  2. Use multiple sources of evidence appropriate for the students being assessed and assessment effort.
  3. Include student perspectives and take action based on those perspectives.
  4. Increase transparency in assessment results and actions taken.
  5. Ensure collected data can be meaningfully disaggregated and interrogated.
  6. Make evidence-based changes that address issues of equity that are context-specific.

Source: https://www.learningoutcomesassessment.org/equity/

Formative Assessment

Formative assessment is often done at the beginning of or during a program, thus providing the opportunity for immediate evidence for student learning in a particular course or at a particular point in a program. Classroom assessment is one of the most common formative assessment techniques. The purpose of this technique is to improve quality of student learning, leading to feedback in the developmental progression of learning. This can also lead to curricular modifications when specific courses have not met the student learning outcomes. Classroom assessment can also provide important program information when multiple sections of a course are taught because it enables programs to examine if the learning goals and objectives are met in all sections of the course. It also can improve instructional quality by engaging the faculty in the design and practice of the course goals and objectives and the course impact on the program. (See also: assessment for learning)

Source: https://www.learningoutcomesassessment.org/wp-content/uploads/2019/05/NILOA-Glossary.pdf

High-stakes Assessment

High-stakes assessment of student learning often involves the evaluation of a student's final "product," whether it is a term paper, final exam, or other type of project. High-stakes assessment:

  • Encourages synthesis across an entire course or discipline
  • Requires creation of discipline-specific products (research papers, presentations)
  • Is often summative, requiring demonstration of the degree to which students have learned key course concepts and skills
  • Usually represents a larger percentage of the course grade

Source: https://resources.depaul.edu/teaching-commons/teaching-guides/feedback-grading/Pages/high-stakes-assignments.aspx

Indirect Assessment

Indirect assessments use perceptions, reflections or secondary evidence to make inferences about student learning. For example, surveys of employers, students’ self-assessments, and admissions to graduate schools are all indirect evidence of learning.

Source: https://www.cmu.edu/teaching/assessment/basics/glossary.html

Inter-rater Reliability

The consistency with which two or more judges rate the work or performance of test takers. (System for Adult Basic Education Support, 2008)

Source: https://www.alamo.edu/siteassets/pac/about-pac/academic-assessment/pac-glossary-of-assessment-terms.pdf
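
Inter-rater reliability is often quantified with statistics such as simple percent agreement or Cohen's kappa, which corrects agreement for chance. The sketch below (a minimal illustration; the rubric scores are invented example data, not from any actual assessment) computes Cohen's kappa for two raters scoring the same set of student artifacts:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: agreement between two raters, corrected for chance."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of items both raters scored identically.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal score frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    expected = sum(freq_a[s] * freq_b[s] for s in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Two raters scoring eight student artifacts on a 1-4 rubric (made-up data).
a = [4, 3, 3, 2, 4, 1, 2, 3]
b = [4, 3, 2, 2, 4, 1, 2, 4]
print(round(cohens_kappa(a, b), 2))  # → 0.67
```

A kappa of 1.0 indicates perfect agreement and 0 indicates agreement no better than chance; low values typically prompt rater calibration (norming) sessions before scoring continues.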

Learning Activities

Any of the activities students partake in as part of a course or program; these activities provide students with the necessary knowledge, skills, or habits to achieve the student learning outcomes. Learning activities may be didactic, collaborative, or active. 

Low-stakes Assessment

Also referred to as formative assessment, low-stakes assessment provides the opportunity for immediate evidence of student learning in a particular course or at a particular point in a program. Classroom assessment is one of the most common formative assessment techniques. The purpose of this technique is to improve quality of student learning, leading to feedback in the developmental progression of learning. 

Medicine Wheel Framework

A four-domain framework for developing course outcome statements ... with a focus on better supporting the educational empowerment of Indigenous students. The framework expands the three domains of learning, pioneered by Bloom, to a four-domain construction based on the four quadrants of the Medicine Wheel, a teaching/learning framework that has widespread use in the Indigenous communities of North America (Native American, First Nation, Métis, Inuit, etc.). [The framework] expands on the cognitive (mental), psychomotor (physical) and affective (emotional) domains to add the fourth quadrant, spiritual, as being essential for balance in curricular design that supports students in their learning goals.

Source: https://www.lincdireproject.org/wp-content/uploads/ResearcherShareFolder/Readings/Switching%20from%20Bloom%20to%20the%20Medicine%20Wheel.pdf

Portfolio Assessment

A portfolio is a collection of work, usually drawn from students' classroom work. A portfolio becomes a portfolio assessment when (1) the assessment purpose is defined; (2) criteria or methods are made clear for determining what is put into the portfolio, by whom, and when; and (3) criteria for assessing either the collection or individual pieces of work are identified and used to make judgments about performance. Portfolios can be designed to assess student progress, effort, and/or achievement, and encourage students to reflect on their learning. (System for Adult Basic Education Support, 2008)

Source: https://www.alamo.edu/siteassets/pac/about-pac/academic-assessment/pac-glossary-of-assessment-terms.pdf

Program Assessment

Uses the department or program as the level of analysis. Can be quantitative or qualitative, formative or summative, standards-based or value-added, and used for improvement or for accountability. Ideally, program goals and objectives would serve as a basis for the assessment. Example: How well can senior engineering students apply engineering concepts and skills to solve an engineering problem? This might be assessed through a capstone project, by combining performance data from multiple senior-level courses, by collecting ratings from internship employers, etc. If a goal is to assess value added, some comparison of the performance to newly declared majors would be included.

Source: https://www.cmu.edu/teaching/assessment/basics/glossary.html

Qualitative Assessment

An assessment which measures the differences in the qualities of responses, typically relying on detailed descriptions and evaluated using interpretive criteria.

Source: https://case.edu/assessment/about/assessment-glossary

Quantitative Assessment

An assessment which measures the differences in the quantities of responses, resulting in data based on scores or ratings which can be numerically analyzed.

Source: https://case.edu/assessment/about/assessment-glossary

Rubric

A rubric is an evaluative tool that explicitly represents the performance expectations for an assignment or piece of work. A rubric divides the assigned work into component parts and provides clear descriptions of the characteristics of the work associated with each component, at varying levels of mastery. Rubrics can be used for a wide array of assignments: papers, projects, oral presentations, artistic performances, group projects, etc. Rubrics can be used as scoring or grading guides, to provide formative feedback to support and guide ongoing learning efforts, or both.

Source: https://www.cmu.edu/teaching/assessment/basics/glossary.html

Self-assessment

A process in which a student engages in a systematic review of a performance, usually for the purpose of improving future performance. It may involve comparison with a standard or established criteria, and it may involve critiquing one's own work or simply describing the performance. Reflection, self-evaluation, and metacognition are related terms.

Source: https://www.learningoutcomesassessment.org/wp-content/uploads/2019/05/NILOA-Glossary.pdf

Specifications Grading

Specifications (“specs”) grading - related to competency, proficiency, standards-based, mastery, or contract grading - is a method that moves instructors away from assigning points or letter grades to individual assignments, from trying to determine what makes a paper an “A” versus a “B” paper, into a framework of meets expectations or does not meet expectations.

Source: https://higheredpraxis.substack.com/p/tip-specs-grading

Student Artifact

Any lasting object a student creates during a course or program of study as a demonstration of their learning or skills. Student Artifacts include papers, exams, discussion board posts, films, recorded performances, and other fixed objects. These artifacts often form the data set for student learning outcomes assessment. (Also referred to as: learning artifacts, educational artifacts, and/or student learning objects.)

Student Learning Outcomes

A knowledge, skill, or disposition you want your students to master as a result of taking your course or completing your major/program. LOs are future focused ("students will learn to...") and provide a specific description of what a student will be able to do at the end of the period during which that ability is presumed to have been acquired. These outcomes are the focus of many assessment practices. (Note: some professional organizations may refer to these with different terms, such as objectives, indicators, abilities, or competencies.)

Source: https://case.edu/assessment/about/assessment-glossary

While many resources use outcomes and objectives interchangeably, the key reason we use the term outcomes is that "learning goals and objectives generally describe what an instructor, program, or institution aims to do, whereas a learning outcome describes in observable and measurable terms what a student is able to do as a result of completing a learning experience (e.g., course, project, or unit)."

Source: https://resources.depaul.edu/teaching-commons/teaching-guides/course-design/Pages/course-objectives-learning-outcomes.aspx

Summative Assessment

Summative assessment is comprehensive in nature, provides accountability, and is used to check the level of learning at the end of the program. For example, if upon completion of a program students will have the knowledge to pass an accreditation test, taking the test would be summative in nature since it is based on the cumulative learning experience. Program goals and objectives often reflect the cumulative nature of the learning that takes place in a program. Thus, the program would conduct summative assessment at the end of the program to ensure students have met the program goals and objectives. Attention should be given to using various methods and measures in order to have a comprehensive plan. Ultimately, the foundation for an assessment plan is to collect summative assessment data, and this type of data can stand alone. Formative assessment data, however, can contribute to a comprehensive assessment plan by enabling faculty to identify particular points in a program to assess learning (i.e., entry into a program, before or after an internship experience, impact of specific courses, etc.) and monitor the progress being made toward achieving learning outcomes.

Source: https://www.learningoutcomesassessment.org/wp-content/uploads/2019/05/NILOA-Glossary.pdf

Transparent Assessment

Transparency refers to:

  • the clarity of assessment expectations for students.
  • the clarity of procedures for making judgments about the quality of students’ work.

Transparency can be enhanced by:

  • providing clear task descriptions so students know what they are expected to do
  • developing clear criteria and standards/descriptions, aligned with curriculum requirements, so students know how they will be assessed
  • modeling the task so students know the level of performance expected
  • engaging in moderation processes to ensure that every student has their learning assessed equally and appropriately.

Source: https://www.qcaa.qld.edu.au/about/k-12-policies/student-assessment/understanding-assessment/principles-quality-assessment

Transparent Assignment Design

An inclusive teaching practice first proposed by Mary-Ann Winkelmes and her instructional development and research team at UNLV, transparent assignments help students understand the purpose of the assessment, clearly describe the task and how it should be accomplished, and plainly define criteria for success. Assignment transparency has been shown to significantly boost student success in terms of academic confidence, sense of belonging, and metacognitive awareness of skill development (Winkelmes et al. 2016).

Source: https://ctl.wustl.edu/resources/glossary-of-pedagogical-terms/

Validity

The extent to which an instrument measures what it purports to measure.

Source: https://case.edu/assessment/about/assessment-glossary

VALUE Rubrics

Developed by teams of faculty experts representing colleges and universities across the United States through a process that examined many existing campus rubrics and related documents for each learning outcome and incorporated additional feedback from faculty. The rubrics articulate fundamental criteria for each learning outcome, with performance descriptors demonstrating progressively more sophisticated levels of attainment. The rubrics are intended for institutional-level use in evaluating and discussing student learning, not for grading. The core expectations articulated in all 15 of the VALUE rubrics can and should be translated into the language of individual campuses, disciplines, and even courses. The utility of the VALUE rubrics is to position learning at all undergraduate levels within a basic framework of expectations such that evidence of learning can be shared nationally through a common dialog and understanding of student success.

Source: https://www.usna.edu/Academics/AcademicDean/Assessment/All_Rubrics.pdf