- Breadth of Knowledge
- Advanced Communication
- Quantitative Reasoning
- Global Diversity
- United States Diversity
- Past ES Courses
- Past General Ed Courses
Each rubric assesses a particular learning outcome. The rubrics can serve multiple uses:
- Use as is, to assess student work.
- Use in a modified form to better match the instructor’s or department’s intentions.
- Use as a model for designing a different rubric.
Scoring Sessions/Annual Reports
Results of the annual reviews of student work in particular ES learning outcomes are reported in the ES Annual Reports, which are prepared by the ES Director. (We also refer to these reviews as "scoring sessions," since assessors use UND rubrics to score a variety of student work collected from different ES courses.)
In these sessions, the rubrics are typically used "as is," since they were developed specifically and intentionally for this purpose. However, users are welcome to apply them in other ways if that makes for more effective assessment.
- Written Communication Scoring Session, December 2016: Scoring Results and Brief Analysis
- Diversity Scoring Session, May 2016: Scoring Results and Brief Analysis
- Quantitative Reasoning Scoring Session, December 2015: Scoring Results and Brief Analysis
- Oral Communication: iDashboards Results
- Information Literacy: Scoring Results and Brief Analysis
- Critical Thinking and Written Communication: Scoring Session Results/Report
Assessment Concepts and Useful Language
The following is useful language for talking about assessment.
For practical purposes, we often use the term assessment to speak specifically about the assessment of student learning. So assessment usually means the systematic collection of information about desired student learning outcomes across a group of students – and each of those ideas can be unpacked.
Assessment typically involves looking at desired student learning outcomes, which means breaking learning down into the specific, individual knowledge and skills that students should be able to demonstrate. So grades are broken down (disaggregated) into component parts.
The systematic bit is important: when I notice that a lot of my students didn't do very well on the last assignment, that is probably best described as a "teacherly impression" rather than an assessment, because I haven't actually investigated whether the impression is accurate (maybe the last five papers I read were really bad, which is skewing my impression), whether the poor performance is on specific aspects of the paper (maybe many students lost points for being late, which doesn't necessarily mean they haven't learned what they should have), or whether there are actual patterns in the performance that reveal something about the skills of students across the board.
Finally, assessment involves looking across the group rather than at a single student. Although grades are disaggregated into component parts, students are "aggregated" across the entire group. The intent of assessment is to find out about the learning of students as a group rather than as individuals.
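The two moves described above – disaggregating grades into component parts, then aggregating students into a group – can be sketched in a few lines of code. This is only an illustration; the criteria names and scores below are made up, not drawn from any UND rubric.

```python
# A minimal sketch of disaggregation and aggregation in assessment.
# All criteria names and scores here are hypothetical examples.

# Disaggregation: each student's grade is broken into rubric criteria.
scores = {
    "student_a": {"thesis": 4, "evidence": 3, "mechanics": 2},
    "student_b": {"thesis": 3, "evidence": 2, "mechanics": 4},
    "student_c": {"thesis": 4, "evidence": 2, "mechanics": 3},
}

# Aggregation: the mean score per criterion describes the group,
# not any individual student.
criteria = ["thesis", "evidence", "mechanics"]
group_means = {
    c: sum(s[c] for s in scores.values()) / len(scores) for c in criteria
}

print(group_means)
# e.g., a relatively low "evidence" mean would point to a group-level
# weakness in that skill, which is the kind of finding assessment seeks.
```

The point of the sketch is that the unit of analysis shifts: individual grades are taken apart by skill, and then the skill-level results are pooled across all students.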
As differentiated from assessment, evaluation (within the field of higher education) usually refers to the process of collecting information to help in understanding, improving, or making decisions about an educational program or endeavor. So the number of students enrolled in a program of study may be critical information within an evaluation process, but that number is likely irrelevant to assessment work.
Levels of Assessment
We often talk about levels of assessment, i.e., assessing at the course level, at the program-of-study level, and at the institutional level.
Direct assessment refers to information that allows us to directly see a student's knowledge or skill as enacted in their work. Students can enact desired knowledge or skills in projects, papers, tests, the spoken word, problems, clinicals, and so on. Those enactments can be formal or informal. Direct assessment data can be provided using numbers (e.g., via rubrics) or using words (via narrative descriptions). But all direct assessment data come from looking at actual student performances of the intended knowledge or skills.
Of course, there are many ways of finding out something about students' actual learning without looking at the performance itself. Faculty can summarize impressions from a semester. Students can write reflective essays about their learning. Clinical supervisors can complete surveys about the strengths and weaknesses of the students who pass through their settings. Any or all of this information can be extremely useful for understanding student learning – but it is indirect assessment rather than direct.
Either direct or indirect assessment can result in numbers – and when data are presented as numbers, they are quantitative.
Either direct or indirect assessment can result in something other than numbers – and when data are not numeric, they are qualitative.
To complicate matters, it is sometimes appropriate to treat numeric data as qualitative, e.g., when numbers are generated without any attempt to ensure reliability. Such findings may be better analyzed by qualitative means than by statistical ones.