- Making a plan
- Getting the language right: Goals, outcomes, objectives?
- Some caveats and advice: Problems with creating goals
- Some caveats and advice: Problems with measuring goals
- Writing Your Assessment Plan: What's the Best Format?
The absolute basics in an assessment plan are the following:
• What should students be able to do by the time they complete your program? In other words, what learning outcomes should be achieved by the time they complete the major or certificate?
• What methods will you use to find out if they can do the things you've named (i.e., the learning outcomes you've identified)?
• How will you ensure that the necessary information gets collected, analyzed, and discussed? Who will remind faculty? What will be the timetable? Who will ensure that analysis occurs (in a whole-department meeting or within a departmental committee)? Who will make sure that results get discussed by the faculty as a whole?
• How will all of this work get documented so that what's done in one year remains available for review and discussion two or three years down the road, when there might be new findings that should be compared?
Two additional pieces of information, while not on the list above, are also important. First, every department or program needs a mission statement, and it is important for your learning outcomes to be rooted in the program mission. If your program's mission is to prepare practitioners, for example, the learning outcomes will probably be different than if your mission is to prepare students for graduate study. Because alignment between mission and assessment planning is so important, many departments include the mission as a first statement on the assessment plan, and that's a good practice.
Second, after you've identified your learning outcomes, mapping the alignment between outcomes and courses can be extraordinarily revealing. It's common to find that some key outcomes will be introduced, taught, and reinforced in virtually every class, while other learning outcomes are barely mentioned beyond a single exposure. If a competency or skill is sufficiently important to be included in your list of learning outcomes, it will normally be emphasized in multiple courses within the program of study. Documenting and observing the alignment between your outcomes and your courses is a valuable step toward developing a solid assessment strategy – but also toward reconsidering the existing curriculum.
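The mapping step above can be sketched in code. This is a minimal illustration, not a required tool: the outcome names, course numbers, and the two-course minimum are all made-up assumptions for the example.

```python
# Hypothetical curriculum map: which courses address which program outcome.
# Outcome names and course numbers are illustrative only.
curriculum_map = {
    "Outcome 1: Written communication": ["101", "210", "310", "410"],
    "Outcome 2: Quantitative reasoning": ["210", "320"],
    "Outcome 3: Primary-source analysis": ["410"],
}

def under_covered(cmap, minimum=2):
    """Return outcomes addressed by fewer than `minimum` courses."""
    return [outcome for outcome, courses in cmap.items()
            if len(courses) < minimum]

# Outcome 3 appears in only one course, so it is flagged for discussion.
print(under_covered(curriculum_map))
```

Even a simple table like this makes it easy to spot an outcome that gets only a single exposure, which is exactly the conversation-starter the mapping exercise is meant to provide.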
Getting the language right: Goals, outcomes, objectives?
Some programs have a program accreditor that mandates language for the intended learning outcomes. Perhaps your accreditor says that your statements of what students learn during college will be learning outcomes but what they should be able to do on the job will be goals. Perhaps your accreditor expects each goal (defined as what students should be able to do by the time of graduation) to be lofty and broad, but also to be unpacked via detailed objectives that describe exactly what you'll measure and what the standard for success will be.
If your accreditor uses a specific set of words for descriptions of assessment expectations, please develop a plan that uses your accreditor's terminology. It doesn't make sense to write using one set of words for your program accreditors and another set of words for UND.
If you do not have a program accreditor, or if your accreditor does not prescribe terminology, then use language that makes sense to faculty in the field. In some programs, faculty write learning outcomes (whatever you choose to call them) that are specific, creating no need to pin down meaning more precisely. If so, no sub-categories (most commonly called objectives) may be necessary. In other cases, your department may want to start with an overarching set of program outcomes (which faculty might call "goals") and a supporting list of more specific learning outcomes. If your goals are broad, this kind of supporting list can provide needed clarity. "Objectives" of this sort are usually both specific and measurable. In fact, the objective itself may contain information that points to an assessment method or the "bar" you hope to see met. For example, a broad goal like "Students will communicate well" might be immediately followed by a first objective which specifies "90% of program seniors will be able to write a paper that is scored at 3, 4, or 5 on the department's rubric for effective communication."
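The arithmetic behind an objective with a built-in bar, like the one just described, is straightforward. The sketch below assumes hypothetical rubric scores for ten seniors; the passing band (3, 4, or 5) and the 90% bar follow the example objective.

```python
def meets_bar(scores, passing=(3, 4, 5), bar=0.90):
    """Return (fraction of students in the passing band, whether the bar is met)."""
    passed = sum(1 for s in scores if s in passing)
    rate = passed / len(scores)
    return rate, rate >= bar

# Made-up rubric scores for ten program seniors.
senior_scores = [5, 4, 3, 4, 2, 5, 3, 4, 4, 5]
rate, met = meets_bar(senior_scores)
print(f"{rate:.0%} scored 3 or higher; bar met: {met}")
```

Here nine of ten seniors fall in the passing band, so this hypothetical cohort just meets the 90% standard.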
Some caveats and advice: Problems with creating goals
Too many goals: One problem that's common when departments are getting started with meaningful assessment is developing an overly-ambitious list of goals. Faculty may feel that every required course should be recognized with its own goal – which would then be measured by the teacher of that class. This approach seems logical, but it defeats the purpose of assessing learning at the program level. Individual teachers may be (and actually should be) assessing learning in their individual classes and using what they learn to inform their own teaching. But that's providing information about individual class goals rather than larger program goals.
The purpose of assessing learning within the program is to find out how well students are doing on the outcomes that you share with your colleagues. So your program learning outcomes will usually be somewhat broader than those used in individual classes (e.g., "Students will be able to find and evaluate appropriate primary documents as part of an analysis of an historical issue or question." vs. "Students will be able to accurately analyze the causes of the Civil War by drawing on appropriate documents.")
There is no rule about the number of goals per program, but if a program identifies more than ten intended learning outcomes, it's probably time for faculty to take a hard look at the list. Two questions can be useful in narrowing the number of goals:
• Is this outcome so valuable and over-arching that all or most of our faculty share responsibility for helping students achieve it?
• Does this outcome describe a key competency which we believe that virtually all of our program graduates should have achieved?
Confusing program aims with program learning goals: Faculty in most departments identify programmatic aims, ambitions, or benchmarks (often called "goals") which do not describe intended learning outcomes. For example, you may aim to enroll 10 new students in the program each year, or want 90% of program graduates to obtain jobs related to the field. Faculty in your department may name other kinds of intentions and benchmarks as well: "every student will turn in a program of study before completing 15 credits," "every student will develop a portfolio demonstrating abilities and skills." Those are important indicators for your department to track. Learning goals or outcomes, by contrast, identify specific competencies that students will demonstrate they have learned. In other words, "what will your program graduates be able to do as a result of their learning by the time they graduate from UND?"
It makes sense to set benchmarks for program success. It's also important to identify the learning outcomes that program graduates will be able to demonstrate. Distinguishing between the two is key, however, and it will be easier if you're thoughtful about the labels you use for each.
Choosing the right verbs: Sometimes we describe outcomes in terms of what students will "know," but thinking in terms of knowledge often results in an unfortunately vague and unhelpful list of learning outcomes. If your students will "know most major American and English authors as well as many minor authors from at least some of the time periods," what will that enable them to do? The learning outcome you actually want may be something like "Students will be able to analyze a previously unfamiliar work in relation to its literary and historical context." Working toward learning outcomes that are specific and describe things students will do helps pin down what's really important. Furthermore, writing specific learning outcomes helps you see the kind of assessment method you'll want to use.
Avoiding non-"measurable" outcomes as an article of faith: Although there's a lot to be said for being specific, many programs will have one or two desired competencies that won't lend themselves well to that kind of wording. If they're important, keep them on your list. Perhaps looking for evidence of those outcomes will help you fine-tune them later ("what would have persuaded me that Student X really did achieve that learning outcome?"). And, in the meantime, you will have retained the integrity of your program by continuing to pay close attention, through whatever methods seem appropriate, to an outcome that faculty in the program genuinely value.
Some caveats and advice: Problems with measuring goals
Failing to use both direct and indirect measures: Direct assessments are those which involve looking at student work that actually demonstrates the learning identified by your goal or outcome. Each student's work is then rated or scored (with numbers or via narrative) specifically in terms of that learning outcome. Finally, ratings or scores earned by many different students are combined so that conclusions can be drawn about overall student achievement of that specific learning outcome.
So imagine, for example, that you want to find out how well students are doing on their presentation skills. You directly assess that by observing student presentations and scoring them on the aspects of presentation that you have identified as important. You might use a rubric to score each criterion, or perhaps you write a narrative of each student's strengths and weaknesses (related to the criteria you've identified) as you observe the presentations. Then you compile the information. If you've used scores, you'll probably count how many students scored at each point on the scale for each criterion of interest. If you wrote brief narratives, you'll look back through them for themes that describe patterns, in relation to criteria of interest, observed across all the students. In either case, you'll see the patterns of strengths and weaknesses, and consider that information in relation to what you had intended (and hoped) to see demonstrated. That's direct assessment. And every goal or learning outcome should be directly assessed.
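The compiling step described above (counting how many students scored at each point on the scale, for each criterion) can be sketched as follows. The criterion names and scores are invented for illustration; any rubric with numeric scores would work the same way.

```python
from collections import Counter

# Illustrative data: each dict is one student's presentation, scored 1-5
# on each rubric criterion. Criterion names are assumptions for the example.
presentations = [
    {"organization": 4, "delivery": 3, "visual aids": 5},
    {"organization": 5, "delivery": 2, "visual aids": 4},
    {"organization": 4, "delivery": 3, "visual aids": 4},
]

def tally_by_criterion(scored_work):
    """Count how many students earned each score on each criterion."""
    tallies = {}
    for student in scored_work:
        for criterion, score in student.items():
            tallies.setdefault(criterion, Counter())[score] += 1
    return tallies

for criterion, counts in tally_by_criterion(presentations).items():
    print(criterion, dict(sorted(counts.items())))
```

A tally like this makes patterns of strength and weakness visible at a glance: in the made-up data, "delivery" scores cluster lower than the other criteria, which is the kind of finding faculty would then discuss against their expectations.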
Direct assessment contrasts with indirect assessment, which involves eliciting perspectives about student learning. Indirect assessment is most often done by asking students, usually via a survey or informal writing assignment, to describe their own sense of confidence in their ability to do whatever is specified as an intended learning outcome. For example, students might score themselves on their perceived ability to do a high quality presentation, perhaps using the same rubric you're scoring with or perhaps by writing paragraphs about what they see as their strengths and weaknesses. Asking all program faculty to summarize their impressions of student presentation skills (without observing and rating actual presentations) would also be an indirect assessment.
Perception information (indirect assessment) is an easy-to-collect and worthwhile assessment strategy. Student perceptions of their learning are particularly important, and information from a compilation of student perceptions is especially useful when paired with direct assessment findings about the same learning outcome.
Too much assessment: Just as it can seem logical to have goals for each course, it may seem intuitive to have methods that require every teacher to collect work products and analyze them for assessment information in every course – or at least once every semester. While regular participation in assessment is important, there is no value in becoming buried in data. A better strategy is as follows:
• Identify two or three different ways of looking at each learning outcome, ideally starting with methods or tools which help you see learning near the time of program completion (if students could do what's expected at the end of the fifth semester but have lost a competency or two by the time of graduation, that's not particularly satisfactory; what really matters is what they can do when they leave the university).
• To the degree possible, look for opportunities to make those methods overlap, so that a single method can help you look at multiple learning outcomes.
• Establish a cyclical rotation of assessment so that every method or "tool" is used every two or three years. Key methods may be used more frequently, if deemed reasonable and appropriate.
• If you find (once information begins rolling in) that your findings are generating more questions than answers, develop additional strategies to dig more deeply into areas where you need to know more.
Writing Your Assessment Plan: What's the Best Format?
There is no "best" way to write an assessment plan and it makes sense for you to use a format that seems appropriate for your program(s). If you're looking for examples of plans, you can check out the pages for Model Assessment Plans and Departmental Assessment Plans. The plans listed as models are working well for faculty in the programs for which they were developed. However, you'll find additional good examples on the larger list, and perhaps you'll find that a plan from a department with programs similar to your own provides an especially useful starting point.
Another option, however, is to work from a template. A team of UND faculty developed an assessment plan template several years ago, and many departments still prefer to use that template since it serves as a prompt for consideration of the various elements that make up a plan. The template they developed includes a narrative section (providing spaces to identify the program mission, goals for student learning in the program, and objectives related to the various goals). It also includes a matrix section where faculty can map the alignment from goals to educational experiences, assessment methods, timeline for collection of data, oversight responsibilities, and use of results. Thinking these elements through while developing your plan may be useful, especially for faculty in newly developed programs.
The following is an example of a fictitious program assessment plan designed to help you understand how to use the template.