Fredonia's Assessment Cycle

Fredonia's assessment cycle "begins" every year in May when units generate goals for the upcoming academic year. The cycle "ends" in June when units wrap up their assessments for the previous cycle. Assessment by month is listed below.

Annualized Assessment Calendar by Month
May
  • Discuss Goal Setting with Supervisor to Ensure it Aligns with Divisional Goal for the Upcoming Academic Year
  • Set Goals
September
  • Campus Workshops on Goal Setting
October
  • Campus Workshops on Goal Setting
February
  • Campus Workshops on Measurements and Progress
March
  • Campus Workshops on Measurements and Progress
April
  • Units Analyze Results
  • Units Send Results to Assessment Office
June
  • Assessment Office Compiles Reports and Distributes to Supervisors
  • Supervisor Comments on Plans by the End of June
  • Units and Supervisor Use Plan Results to Make Decisions and Enact Changes (If Applicable)
July
  • University Assessment Index Presented to Cabinet

Specifics About the Template

There are a few things to know about this template. 

  • Below, you will find links to the AAA forms. You'll need to download them to fill in the templates.
  • This template applies to just one goal. If you have multiple goals, you'll need a separate template document for each goal. A space is included to note "Goal x of x" if you are submitting more than one. You may submit a separate file for each goal, or you may combine the documents into one file in Acrobat before submitting.

  • Templates should be typed; handwritten ones cannot be accepted.

  • Please LABEL the template file as follows: Name of the Division, Name of the Unit, Goal Number, and Year. For example: UA - Alumni Affairs - Goal 1 - 2022-2023.

  • Remember, whether you are working in Word or PDF (your choice), if you are collaborating with a colleague you will need to keep track of versions should you both edit the same file(s).

  • All of the fields stretch automatically to hold your content, but editing tools within these fields are limited.

  • If you are referring to charts or other data sets, consider creating a file to hold the information and submitting the file as an Appendix (not as a hyperlink since we may keep these in a hardcopy binder).

  • Please do NOT lock the document when you submit it; your Dean or Supervisor will be inputting feedback in the same template after you submit it, and will then submit it to the Assessment Office once finished.
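The file-naming convention described above can be sketched as a small helper. The division abbreviation and unit name below are illustrative examples, not an official list:

```python
def template_label(division: str, unit: str, goal_number: int, year: str) -> str:
    """Build a template file label: Division - Unit - Goal Number - Year."""
    return f"{division} - {unit} - Goal {goal_number} - {year}"

# Example from the instructions above:
label = template_label("UA", "Alumni Affairs", 1, "2022-2023")
```

Running this produces the label shown in the example, `UA - Alumni Affairs - Goal 1 - 2022-2023`.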

Printable PDF With Assessment Calendar & Instructions

2021-2022 Action & Assessment Plan
2021-2022

2022-2023 Action & Assessment Plan
2022-2023

2023-2024 Action & Assessment Plan
2023-2024

2024-2025 Action & Assessment Plan
2024-2025

Assessment: The systematic collection, review, and use of information about educational programs and other support programs undertaken for the purpose of program improvement, student learning, and development.

Assessment Plan: An annual document which identifies expected outcomes for a program and outlines how and when the identified outcomes will be assessed.

Assessment Report: An annual document based on the Assessment Plan that presents and explains assessment results and shows how assessment results are being used to improve and/or enhance the program, teaching, and learning.

Mission Statement: A concise statement outlining the purpose of a department or program.

Goals: Broad, general statements about what an entity will accomplish/provide or about how students will be changed (i.e., learning and development goals). Goals often exist at the institution, division, department, and program levels and should be aligned from level to level.

Outcomes: A desired effect of an event, activity, program experience, etc. that is specific and measurable. Outcomes are more specific than goals.

  • Student Learning Outcome: Statement indicating what a student will know, think, or be able to do as a result of an educational experience. Sometimes referred to as a learning objective.

  • Program Outcome: Statement indicating what a program (including its faculty and staff) or process intends to achieve or accomplish, such as improving student/faculty ratio or increasing student participation in faculty research.

Curriculum Map: A matrix or table representation of a program’s learning outcomes showing where the outcomes are covered within the program’s courses or other student experiences (e.g., internship).

Assessment Method: A process employed to gather assessment information.

  • Direct Methods: Processes employed to assess student learning directly by requiring students to demonstrate knowledge and skills. Rubrics are often used to evaluate student learning.

  • Indirect Methods: Processes employed to assess a program and/or student learning indirectly by gathering indirect information related to student learning or program success. Examples of indirect methods include questionnaires which ask students to reflect on their learning and satisfaction surveys of employers.

Results: Data and/or information gathered from assessment methods.

Analysis of Results: Examination of the data gathered during the assessment cycle.

Action Plans: Actions intended to improve the program or assessment process based on the analysis of findings and reflective consideration about what actions, if any, should be taken.

Closing the Loop: Using assessment findings to make decisions and enact change (as applicable); any changes should then be part of the assessment plan and process.

 

Adapted from Texas A&M Office of Institutional Assessment
http://assessment.tamu.edu

What is Assessment?
While one can find many definitions of assessment, generally speaking, assessment can be considered the systematic collection, review, and use of information about educational programs and services undertaken for the purpose of quality improvement, planning, and decision-making.

There are many benefits of assessment in higher education, including:
- Enhanced student learning, development, and engagement
- Stronger programs and services that are self-studied and refined
- Opportunity to make improvements based on accurate evaluations of need
- Improved communication and collaboration amongst units/offices/departments
- Increased accountability with stakeholders

Purposes of Assessment
Two common phrases surrounding assessment are "assessment for improvement" and "assessment for accountability." While assessment for accountability's sake is an important reason (and often the impetus) to initiate and conduct assessment, the real benefit to an institution and its students comes from the discussions and changes that happen as a result of assessment for improvement. Many faculty and staff are motivated by the benefits of focusing on assessment for the sake of improving the quality of teaching, learning, programs and services, and planning and decision-making. The purpose of assessment for accountability is to demonstrate the effectiveness of programs and services across the institution to various (and often external) audiences, including the federal and state governments, taxpayers, employers, and parents. An assessment cycle that effectively addresses assessment for improvement will also provide the necessary evidence for accountability.

Types of Assessment Data
A variety of information is needed to fully understand and make decisions related to an institution's programs, policies, and practices. As a result, there are different types of assessment, each geared toward collecting a specific type of data:

  • Tracking/usage: The data from this type of assessment identifies who is enrolled in programs, participating in activities, taking advantage of services, etc. Essentially, the data set consists of demographic information about people who are "coming in the door." This type of assessment is also helpful for identifying which populations might not be enrolling or participating, and can then lead to decisions regarding outreach and marketing.
  • Satisfaction/importance: The purpose of this common type of assessment data is to identify the importance of and satisfaction with educational experiences and services (e.g., availability of courses, library resources, career preparation programs, athletic facilities, communication of information). These attitudes can be analyzed together in order to determine the need for action. For example, if students are very dissatisfied with an aspect of their college experience that they also rate as very important to them, the institution may consider dedicating more resources to that item than if it was something that was not very important to them.
  • Needs: A needs assessment is conducted to determine what might be missing from the campus experience, and often includes satisfaction and tracking data. Common student populations for which needs assessments are targeted include veterans, commuters, transfers, and non-traditional students. A needs assessment could also be conducted to assess how facilities and services could be improved or enhanced. Often the data from needs assessment can be utilized by many departments and offices across the campus.
  • Learning outcomes: Assessing learning outcomes involves identifying learning goals or outcomes, providing experiences to facilitate the learning, assessing whether or not the intended learning occurred, and then using the resulting information to modify or enhance the teaching and learning process. See Assessment of Student Learning for more information.
  • Campus climate: The purpose of a campus climate assessment is to gather information related to how various constituents (e.g., students, faculty, staff, administration, visitors) perceive the campus. These assessments can be general in nature, often providing data about various aspects of diversity on campus, or they can focus on a particular aspect of the campus climate (e.g., the climate for constituents who identify as LGBTQ).
  • Cost effectiveness: Generally speaking, data from this type of assessment is used to determine the extent to which the cost (in terms of not only money, but also time and other resources) of a program or service is aligned with the benefits to the campus community.

Assessment Cycle
What is often referred to as the assessment cycle consists of more than just the actual step of assessment. Generally speaking, the cycle includes:

  1. Identifying goals or outcomes at the division, department, and/or program level.
  2. Implementing strategies for the achievement of the goals or outcomes.
  3. Assessing the extent to which the goals or outcomes were achieved using appropriate methods.
  4. Utilizing the resulting information to improve or enhance the strategies for accomplishing the goals or outcomes.

See Assessment of Student Learning and Assessment of Institutional Effectiveness for more information regarding the cycle specific to those topics.

Assessment of student learning involves four primary steps that serve as a continuous cycle:

  1. Develop clearly articulated learning outcomes.
  2. Provide purposeful opportunities for students to achieve those learning outcomes.
  3. Assess student achievement of the learning outcomes.
  4. Use the results to improve teaching and learning.

1. Developing Learning Outcomes
Learning outcomes – sometimes referred to as learning goals or objectives – exist to identify what students will know, think, or be able to do as a result of a learning experience. Learning outcomes can exist for programs and experiences both in and out of the classroom. Huba and Freed (2000) state that effective outcomes:

  • Are student-focused
  • Focus on what is learned rather than how it is learned
  • Reflect the institution’s mission and the values it represents
  • Align at the course/program, department, divisional, and institutional levels

Generally speaking, learning outcome statements should include the following:

  1. Identification of who is doing the learning (e.g., students)
  2. The knowledge or skill that will be learned (e.g., apply the scientific method)
  3. The experience in which the learning will occur (e.g., the course)

Of particular importance is the specificity of the knowledge or skill expected to be achieved. Programs or departments may choose to write broad learning goals, but more specific outcomes that break down the goal into measurable components would also be necessary in order to allow for assessment. For example, a broad, program-level learning outcome stating that "Graduates of this program will be able to communicate effectively" is too broad to be assessed. In contrast, a corresponding learning outcome for a course in that program stating that “Students in Senior Capstone courses will…” is specific enough to be assessed.

One way to ensure that learning outcomes are specific is to make sure that they answer the question "What does that look like?" or "How is that defined?" when stating the knowledge or skill. Using the example above, the question would be "What does effective communication look like?" From there a breakdown of the concept of "communication" can lead to outcomes pertaining to each of the various aspects of communication (e.g., written, oral, and listening skills).

The domains of Bloom's taxonomy, particularly the cognitive domain, are a good resource for specifying the intended behavior in a learning outcome. Because each level builds on the preceding level, it is important to give students adequate opportunity to reach the more complex levels of learning. For example, in foundational coursework the primary outcomes may focus on acquiring knowledge in a field, while outcomes in advanced courses may focus on evaluation or creation of knowledge.

2. Providing Opportunities for Learning
Once the intended learning has been identified, the next step is to determine the circumstances under which students will be able to learn the knowledge or skills. For academic departments, this is often addressed through aligning specific courses in the curriculum with specific learning outcomes. Departments in Student Affairs also need to be purposeful regarding the learning experiences they provide for students in order for there to be ample opportunities for the intended learning to occur. This can be achieved through aligning programming and leadership experiences with specific learning outcomes. In all cases it is important to be intentional with regard to matching the learning experience with the intended outcome(s).
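The alignment of courses and experiences with outcomes described above is what the glossary calls a curriculum map. A minimal sketch, using hypothetical outcome and course names:

```python
# A curriculum map as a simple matrix: rows are learning outcomes,
# columns are courses/experiences; True marks where an outcome is covered.
# All outcome and course names here are illustrative, not from any real program.
curriculum_map = {
    "SLO 1: Apply the scientific method": {"101": True, "201": True, "Capstone": False},
    "SLO 2: Communicate results in writing": {"101": False, "201": True, "Capstone": True},
}

def courses_covering(outcome: str) -> list[str]:
    """List the courses in which a given outcome is covered."""
    return [course for course, covered in curriculum_map[outcome].items() if covered]
```

A map like this makes gaps visible at a glance: an outcome whose row contains no `True` entries has no purposeful opportunity for the intended learning to occur.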

3. Assessing Learning Outcomes
Due to the nature of learning outcomes, it is essential to utilize direct methods of assessment in order to have evidence of learning. Direct methods of assessment measure actual student learning; they do not rely on measurement of self-reported learning or satisfaction with learning experiences. Also consider the following:

Assessing multiple learning outcomes with one method:
 - Often not a 1:1 relationship; one method can usually be used to assess learning for several outcomes. For example, a student's portfolio may serve as evidence for several learning outcomes.

Using multiple methods to assess one learning outcome:
 - Look for same results across multiple data collections
 - Build upon or relate results from one assessment to another
 - Use data from one method (e.g., a test) to inform another method (e.g., a rubric)

4. Using the Assessment Information
Assessment is often considered "done" after data collection has ended. In order for assessment to serve its purpose, however, the data collected needs to be reviewed, discussed, and disseminated as appropriate. More importantly, actions that will be taken as a result of the data should be identified and implemented. These changes should then be assessed, leading to continual cycles of assessment and improvement of educational practices, a process often called "closing the loop."

Assessment of Student Learning and Accreditation
Standard 14 of the Middle States Commission on Higher Education (MSCHE)'s accreditation standards is dedicated to the assessment of student learning. The four steps above outline what they consider the "teaching-learning-assessment" cycle. Further information specific to MSCHE's expectations regarding this standard can be found here.

Source:
Huba, M. E. & Freed, J. E. (2000). Learner-centered assessment on college campuses: Shifting focus from teaching to learning. Boston: Allyn & Bacon.

Assessment of institutional effectiveness is parallel to assessment of student learning and involves four primary steps that serve as a continuous cycle:

  1. Define clearly articulated institutional and unit-level goals.
  2. Implement strategies to achieve those goals.
  3. Assess achievement of the goals.
  4. Use the results to improve programs and services as well as inform planning and decision-making.

1. Defining Goals
At each level of the institution, goals are developed to identify specific ways in which the mission of the division, department, or program is carried out. In some cases goals are developed in conjunction with more specific objectives indicating how the goal will be achieved. Overall, the goals establish what it is that the faculty or staff of a unit (e.g., department, program) intend to accomplish. Goals that focus on what students should learn are learning goals, not department or program goals, and fall under the realm of assessment of student learning. Using previous assessment results and conclusions can be very informative and helpful when goal setting. 

2. Implementing Strategies
While the essential functions and responsibilities of the unit tend to also serve as the strategies through which goals are achieved, it is important to ensure that each goal is intentionally addressed through the roles and responsibilities of the faculty and staff.

3. Assessing Achievement
The methods for assessing goals tend to be more varied than when assessing student learning. In some cases, the needed data is simply part of existing institutional datasets and only requires retrieval. In other cases, data needs to be actively collected and may fall into any of the categories listed on the overview page. Tools for assessing goals may include surveys, focus groups, activity logs, tracking databases, institutional datasets, checklists, and rubrics. In all cases it is important that the data collected is directly related to the goals being assessed.

4. Using Assessment Information
Assessment is often considered "done" after data collection has ended. In order for assessment to serve its purpose, however, the data collected needs to be reviewed, discussed, and disseminated as appropriate. More importantly, actions that will be taken as a result of the data should be identified and implemented. These changes should then be assessed, leading to continual cycles of assessment and improvement of practices, processes, and policies, a process often called "closing the loop."

Assessment of Institutional Effectiveness and Accreditation
Standard 7 of the Middle States Commission on Higher Education (MSCHE)'s accreditation standards is dedicated to the assessment of institutional effectiveness. The four steps above outline what they consider the "planning-assessment" cycle. Further information specific to MSCHE's expectations regarding this standard can be found here.

Enhancing and improving educational practices, processes, and policies as a result of evidence-based decision-making and change implementation is the purpose of the assessment process and the crux of the assessment cycle. Documenting the process and the resulting decision-making is important for being organized, transparent, and accountable.

Assessment Plans
A critical component of effective assessment is the planning process. Planning is important because when it takes place, proper attention is given to all aspects of the assessment process:

  • Appropriate methodology considering the goal or outcome being assessed
  • Data analysis needs and the resources available
  • Options for sampling
  • Use and review of results
  • How results will be shared
  • How the information will be used for improving teaching and learning or institutional effectiveness

Assessment planning is an activity that a program or department should undertake at the beginning of their assessment cycle timeframe, which is usually the academic year. As such, the planning process should be started by late summer and an assessment plan finalized by early in the fall semester in order to allow for adequate time to implement the plan.

What is an assessment plan?
At the most basic level, an assessment plan is a document (e.g., in Word or Excel) that outlines:

  • Student learning outcomes or department goals to be assessed during that academic year
  • Direct and indirect assessment methods used to demonstrate the attainment of each outcome or goal
  • Brief explanation of the assessment methods, including the source(s) of data
  • Indication of which outcome(s) or goal(s) is/are addressed by each method
  • Intervals/timelines at which data is collected and reviewed
  • Individual(s) responsible for the collection/review of data
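The plan components outlined above can be sketched as a simple record per outcome or goal. The field names and the example values below are illustrative, not an official template schema:

```python
from dataclasses import dataclass

@dataclass
class AssessmentPlanEntry:
    """One row of an assessment plan; fields mirror the components listed above."""
    outcome: str            # learning outcome or department goal to be assessed
    methods: list[str]      # direct and indirect assessment methods
    data_sources: list[str] # source(s) of the data for each method
    timeline: str           # intervals at which data is collected and reviewed
    responsible: str        # individual(s) responsible for collection/review

# A hypothetical entry for one academic year:
plan = [
    AssessmentPlanEntry(
        outcome="Students will document sources in text and reference lists",
        methods=["rubric-scored research papers (direct)", "student survey (indirect)"],
        data_sources=["Research Methods course papers"],
        timeline="collect in April, review in June",
        responsible="program coordinator",
    )
]
```

Keeping each entry in a uniform shape like this is what a shared template accomplishes on paper: every department reports the same components in the same format.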

Additional components of an assessment plan may include the mission of the department or program, curriculum maps aligning outcomes with courses, and a detailed implementation plan for each method or outcome/goal. There is often an assessment plan template that is utilized by all departments within a college or division to ensure that all aspects of the planning process are addressed and submitted in a consistent format to leadership for review.

Assessment Reports
Once an assessment plan has been implemented and data has been collected, it is time to further consider the various requirements and other options for reporting, or more generally, sharing the assessment information. Departments and programs are often required by leadership to submit an annual report, including a section on assessment. This information may be part of the annual report itself, or a separate document. As with assessment plans, an assessment report template is often created to ensure consistency in reporting among departments or programs within a division or college. (There are several examples of this at State University of New York at Fredonia; see College of Arts & Sciences and the Division of Student Affairs.) Making assessment reports available to stakeholders (via a website, for example, see the Fredonia Computer & Information Sciences Assessment web page) is a way to increase transparency of the evidence-based decision-making process.

What is an assessment report?
An assessment report is essentially an extension of the assessment plan. Sometimes departments or programs use one document that serves as both the plan and the report. The majority of the document is completed during the planning process, and once data has been collected, reviewed, and discussed, the reporting components are then completed. An assessment report should accomplish the following:

  • Outline the student learning or program outcomes or goals assessed during the assessment cycle timeframe
  • Identify and describe the specific assessment method(s) and tools used to gather evidence for the outcomes or goals
  • Identify the specific source(s) of the data
  • Provide brief results of each method and the extent to which the outcome or goal was achieved
  • Provide a summary or conclusions regarding the assessment process and results
  • Identify how the results will be shared and with whom
  • Identify how the assessment data contributes to decision-making and the actions that will be taken as a result of the information

The assessment plan for the next year should reflect aspects of the assessment report from the previous year, as assessment is a systematic and continuous cycle and the mechanism (the "means") for improving educational practices, processes, and policies (the "end").

In addition to assessment reports, there is a variety of other ways to share assessment information with different audiences, including websites, brochures, presentations, and social media. In particular, finding ways to share assessment results with students contributes to their increased understanding of why they are asked to participate in assessment and how they benefit from it.

An important step in the assessment process is choosing an appropriate method for collecting data. When considering how to assess your goals or outcomes, it can be helpful to start by thinking about answers to the following questions:

  • What type of data do you need?
  • Has someone already collected the information you are looking to gather?
  • Can you access the existing data?
  • Can you use the existing data?
  • Is there potential for collaboration with another individual, program, or department?
  • How can you best collect this data?
  • How will you specifically use the information you collect?

The most important aspect of choosing a method is ensuring that the method will provide the evidence needed to determine the extent to which the goal or outcome was achieved. Decisions about which assessment methods to utilize should be based primarily on the data that is needed for the specific goals and outcomes being assessed, not on past data collection efforts or convenience.

Direct and Indirect Methods of Assessment
There are many assessment methods to consider, and they tend to fall into one of two categories: direct and indirect methods. When assessing student learning in particular, direct methods are often needed in order to accurately determine if students are achieving the outcome.

  • Direct Method - Any process employed to gather data which requires participants to demonstrate their knowledge, behavior, or thought processes.
  • Indirect Method - Any process employed to gather data which asks participants to reflect upon their knowledge, behaviors, or thought processes.

For example, if a department or program has identified effective oral communication as a learning goal or outcome, a direct assessment method involves observing and assessing students in the act of oral communication (e.g., via a presentation scored with a rubric). Asking students to indicate how effective they think they are at communicating orally (e.g., on a survey-like instrument with a rating scale) is an indirect method.

Direct Evidence of Student Learning
Sources of direct evidence of student learning consist of two primary categories: observation and artifact/document analysis. The former involves the student being present, whereas the latter is a product of student work and does not require the student to be present. Here are some examples of each:

  • Observation opportunities: performances, presentations, debates, group discussions.
  • Artifact/document analysis opportunities: portfolios, research papers, exams/tests/quizzes, standardized tests of knowledge, reflection papers, lab reports, discussion board threads, art projects, conference posters.

The process for directly assessing learning in any of the above situations involves clear and explicit standards for performance on pre-determined dimensions of the learning outcome, often accomplished through the development and use of a rubric. For example, the learning outcome "Students in Research Methods will be able to document sources in the text and the corresponding reference list" could be assessed by randomly selecting papers from the course and using a rubric to determine the extent to which students are actually able to document sources. It is important to note that stand-alone grades, without thorough scoring criteria, are not considered a direct method of assessment due to the multiple factors that contribute to the assignment of grades.

Indirect Evidence of Student Learning 
In addition to the sources of direct evidence, there are also other types of data that indirectly provide evidence of student learning. While data of this nature can be useful, it is important to note that direct evidence is needed to fully assess student learning outcomes.

Examples of indirect assessments include:

  • Student participation rates
  • Student, alumni, and employer satisfaction with learning
  • Student and alumni perceptions of learning
  • Retention and graduation rates
  • Job placement rates of graduates
  • Graduate school acceptance rates

Implementation Plan
Once methods have been discussed, it can be helpful (and helps ensure timeliness) to think about the assessment implementation plan for each method:

  • What: What specific data do we need to collect?
  • Who: Who is responsible for implementing the assessment?
  • Whom: From whom are we collecting this data?
  • When: When are we collecting this data? (i.e., What is the timeline for data collection?)
  • How: How will we collect this data? (i.e., What resources will be used to collect the data?)
  • Why: Why are we collecting this data? (i.e., What do we plan to do with it?)
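The six questions above can be captured per method as a simple record, and checked for completeness before data collection begins. The method name and answers below are hypothetical:

```python
# One entry per assessment method; keys mirror the what/who/whom/when/how/why
# questions above. All values here are illustrative.
implementation_plan = {
    "senior exit survey": {
        "what": "satisfaction and self-reported learning data",
        "who": "assessment coordinator",
        "whom": "graduating seniors",
        "when": "final two weeks of the spring semester",
        "how": "online questionnaire",
        "why": "inform curriculum review discussions",
    }
}

# Every method should answer all six questions before data collection begins.
REQUIRED = ("what", "who", "whom", "when", "how", "why")
complete = all(set(entry) >= set(REQUIRED) for entry in implementation_plan.values())
```

A completeness check like this is the programmatic version of reviewing the plan with leadership: any method missing one of the six answers is flagged before the cycle starts.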

The answers to these questions are often discussed as part of the assessment planning process and may be included in assessment plan documents.

There are many websites, organizations, and books dedicated to assessment in higher education. Below is a sample of such resources.

Internet Resources
The office of University Planning and Analysis at North Carolina State University maintains a comprehensive list of "Internet Resources for Higher Education Outcomes Assessment." Topics include general resources, assessment handbooks, assessment of specific skills or content, institutions' assessment websites, and accrediting bodies.

National Organizations
The following are national organizations related to assessment in higher education:

State Organizations
The following are state organizations related to assessment in the SUNY system and New York State:

Listserv
The AAHLE organization also coordinates a listserv related to assessment. You can search the listserv archives here or sign up to be a part of the listserv here.

Books

  • Angelo, T.A., & Cross, K.P. (1993). Classroom assessment techniques: A handbook for college teachers (2nd ed.). San Francisco: Jossey-Bass.
  • Bresciani, M. J., Zelna, C. L., & Anderson, J. A. (2004). Assessing student learning and development: A handbook for practitioners. National Association of Student Personnel Administrators.
  • Huba, M. E., & Freed, J. E. (2000). Learner-centered assessment on college campuses: Shifting focus from teaching to learning. Boston: Allyn & Bacon.
  • Maki, P. L. (2010). Assessing for learning: Building a sustainable commitment across the institution (2nd ed.). Stylus Publishing.
  • Stevens, D.D., & Levi, A. J. (2004). Introduction to rubrics: An assessment tool to save grading time, convey effective feedback, and promote student learning. Stylus Publishing.
  • Suskie, L. (2009). Assessing student learning: A common sense guide (2nd ed.). San Francisco: Jossey-Bass.
  • Walvoord, B.E., & Anderson, V. J. (2009). Effective grading: a tool for learning and assessment in college. John Wiley & Sons.
  • Walvoord, B.E., & Banta, T. W. (2010). Assessment clear and simple: A practical guide for institutions, departments, and general education. John Wiley and Sons.

Please email Dr. Judith Horowitz (Judith.Horowitz@fredonia.edu) if you have suggestions for resources to add to this list.

Assessment Office

  • Office of the Vice Provost, 803 Maytum Hall, State University of New York at Fredonia, Fredonia, NY 14063
