Assessment is Not a Survey: Some Things to Consider in Your Daily Exchanges

This review is here to help take some of the mystery out of assessment lingo. As mentioned in the title, a survey is an assessment tool, not the whole of assessment. Other assessment tools include pre- and post-tests, rubrics, quizzes, observations, and interviews. The list of tools is a long one; however, the tools are not the assessment itself. Hopefully this outline can serve as a primer, or a reminder, of some of the language of assessment so that in our daily exchanges we can sound assessment savvy!

Assessment, Evaluation, and Research – Oh My!

There is a difference between these terms, and in this case, semantics matter. There are, of course, classes, books, and professionals devoted to each one of them. You are better served by knowing the variations and when to use which term. Data and information are part of all three, which makes it easy to confuse them. What separates assessment, evaluation, and research is the purpose.

Assessment can be simply remembered as the collection and use of information or data to improve something. It can also be broadly applied as an umbrella of activities and actions, and at times, those activities and actions might look like, or even be, evaluation or research.

Evaluation can be thought of as the collection of data or information to judge the quality of something. Evaluation could lead to an improvement and the judgement might be of use to someone, but that is not always the original intent. Conversely, research is the collection and use of data or information to inform and learn. Research can create new knowledge or confirm a theory but is not meant for continuous improvement or judgement.  

Therefore, if you focus on the purpose, you can better choose the word that fits your intention:

  • Assessment – Improvement
  • Evaluation – Judgement
  • Research – New Knowledge

Is It Indirect or Direct? That Is the Question.

The data we collect is important. Data informs our work, helps us improve our practice, and assists in our understanding of the contribution we make to student success. Do we know what kind of data we are using?

Indirect data is the result of an assessment process which asks subjects to reflect upon their knowledge, behavior, or thought processes. An example might be a survey question that asks, “How useful was the training to your role as a student leader?” Direct data is a result of an assessment process which requires subjects to display their knowledge, behavior, or thought processes. Direct data might be an advisor’s observation of a student leader navigating a challenging situation.

When collecting data, ask yourself whether you need to see or observe the learning or outcome to know it exists, or whether you can get a solid understanding by asking a student about their perception of it. If you need to see the learning or outcome to know it exists, you’ll need a direct measure; if not, an indirect measure remains an option. Indirect measures, like a survey, may sometimes feel like the easiest or only way, but that may not be true. Good survey development takes time, and getting reliable data is often challenging. The gold standard is to use a direct measure when you can, but sometimes an indirect measure may be all we can get.

What Kind of Outcomes Are These Anyway?

When assessing outcomes, it is helpful to consider learning, programs, and operations. Learning outcomes describe significant and essential learning that students have achieved; generally, learners can reliably demonstrate what they gained through a course, program, or activity. Knowing what learning should happen makes it easy to build a record of learning outcomes. Learning outcomes are not limited to a classroom setting: students learn “on-the-job” skills as employees and leaders that they can then demonstrate. Knowing what a program or activity is intended to impart to a student results in the learning outcome.

A program outcome addresses what a program contributes to students or an institution, and how. Program outcomes may include a level of performance for the program. One result of having a program is a tangible benefit the student or institution can point to. A program outcome looks at a cumulative effect: leadership programs have overarching outcomes, while individual workshops may have more targeted learning outcomes.

Operational outcomes address regular or procedural tasks that relate to a service or product. Operational outcomes can typically be counted (e.g., number of users). We need to collect operational outcomes to know whether we need to make changes in our daily functions and operations. Patterns in operational data help inform the work we do. Each type of outcome has an important place in assessment and can help tell your story! In many cases, we need to be using them all.

Is This Reliable or Valid?

Knowing whether the data we are using is valid, reliable, or both is important. But are the two different? Validity is the quality of being logically or factually sound. Validity tells us how well an assessment tool measures what we think it should; valid data represents what we believe it represents. Reliability is the quality of yielding the same results on repeated trials; reliable data tells us what we can expect. Students may reliably attend a workshop, but attendance alone is not a valid measure of whether they learned what the presenter shared.

Are We Forming or Summing?

The last set of assessment terms is formative and summative assessment. Formative assessment monitors student learning through ongoing feedback to improve an instructor’s teaching or a student’s learning. Formative assessments might be as simple as a show of hands or a quick share-out, or more complex, like a presentation or an ungraded quiz. Summative assessments evaluate outcomes at the end of a cycle or a specified unit of time or work; workshop evaluations and final exams are examples of summative assessment tools. Both forms are useful for gathering information and data, and knowing which type you are using may influence how the data is interpreted and applied.
