Office of the Associate Dean for Teaching and Learning

Student Assessment of Learning and Teaching (SALT)

SALT Procedural Information

Reporting SALT results

Your individual results on SALT will be reported only to you. Data from SALT will be reported publicly within Hope College in relevant aggregates (for example, for introductory lab science courses or for First Year Seminar courses). These aggregate data will be used for departmental, program-specific and college-wide conversations aimed at improving student learning at Hope College.

SALT items that are not relevant to specific courses

Before the SALT is administered for each of your courses, you will be asked to designate each of the items listed in the SALT curricular goal questions as a primary objective, a secondary objective or not an objective of that particular course. The answers to these questions will be used to generate comparative data for cohorts of courses that share the same primary and secondary objectives, so that you can learn from the results. Students will be told, as part of the instructions during the administration of SALT, that not all items are relevant to all courses and that irrelevant items should be marked N/A.

SIR II and SALT

SALT is a course assessment that asks about Hope's specific stated skills and habits of learning. SIR II, a nationally normed instrument about general teaching behaviors, will continue to be used for teaching evaluation for those anticipating tenure and promotion decisions. We need faculty who are administering the SIRs to also administer SALT because good baseline course assessment data require responses from all courses. Students who are completing both the SIRs and SALT will be instructed to skip the teaching assessment questions on SALT. SALT contains teaching assessment questions so that faculty who are not using the SIRs can receive information for self-improvement about student perceptions of their teaching. Faculty using the SIRs will receive data from their SIRs about teaching behaviors and will get their individual SALT results about their course's contributions to Hope's skills and habits of learning. Their SALT results will also become a valuable part of Hope College's aggregate data.

Percentile scores vs. Hope averages

In some cases, an individual's course average is above the Hope average, yet the percentile score is lower than 50%. This is not an error, but a mathematical phenomenon that reflects the relationship between averages (mean scores) and percentiles (where 50 is the median, not the average). Course averages are affected more strongly by extreme scores (high or low) than are percentiles. If the average differs from the median (the 50th percentile) to a significant degree, there must be a few students with relatively extreme ratings pulling the average up or down accordingly.

The average or mean of a set of values is the sum of all of the values divided by the number of values. Many people interpret this to be a value that most of the values are relatively close to, but that interpretation does not always hold. When some of the values (commonly called outliers) in the set differ greatly from the rest, the mean can be "skewed" by those outliers. For example, if we have 5 data values, 100, 90, 85, 80 and 0, then the mean is (100+90+85+80+0)/5, which is 71. In this case, people might be surprised if you told them that 4 of the 5 values were 80 or above, but the mean was only 71.

The median of a set of values is a value which separates the data values in halves. In other words, there will be an equal number of values above the median and below the median. In the data set above, the median is 85, since there are two values above 85 and two below it. The median corresponds to a percentile of 50; a percentile of 40 means that a given value is above 40% of the data values, and below 60% of those values.

When computing the median, the low values aren't really treated any differently than if they had been just a tiny bit below the median, since median just finds the point where 50% of the values are above and 50% are below.

In the SALT data, if there are outliers, they tend to be values much lower than the others (rarely do most students give a very low rating on a question and then have a few students rate it much higher). This means that if there are a reasonable number of outliers, we may see a mean score which is less than the median. The example above illustrates this: consider the score of 80. That score is above the mean of 71, but below the median of 85. If you have further questions about this, ask your favorite mathematician.
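For readers who want to check the arithmetic, here is a minimal sketch (in Python, using only the standard library) that reproduces the example above.

    from statistics import mean, median

    # The five data values from the example: four values of 80 or above plus one outlier of 0.
    scores = [100, 90, 85, 80, 0]
    print(mean(scores))    # 71   -- pulled down by the single outlier
    print(median(scores))  # 85   -- unaffected by how extreme the outlier is

    # Replace the outlier with a value just below the median: the median does not
    # change, but the mean rises considerably.
    print(mean([100, 90, 85, 80, 84]))    # 87.8
    print(median([100, 90, 85, 80, 84]))  # 85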

Assessment and teaching evaluation

Assessment is data-gathering aimed at improvement, in this case of teaching and of learning. Evaluation is data-gathering aimed at performance review within a structure of supervision and such incentives as raises, promotion and tenure. SALT asks about student perceptions of the quality of teaching in their courses because:

  • Teaching and learning are integral to one another
  • Teaching is more likely to continuously improve when faculty know how students are perceiving their teaching

To preserve the privacy of the information, individual SALT results will be sent only to the individual faculty member; aggregate information, which will mask individuals' results, will be made available more widely. Chairs and colleagues may be helpful mentors in improving teaching. If a faculty member were to decide to share his or her results with a department chair or another faculty colleague for mentoring purposes, SALT would remain an assessment tool.

SALT and the old HCTA

SALT is a student assessment of the student’s perception of learning and the teaching that contributed to it. Many SALT questions focus on the overarching objectives of Hope’s liberal arts education. While the HCTA asked mainly about good teaching behaviors, SALT asks about how the course contributed to growth in Hope’s stated skills and habits of learning. HCTA was made available to students at the instructor’s discretion. SALT will be available to all students in all courses, unless the instructor’s dean agrees that a particular case should be an exception.

Reviewing and Interpreting Results

Brief Guidelines

Students’ responses to SALT represent their perceptions of our course and of our teaching. Students’ perceptions can provide useful information to inform and potentially improve our teaching, but they are certainly not the only source. We can improve our teaching based on students’ performance on tests, papers and other assignments; our own sense of what is or is not working in our courses; conversations with colleagues; and readings and workshops related to teaching. Decisions about the effectiveness of our teaching are best informed by converging evidence from these different sources. The guidelines are intended to suggest ways for reviewing and interpreting SALT results that may be helpful for you. They are meant to describe what you can do with your SALT results and not what you should do with them.

  • Before looking at your SALT results take some time to think about what you are trying to accomplish in this particular course. Reflect on how you think the course has gone this semester. Identify two or three aspects of the course that you want to focus on that are reflected in SALT.
  • Use the frequency distributions for each item (rather than just the average) to get an overall sense of students’ responses. Be cautious in comparing your average to the Hope average. [See full guidelines below for examples.]
  • The most informative comparisons (for improvement) are between your averages across semesters rather than your average compared to the Hope average each semester.
  • View the quantitative data and students’ written responses as complementary, e.g., use the frequency distributions to check how many students share the perspective of the written comment of one student.
  • After reviewing your results reflect on what you are learning and take notes so you can use what you have learned the next time you teach the course.
  • Decide what you are going to do. One key to using SALT results is to assess the effectiveness of changes you make in your instruction. Assessing change involves a cycle: check SALT results to identify what you want to improve; implement the change in instruction to make the improvement; check SALT results for the semester after you made the change.
  • It is likely that you will want to assess the effectiveness of a change in your instruction using measures beyond SALT. Consider adding questions to SALT that address specific objectives or instructional techniques of your course. Milly Hudgins at the Frost Center can help you add your questions to SALT.

Full Guidelines: Introduction

Students’ responses to SALT represent their perceptions of our course and of our teaching. Their perceptions can provide useful information to inform and potentially improve our teaching, but they are certainly not the only source. We can improve our teaching based on students’ performance on tests, papers, and other assignments; our own sense of what is or is not working in our courses; conversations with colleagues; and readings and workshops related to teaching. Decisions about the effectiveness of our teaching are best informed by converging evidence from these different sources. These guidelines are intended to suggest ways to review and interpret SALT results that may be helpful for you. They are meant to describe what you can do with your SALT results and not what you should do with them.

There are three sections to these guidelines. The first section includes General Guidelines that describe a series of steps for reviewing and interpreting your SALT results. These guidelines can be used for both the Course Assessment and Teaching Assessment sections of SALT. You need not include all the steps in your review nor follow the steps in the sequence described in the guidelines. The second section provides a description of why we need to be cautious when we interpret comparisons of our individual average to the Hope Average. The third section describes guidelines that are specific to the Course Assessment section. If you are interested in discussing your SALT results with someone confidentially, you can go to SALT faculty consultants to access a list of colleagues who are familiar with these guidelines and who are eager to work with you.

Full Guidelines: General

Step 1: Review SALT Results in Light of Your Instructional Priorities

  • Before looking at your SALT results take some time to think about what you are trying to accomplish in your course and reflect on how you think the course has gone this semester.
  • Based on these reflections identify two or three aspects of the course that you want to focus on or that are especially important to you for this course and that are reflected in SALT, e.g., the “write effectively” item from the Course Assessment section or the “provided helpful feedback on assigned work” item from the Teaching Assessment section.
  • Skim students’ written comments to get a sense of their responses for the SALT items that you have identified.
  • Review frequency distributions for these SALT items to get a sense of all the students’ responses. [You may want to compute the percentage of students who respond “A Great Deal” or “Quite a Lot” on the Course Assessment or “Strongly Agree” or “Agree” on the Teaching Assessment; a short sketch after this list shows one way to do this.] If you have taught more than one section of the course you could combine the frequency distributions across sections.
  • Compare the frequency distribution and written comments. When a student expresses a specific concern, you can check the frequency distribution to see whether the student’s written response is articulating a concern shared by other students or whether the student has a genuine concern that is not consistent with the other students’ responses in the frequency distribution. In general, it is more helpful to view students’ written responses and their quantitative responses as complementary rather than as two independent and separate sources of information.
  • Reflect and take notes on what you have learned about the SALT items you have identified. Describe how satisfied you are with each of these aspects of your course.
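As an illustration of the bracketed suggestion above, here is a minimal sketch (in Python) of computing that percentage from a frequency distribution. The response counts shown are hypothetical; substitute the counts from your own SALT report.

    # Hypothetical counts for one Course Assessment item; substitute the
    # counts from your own SALT frequency distribution.
    counts = {
        "A Great Deal": 6,
        "Quite a Lot": 9,
        "Somewhat": 4,
        "A Little Bit": 2,
        "Not At All": 1,
    }

    favorable = counts["A Great Deal"] + counts["Quite a Lot"]
    total = sum(counts.values())
    print(f"{favorable}/{total} students ({100 * favorable / total:.0f}%) gave one of the top two ratings")
    # -> 15/22 students (68%) gave one of the top two ratings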

Step 2: Overall Review of SALT Results

  • Read over the students’ written comments for all the items you did not consider in Step 1.
  • Review the frequency distributions for these items and consider ways in which they are consistent or inconsistent with students’ written comments.
  • Identify the SALT items for which you think you are “on track.”
  • Identify the SALT items that reflect aspects of your course that you think you could do more effectively.

Step 3: Make Instructional Changes and Assess Their Effectiveness

  • Assessing instructional changes involves a cycle: (1) check SALT results to identify what you want to improve; (2) implement the change in instruction to make the improvement; (3) check SALT results for the semester after you have made the change.
  • Identify assignments or instructional techniques that are linked with the SALT items for which you want to assess an instructional change.
  • Consider how you could modify the assignment (technique).
  • Make the change when you next teach the course.
  • Assess the change by comparing your average rating for the previous semester to your average rating for the semester in which you made the change. The cycle of making a change and assessing its effectiveness is likely to be repeated as you refine the instructional change across a few semesters.
  • The most informative comparisons (for improvement) are between your averages across semesters rather than your average compared to the Hope Average. A description of why we should be careful in making comparisons using the Hope Average is included in the second section of the guidelines.
  • Example: Using SALT Ratings for Assessment
      • Assignment Related to Writing Objective – disappointed in how students were writing rough drafts of research report.
      • Modification of Assignment – submit rough draft to instructor and to student partner (as had been done in the past); students were told that the rough draft would be assigned up to 2 points on a 50-point assignment but no specific feedback would be given on the rough draft.
      • Made Change – Spring 2010.
      • Assess Effectiveness of Change – compare average rating for Writing Objective for Fall 09 (3.93) to average rating for Spring 10 (4.14); small increase but in the right direction.
      • Continue cycle in Fall 2010 – emphasize rough draft even more by increasing points assigned to rough draft and providing feedback on rough drafts.
  • Example: Using Measures Other than SALT Ratings for Assessment.
      • Assess Assignments Related to “Weigh Evidence” Objective – class exercises; challenge questions from text; test performance.
      • SALT item for this objective is too general and not likely to be a helpful measure for effectiveness of these specific assignments.
      • Goal of this assessment is to determine how much students perceive that each of these assignments is contributing to achieving the “Weigh Evidence” objective.
      • Add questions to SALT that are directly related to each assignment. Laurie Van Ark at the Frost Center can help you add your questions to SALT. For example, “Degree to which the class exercises helped me to weigh evidence” (A great deal; Quite a bit; Somewhat; A little bit; Not at all; No Response). These questions could also be administered in class or on Moodle.
      • Follow the cycle to determine if each assignment is contributing to the “Weigh Evidence” objective; make modifications in assignments that are less effective than you want them to be; assess effectiveness of change by comparing average change for each question across semesters.

Step 4: Overall Assessment of Value of the Course, Work Load, Overall Average Hours Worked, and Overall Teaching Effectiveness

  • Review the frequency distributions for these four dimensions.
  • Compare your average to the Hope Average to provide a rough idea of students’ perceptions for these dimensions relative to other courses. [Please see the following section on why we should be cautious in interpreting comparisons to the Hope Average.]
  • Decide whether you want to make instructional changes based on students’ perceptions on any of these dimensions. If so, you can follow the cycle for making instructional changes and assessing their effectiveness.
  • The goal of these efforts is to communicate with students more effectively about these important dimensions of your course.

Be Cautious when Comparing Your Averages to the Hope Average

Comparisons to an overall average can provide useful information. Health professionals use this type of comparison to determine if an individual is overweight or if their cholesterol level is too high. Holland was identified as a “happier than average” community based on a comparison to a national average. The Hope Average also provides a comparative value for interpreting your averages for the items on SALT. An instructor whose averages on the SALT items are above the Hope Average may feel “happier” than the average instructor. You need to be cautious, however, when you make these comparisons, especially when your average for an item is below the Hope Average and your class size is small (< 30).

The Hope Average is based on literally thousands of students' ratings, but most students respond similarly, so a large number of ratings fall near the Hope Average. The result of this lack of variability in students' responses is that a small difference between your average and the Hope Average can translate into a large difference in percentile rank. Being at the 60th percentile rather than at the 30th percentile suggests a substantial difference, but those two percentiles may reflect only a relatively small difference between your average and the Hope Average. For example, the Hope Average is 3.36 for "Speak effectively," but an average of 2.95 is the 30th percentile and a 3.68 is the 60th percentile. Be attentive to the absolute size of the difference between your average and the Hope Average when you make that comparison.
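To see how a small difference in averages can become a large difference in percentiles when ratings cluster together, consider the following minimal sketch. The list of course averages is hypothetical and purely illustrative, not actual SALT data.

    # Hypothetical course averages for one SALT item, clustered near the overall
    # average (as SALT averages tend to be). Illustrative numbers, not real data.
    course_averages = [3.1, 3.2, 3.25, 3.3, 3.3, 3.35, 3.35, 3.4, 3.4, 3.45,
                       3.5, 3.5, 3.55, 3.6, 3.6, 3.65, 3.7, 3.75, 3.8, 3.9]

    def percentile_rank(value, values):
        """Percentage of the values that fall below the given value."""
        below = sum(1 for v in values if v < value)
        return 100 * below / len(values)

    # Two course averages that differ by only 0.3 of a rating point...
    print(percentile_rank(3.3, course_averages))  # 15.0
    print(percentile_rank(3.6, course_averages))  # 65.0
    # ...land 50 percentile points apart, because most averages sit close together.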

Comparing the averages for the SALT items to the Hope Average is also affected by the relatively small samples on which the averages are based. The enrollments in many of our classes are around 30 or fewer students. With such small samples the estimate of the average can be affected by the ratings of only a few students. The impact of an individual student’s rating differs depending on whether the student assigns a relatively low rating or a relatively high rating for an item.

The potential impact of individual ratings on the average for a SALT item can be illustrated with the results from a course that one of our colleagues taught in Spring 2010 with 19 students. The data are for the SALT item, "Presented material in a clear and organized manner."

  • The frequency distribution was: SA(14); A(3); Neutral(1); D(1); SD(0)
  • The average for this item was 4.58 [Hope Average = 4.08]
  • What happens when you drop lower ratings from the distribution?
      • When the rating for the one student with a Disagree response was dropped, the average changed from 4.58 to 4.72.
      • When the ratings for the one student with a Disagree response and the one with a Neutral response were dropped, the average changed from 4.58 to 4.82.
  • What happens when you drop higher ratings from the distribution?
      • When the rating for one of the students with a Strongly Agree response was dropped, the average changed from 4.58 to 4.56
      • When the ratings for two of the students with Strongly Agree responses were dropped, the average changed from 4.58 to 4.53
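The numbers in this example can be reproduced with the short sketch below (a sketch in Python, assuming the usual scoring of Strongly Agree = 5 down to Strongly Disagree = 1).

    # Ratings reconstructed from the frequency distribution above:
    # SA(14), A(3), Neutral(1), D(1), SD(0), scored 5 down to 1 (assumed scoring).
    ratings = [5] * 14 + [4] * 3 + [3] * 1 + [2] * 1

    def avg(values):
        return round(sum(values) / len(values), 2)

    print(avg(ratings))                         # 4.58 -- the reported course average
    print(avg([r for r in ratings if r != 2]))  # 4.72 -- the one Disagree dropped
    print(avg([r for r in ratings if r > 3]))   # 4.82 -- Disagree and Neutral dropped
    print(avg(ratings[1:]))                     # 4.56 -- one Strongly Agree dropped
    print(avg(ratings[2:]))                     # 4.53 -- two Strongly Agree responses dropped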

Comparisons of your individual average to the Hope Average are most informative when the estimate of your individual average is stable. As the previous example illustrates, individual ratings at the lower end of the distribution can have a fairly large impact on the average for a SALT item. This potential impact indicates that the average may not be as stable as it could be. The possibility that the average for a SALT item may not be stable is one good reason to be cautious when comparing your average to the Hope Average.  When making comparisons to the Hope Average it is important to look carefully at the frequency distribution for the SALT item before interpreting any difference between your average and the Hope Average. This is especially important with smaller enrollments (e.g., < 30).

The importance of looking carefully at the frequency distribution is illustrated by another example from two sections of the same course that one of our colleagues taught in Spring 2010. The data are for the SALT item “Understand as I read/listen/view.” This was a primary objective for the course (Hope Average = 3.96).

The frequency distributions for the two classes were:

Section 1: Great Deal (2), Quite a bit (7), Somewhat (10), Little bit (1), Not at all (0)

Section 2: Great Deal (4), Quite a bit (10), Somewhat (3), Little bit (1), Not at all (0)

  • The two frequency distributions are similar with ratings of “Quite a bit” or “Somewhat” for 17/20 students in Section 1 and 13/18 in Section 2.
  • The average rating for Section 1 was 3.5 (21st percentile) and the average rating for Section 2 was 4.0 (50th percentile).
  • The absolute size of the difference between the averages for the two sections is only .5.
  • The average of 3.5 for Section 1 corresponds to a rating between “Quite a bit” and “Somewhat” suggesting the objective is being met reasonably well in the course, but the 21st percentile might be taken to mean that there is a serious problem with meeting the objective in this course.

The take-away message from these two examples is that you need to look carefully at the frequency distribution before interpreting a comparison of your average to the Hope Average (or a percentile).

Full Guidelines: Course Assessment

Comparison of Objectives for Skills and Habits of Learning

  • There is an issue that needs to be considered for the Course Assessment portion of SALT. Students’ ratings are somewhat different for the Skill Objectives [mathematics, technology & library, writing, and speaking] from those for the Habits Objectives [weigh evidence, understand as I read/listen/view, understand cultural development, creative and innovative, curious and open, intellectual courage, integrity/compassion/faith]. The table below shows the average ratings for each objective for the primary, secondary and not-an-objective categories for Spring 2010.
Objective        Primary   Secondary   Not-an-Objective   Primary-Not
Logic/Evidence     3.98       3.71          3.46               .52
Mathematics        4.32       3.20          1.87              2.45*
Understanding      3.96       3.80          3.73               .23
Technology         4.07       3.34          2.67              1.40*
Writing            4.16       3.59          2.48              1.68*
Speaking           4.06       3.51          2.76              1.30*
Cultural Dev       4.16       3.69          3.12              1.04
Creativity         4.14       3.57          3.28               .86
Curiosity          4.05       3.80          3.64               .41
Courage            4.11       3.90          3.88               .23
Integrity          4.22       3.93          3.70               .52
  • In general, students’ ratings should be higher for primary objectives than for the not-an-objective category, and all the objectives show the expected difference. The last column of the table shows the difference between the average rating for the primary-objective category and the average rating for the not-an-objective category. The differences for the Skills objectives (identified with an asterisk) are larger than all of the differences for the Habits objectives. One of the reasons for the larger differences is that the averages for the not-an-objective category for the Skills objectives are much lower (between the ratings of “a little bit” and “somewhat”) than the averages for the Habits objectives (between the ratings of “somewhat” and “quite a bit”). It seems that students can tell more clearly when a Skills objective has not been covered in a course.
  • The data for both the Skills objectives and the Habits objectives do provide useful information for assessing students’ perceptions of our courses.
  • If you want to assess the effects of instructional changes you have made, however, it will be easier to do that with the Skills objectives.

Integrating SALT Objectives into Your Course and Integrating Your Course Objectives into SALT

  • The SALT objectives are based on the objectives of the General Education Curriculum. Instructors are likely to have objectives for their courses that go beyond the SALT objectives. There are at least two ways to integrate these two different types of objectives.
  • One way is to integrate the SALT objectives into your syllabus to indicate to students how the assignments in your course connect to the SALT objectives. The following table illustrates how one instructor integrated the SALT objectives into the course.
Objective                                         Where Practiced in Social Psychology
Critical thinking                                 Evaluating studies & theories (TAs, lab paper, exams)
Mathematical thinking                             Explaining results of your lab, understanding results of other studies
Critical reading with sensitivity                 Lab design & conclusions, evaluating theories, Paper 1
Written Communication                             Exams, TAs, Paper 1, Lab Paper
Oral Communication                                Lab group, class, lab presentation
Analytic & synthetic thinking                     Seeing themes, developing hypotheses, TAs, Paper 1
Creativity                                        Designing your lab, lab presentation, TA & test answers requiring new examples, Paper 1
Curiosity and openness to new ideas               Designing your lab, Paper 1, evaluating theories, class discussions, IAT
Intellectual courage and honesty                  Talking in class, TA answers, Paper 1
Moral & spiritual discernment & responsibility    Evaluating one's own values, biases (IAT, TAs)
  • You can also integrate your own course objectives with the SALT objectives by adding new questions assessing your own objectives to the SALT form.  Laurie Van Ark at the Frost Center can help you add your questions to SALT. These questions could also be administered in class or on Moodle.

Other SALT Resources

Tips from Faculty to Increase SALT Participation

  • I explained to them up front how their feedback helps me know what works and does not work in class, including a couple of examples of how I've changed my class based on previous feedback. I also kept them up to date on what percentage of the class had replied and said the goal was 100% participation. No special deals were offered, and the only class time used was a couple of minutes at the beginning of class on about three occasions.
  • I did nothing “special” but did engage my students in conversation about SALT and encourage them to participate. They had some questions and we did spend 10-15 minutes in dialogue about them but that was about it. I did start encouraging them early though and they were looking for the site even before it was up and running.
  • I had the computer in my classroom and in my office on the SALT link. On the second-to-last class I let each student cycle through the computer in the room as well as the one in my office as I was teaching. They each stepped out of class for the few minutes it took to complete the survey. I did tell them on the previous class (Monday) they would do the SALT on Wednesday and my SIRs on Friday. This kept the evaluations to one per day. Since they were using class time they all participated without question.
  • I sent my students one email describing the bonus (see below) and mentioned it briefly in class but did not use any class time for them to do it. The bonus points were based upon the number of students who participated and my intent was to unleash the power of peer pressure. Evidently it worked as even the one student who missed class about 80 percent of the time still did the online SALT survey. I had included a statement regarding assessment in my syllabus at the beginning of the semester in response to the Provost's email requesting that faculty do so. Thus I was able to refer to that statement in my email to students. Statement in the e-mail: “Since we are engineers, let's write a mathematical expression ... For N > 20, if exactly N students complete the SALT survey by the deadline this Friday, I will give an (N-20) point bonus on Exam 3 to each student in the class. There are 27 students in the class. You do not need to give me anything as the system counts the number of responses received. You are welcome to provide encouragement to your fellow students to complete the survey. I will provide updates on the number of students completing the survey in the next few days leading up to the deadline. I value knowing your views of this course and will be able to consider SALT data as I consider what changes to make the next time I teach it. These data also may benefit the engineering department as a whole as evidence for the continuous improvement of our program, which is a factor supporting our ABET accreditation. Thank you for taking the time to participate in SALT.”
  • In the nursing department we have students in all of our courses complete online course evals. To make sure that students do this, we schedule part of our last class day in a computer lab so we have them "captive," so to speak, and know they have completed the online course eval, since we need a high percentage of participation for our accreditation processes. So while I had the students in the computer lab, I also asked each one if they had completed the online SALT, as well. Many, however, told me they had already completed it prior to that day when they received your email with the link, so I think your email was a big contributor to the success, also!
  • My course meets for all contact hours in a computer lab, so I am able to remind them to do the assessment and say "you can do it right now." There was also an incentive for 100% response rate (everyone got extra credit) in the class. I think I also sent an email or two to remind students.
  • I went over the SALT in class (even though they got mad at me and said "we already know how to do this") and had a "devil's advocate" discussion in class the day they were supposed to do it. I know that reinforced the importance of doing it and got them charged up enough (=a bit mad at me) to want to do it. I checked the response rates and let them know how they were doing. When there were still two "hold outs" I promised home made cinnamon rolls if those two did it! The whole class took them to the computer lab after class that day. (I know this is bribery, but it worked.)
  • In my case, we have a good chunk of time we spend in the CAE lab using the computers, so it was easy to encourage them to simply get it done. I did discuss it with them saying that they have been chosen for an important study and that they should take it seriously... nothing special.
  • My asking my class got 6/23. Offering 1 point out of 400 got 16/23; three more reminders (thanks to CIT too) and the numbers rose to 21/23.
  • No incentives. I either sent them an e-mail or told them in class how important the survey was, and how important that we hit 100%. I then followed up a couple of times — e-mail and in class— to let them know there were still N who hadn't yet responded, and gave them a deadline.