Instructional Assessment Initiatives and Challenges

During spring term, fellow librarian and resident assessment expert John Cocklin and I developed a series of professional development activities related to the assessment of library instruction. We read and discussed a relevant article (Battersby, M. (1999). “So, What’s A Learning Outcome Anyway?”), hosted a hands-on workshop for library staff, and developed a research guide for library staff and others to expand their knowledge of various instructional assessment techniques.

As the culminating event of the series, we invited guests from Saint Anselm and Keene State to speak on their experiences, and I wrote up my notes from the presentations on the Dartmouth Library Blog, Library Muse.

Kathy Halverson and Jeff Waller were very kind to share their time and perspectives on instructional assessment and on librarianship in general, and I learned a great deal from their presentations. What struck me immediately was the difference in culture and attitude toward standardized assessment between their two institutions and my own. In general, our faculty do not value standardized testing as a method for assessing student learning. Certainly there are problems with relying on standardized testing as the primary approach to classroom assessment, as documented in the K-12 U.S. education system. However, as our guests made clear, there are established methods for evaluating student learning of key critical thinking and information literacy skills, and a standardized methodology makes it possible to compare classes or cohorts across institutions.

Of note, the initiatives to measure students’ information literacy at Keene State and Saint Anselm were both put forth by administrators as part of broader goals to present evidence of educational or curricular achievement. Because we have not had the same type of top-down mandate, our efforts in the library to begin exploring methods for assessing student performance must be more sensitive to faculty and administrative authority. We have greater leeway to experiment with various types of assessment, provided we can make the case to individual faculty members and departments. However, it will not be possible to present a unified picture of students’ information literacy learning (or the impact of library education efforts to increase their skills in this area) without some type of comparable results across courses, departments, or classes of students (e.g., freshmen vs. seniors). Without a mandate from above, it is unlikely that faculty will broadly implement assessment measures that can be used to gauge student learning at the institutional level.

But broad assessment measures do not have to take the form of an over-simplified standardized test. Personally, I prefer a rubric to a standardized test, because it combines the standardization element with an authentic use case. Rather than having students take a separate test for the sole purpose of evaluating them, faculty can create an assignment and then use a standardized rubric to evaluate students’ application of information literacy skills to the project. When carefully constructed, such a rubric can be used to assign a grade to the assignment, provide constructive feedback to the student, and simultaneously compare one student’s learning with that of others in her group, course, or even across classes that use the same rubric to evaluate different types of assignments. I have been searching for such tools and trying to develop rubrics from my own experience, and will continue to post on this topic.
