Program Assessment

Assessment of Student Learning @ Penn State

In 2006, Penn State launched a procedure to assess student learning across its major degree programs. Since then, the Assessment Coordinating Committee and the Schreyer Institute for Teaching Excellence have provided support (e.g., events, meetings, and consultations) to the department and program assessment committees and facilitators who assumed responsibility for the assessment procedure.

1st Phase of the Assessment Procedure: The departmental or program committees developed an assessment plan in which they identified the program goals and selected one program objective in order to examine the degree to which students in the program attain it. The committees also specified in the plan the types of evidence they would use to document students’ progress toward the selected objective and the methods they would apply to collect that evidence of learning.

As a graduate assessment consultant, I developed informational resources to introduce faculty to the program assessment procedure. Specifically, I created the following three resources, which were incorporated into the handouts for the assessment conversations and into Penn State’s assessment website:

1) A graphic representation of the Program Assessment Cycle that aligns with Penn State’s approach to learning outcomes assessment.

2) A summary of the different types of evidence that can be used to document student learning with respect to a specific program objective. The examples of possible types of evidence were compiled from evidence that programs at Penn State and at other institutions reported having used, or planning to use, for assessment purposes.

3) A rubric that can be used to provide feedback about the quality of the program assessment plan developed by a department or program committee.

Internal Assessment @ The Schreyer Institute for Teaching Excellence

Consultations lie at the core of our practice as faculty developers. However, evaluating consultations is neither a common nor a consistent practice in our field. Colleagues have urged us to undertake a thorough and systematic assessment of consultations that “will allow us to improve our practice, document the impact of our services, and support the scholarly foundations of our profession” (Rohdieck, Kalish, & Plank, 2012, p. 1). Several methods have been used to assess consultations, including exit surveys, semester or annual surveys, audio and video recordings of consultations, peer observations, and journaling. The method varies depending on the goals of the assessment and the stakeholders who will review and discuss the results.

The Development of the Consultation Feedback Form: In Fall 2012, I designed a Consultation Feedback Form for the Schreyer Institute for Teaching Excellence in order to assess the quality of the consultation services and gauge their impact by surveying:

  1. The client’s satisfaction with the behaviors and skills of the consultant.
  2. The client’s perceptions of the aspects of the consultation (i.e., the process and shared resources) that made the consultation experience positive, resourceful, and effective.

To design the Consultation Feedback Form, I followed a systematic process that directly involved the Institute’s consultants in its development. Specifically:

  1. I reviewed the literature pertinent to the evaluation of educational development programs offered at teaching centers with a special focus on publications that targeted the assessment of consultations.
  2. I posted a request on the POD listserv to collect consultation feedback forms currently used at other institutions, and I received four forms for review purposes.
  3. I prepared a focus group protocol and used it to guide a focus group with five of the eight consultants at the Schreyer Institute, in which we discussed the consultants’ expectations and ideas about the consultation assessment process.
References

Chism, N. V. N., & Szabó, B. (1997). How faculty development programs evaluate their services. Journal of Staff, Program, and Organization Development, 15(2), 55-62.

Jacobson, W., Wulff, D., Grooters, S., Edwards, P., & Freisem, K. (2008). Reported long-term value and effects of teaching center consultations. In L. B. Nilson & J. E. Miller (Eds.), To improve the academy: Vol. 27 (pp. 223-245). San Francisco, CA: Jossey-Bass.

Milloy, P. M., & Brooke, C. (2004). Beyond bean counting: Making faculty development needs assessment more meaningful. In C. Wehlburg & S. Chadwick-Blossey (Eds.), To improve the academy: Vol. 22. San Francisco, CA: Jossey-Bass.

Plank, K. M., & Kalish, A. (2010). Program assessment for faculty development. In K. Gillespie, D. L. Robertson, & Associates (Eds.), A guide to faculty development (2nd ed.). San Francisco, CA: Jossey-Bass.

Rohdieck, S. V., Kalish, A., & Plank, K. M. (2012). Assessing consultations. In K. T. Brinko (Ed.), Practically speaking: A sourcebook for instructional consultants in higher education (2nd ed.). Stillwater, OK: New Forums Press.