Fall 2002 SRTE Scores

September 2003

This summary is intended as a complement to other analyses that have examined issues such as the statistical reliability and validity of Penn State’s Student Rating of Teaching Effectiveness (SRTE). It may also inform discussions of policy and practice in the evaluation of teaching at the University.

Fall 2002 Data

In brief, the linked tables and graphs [PDF] show that:

  • University-wide, the average score for “quality of course” is 5.5, and for “quality of instructor” the average is 5.7 (on a seven-point scale).

  • There is relatively little variation among averages when the data are sliced in the obvious ways (rank, gender, appointment type, college, and so on); a brief sketch of this kind of tabulation appears after this list. For example, among University-wide averages:

    • There is no difference among assistant professors, associate professors, and full professors.

    • There is essentially no difference in average scores between standing and fixed-term faculty.

  • There is a bit more variation on two dimensions:
    • On average, teaching assistants score about 0.3 lower than regular faculty.

    • There is also some variation across colleges and campuses; the average scores by college all fall within about 0.4 of the University-wide average.

  • Consistent with other research (see below), ratings are positively related to course level. Graduate courses are rated about 0.3 higher than undergraduate courses, on average, and upper-division undergraduate courses are rated about 0.1 higher than lower-division courses.

  • Probably the most commonly asked question about student ratings is whether such ratings are related to grades. In other words, do instructors who are easy graders get higher student ratings? The data for Penn State’s SRTE are consistent with an extensive research literature on this question, to wit: student ratings are mildly and positively correlated with grades. That correlation accounts for only about five to ten percent of the variation in SRTE scores (the share of variation explained is the square of the correlation coefficient), and it can be argued that such a connection is reasonable if one believes that more effective teaching, higher grades, and superior student ratings should go hand in hand. In any case, four graphs illustrate the distribution [PDF] of Penn State’s SRTE scores by expected grade in Fall 2002.
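For readers who want the arithmetic behind the “five to ten percent” figure, here is a minimal worked example; the correlation values are assumptions drawn from the range typically reported in the research literature, not figures computed from the Fall 2002 data:

\[
r = 0.22 \;\Rightarrow\; r^2 \approx 0.05, \qquad r = 0.32 \;\Rightarrow\; r^2 \approx 0.10
\]

That is, a grade-to-rating correlation of roughly 0.22 to 0.32 corresponds to about five to ten percent of the variation in ratings.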
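To make the tabulations above concrete, the following is a brief sketch of how such group averages and the grade-to-rating correlation might be computed; the column names and values are invented for illustration and are not drawn from the actual SRTE files.

```python
# Illustrative sketch only: the column names and numbers below are
# hypothetical, not the actual Fall 2002 SRTE data.
import pandas as pd

# One row per (hypothetical) course section.
srte = pd.DataFrame({
    "quality_instructor": [5.8, 5.4, 6.0, 5.6, 5.9, 5.3],
    "rank": ["Assistant", "Associate", "Professor",
             "Assistant", "Associate", "Professor"],
    "expected_grade": [3.3, 2.9, 3.6, 3.1, 3.5, 2.8],  # GPA scale
})

# Slice average "quality of instructor" by rank, as in the list above.
print(srte.groupby("rank")["quality_instructor"].mean().round(2))

# Correlation between expected grade and rating; its square is the
# share of variation in ratings "explained" by grades.
r = srte["expected_grade"].corr(srte["quality_instructor"])
print(f"r = {r:.2f}, r^2 = {r**2:.2f}")
```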

Additional Information

As noted above, other Penn State reports have investigated additional aspects of the SRTE. An excellent guide for use of the SRTE, along with discussions of alternative evaluation methods, can be accessed through Penn State’s Schreyer Institute for Teaching Excellence.

A 2003 report to the University Faculty Senate provides information on how the SRTE is actually administered by departments and makes recommendations on those practices. It is available online through the University Faculty Senate.

A 1997 report to the University Faculty Senate addresses some of the most frequently asked questions about the SRTE’s statistical and methodological properties and summarizes what the relevant research literature says about them. That report can be accessed on the website of Penn State’s Office for Planning and Institutional Assessment.