[The Montana Professor 25.1, Fall 2014 <http://mtprof.msun.edu>]

Performance Based Funding: A Brief Personal History

Marvin Lansverk, PhD
Professor of English
MSU-Bozeman

The national experiment with Performance Based Funding (PBF) has finally come to the Montana University System in earnest. PBF has been implemented for years in other states with mixed success (notably Tennessee, Ohio, and Kansas, and now many others), and Montana has now joined the bandwagon. With what success remains to be seen, but as with many major initiatives in our complex and diverse university system in these polarized political times, seeing clearly just what is at stake is important if we are to achieve our common goal of continuing to improve Montana's higher education opportunities. Mutual understanding matters, so that we at least know what one another is talking about, because different constituencies approach many issues in higher education in vastly different ways. What follows, then, is a brief attempt to sort out the history of Performance Based Funding, the various constituencies involved, its promise, and its problems, so that no matter which direction you approach it from, you will be better able to read between the lines of the language of its promoters and detractors.

I remember the first time I encountered the concept of performance based funding. It was at a Board of Regents meeting in Billings, five years ago, on September 23, 2009. Dennis Jones, of the National Center for Higher Education Management Systems (NCHEMS), had been hired as a consultant by our Board of Regents, and he was making a presentation on the second day of the meeting, as he continued to do throughout the next couple of years. Many of his PowerPoint slides set Montana statistics against national data—something NCHEMS has been doing for years. NCHEMS was an early player in creating and using new statistical metrics for understanding trends in higher education, especially trends in population and funding, to aid in making data-driven decisions, akin to the increased use of Big Data in baseball—fascinating, though I'm not sure the movie version will ever compete with Moneyball. What really grabbed my attention, though, was the set of recommendations at the end of Dennis Jones' presentation. It was an eclectic mix of suggestions for gaining efficiencies in higher education, especially in an era of decreased public funding. Among the ideas was to begin employing some percentage of performance based funding mechanisms when state money was distributed across campuses. Another suggestion for increasing efficiency—and one I remember very well, since I was attending the Board of Regents meeting in the first place because of my role as Chair-Elect of the Montana State University Faculty Senate—was to reduce wasted time and duplication by not having faculty participate in meetings such as the one I was attending. Jones explained that valuable faculty time shouldn't be used to participate in, or monitor, administrative decision making: that's what administrators were paid for in the first place. Another suggestion for Next Generation higher education, among others, was to move toward an educational system where highly trained faculty spent less time in the classroom and more time designing curriculum, which could then be delivered by TAs and other much lower paid adjunct faculty, under the supervision of tenure-track faculty who would be consultants more than hands-on teachers. I'm not sure how the research component of our job fit into his views.

While Dennis Jones was making his presentation, I remember going to the NCHEMS website to see who he was. As their site explained, NCHEMS "is a private nonprofit 501(c)(3) organization whose mission is to improve strategic decision making in higher education for states and institutions in the United States and abroad." And on the "President" page, Mr. Jones' background was described; he was trained as an efficiency engineer, with an MS in management engineering. It made sense. Efficiency is an important goal in any complex system: "Education for Efficiency" actually used to be the MSU motto, though MSU abandoned it years ago as the institution grew and became more than an agricultural and engineering school. Fortunately, I remember thinking, by Board of Regents policy the Montana University System is committed to principles of shared governance: even when inefficient from one point of view (I could have been grading essays instead of attending a two-day Board of Regents meeting), our own BOR has long recognized that since universities aren't top-down hierarchical corporations, our educational institutions need cooperative participation from all stakeholders in order to optimize decision making—the very definition of shared governance. Optimization vs. efficiency. I needed to think more about that.

But what about this performance based funding idea? Might it play a useful role in our future funding processes? We were about to find out. The idea was soon championed by then-new regent Todd Buchanan, who in his early years on the Board created a process that came to be known, ambitiously, as "Reinventing the University": a Regents' Workgroup, created in 2009, to explore a number of ways to best reorganize our university system for the next century. Eventually, I was appointed, along with a couple of other faculty representatives, to one of this group's subcommittees, and after a few years' work, which included more meetings with NCHEMS consultant Dennis Jones, the result evolved into what the regents were by then calling their "Success Agenda," which was adopted and made a part of the BOR's strategic plan. The ten-point Success Agenda captured a number of the Regents' existing priorities, and number eight on the list was "Performance-Based Funding," itself composed of three elements: "Align targeted outcomes with institutional type through purposeful allocation of resources based on programming type. Associate achievement in key performance areas with aspects of funding (allocation model). Define, measure, and reward success by institution."

It was done. PBF was now part of the Montana University System strategic plan. But what exactly was it? Soon after the Success Agenda was adopted, the Montana State University Faculty Senate appointed a committee to ask just that. Composed of research faculty, the committee did what research faculty do and reviewed the published, peer-reviewed literature on performance based funding, eventually summarizing its findings at the Board of Regents meeting in Missoula, in November 2010, in a report included below. I participated on this MSU Faculty Senate committee and used the opportunity to read up on the subject. One of the difficulties we found was that much of what was available generally on the web (as opposed to the academic literature) was the product of promoters of the concept. What we were looking for was hard data on results and a hard-boiled assessment of the promises and problems. Fortunately, because of the longstanding, ongoing experiments with PBF in other states, useful data is emerging that should continue to help Montana shape its version.

Because the report captured many of the elements that continue to shape our discussion of PBF, I include it here.

To: Montana Board of Regents
Commissioner of Higher Education

From: MSU Bozeman Performance Based Funding Faculty Senate Task Force
RE: Development of PBF Proposals
Date: November 19, 2010

We hope this document will help both the Board of Regents and OCHE in their continued investigations into PBF models and in the development of a PBF plan best suited to the MUS. This is not meant as a comprehensive proposal or a polished "white paper" representing the views of the MSU Bozeman faculty. Rather, it is meant as a working document that tries to bring together current data on PBF and our thinking about what approach can best be implemented. It is offered in the spirit of constructive shared governance, attempting to combine peer reviewed academic analyses, national data, and best practices gleaned from the many experiments with performance based funding in other states and countries over the past ten years.

Recent literature on Performance Based Funding/Accountability

Performance based accountability grew at higher education institutions during the 1990s and into the 2000s. By 2003, forty-four states had adopted some form of performance reporting, budgeting, and/or funding. However, recent research has shown that these policy tools have not resulted in a corresponding increase in institutional performance. A survey conducted in 2003 found a decline in performance budgeting and performance funding in the US, but an increase in performance reporting (Burke & Minassians, 2003). Early recommendations for performance reporting, budgeting, and funding included allowing public colleges and universities to choose a few indicators that reflect their special goals or missions as well as having some state-wide common indicators to reflect shared goals (Burke & Minassians, 2002). Most states with performance based funding/budgeting link less than 6% of the budget to performance, and punitive measures are rare (Goldstein, 2005). Two research articles published in the past two years are especially illustrative of the outcomes of performance based funding/budgeting. A longitudinal study of data from 1997 to 2007 at 467 institutions found that states adopting performance-based accountability did not see an increase in institutional performance (Shin, 2010). Another longitudinal study, using data over 12 years, found that strong state control in areas such as performance based funding did not increase graduation rates (Volkwein & Tandberg, 2008). Shin notes that institutional characteristics explain performance and suggests that policy-makers work to change those factors that have a more significant impact on performance, such as tuition rates for incoming students, instructional expenditures per student, and student-faculty ratio.

Areas of stakeholder agreement

The following are areas where we believe there is substantial agreement among the various stakeholders in the PBF discussions. We offer them here to aid future discussions.

  1. Serving the students and families of Montana is a top priority.
  2. We understand the vital importance of making the most of limited state resources, and the MUS efficiency ratings already show that we are among the best in the nation at doing so.
  3. We also understand the importance of continuing to make the case to the public that we are indeed good stewards of state resources.
  4. It is important that any PBF proposal be sensitive to varying individual missions of the different units.
  5. Thus, it is critical that performance measures for MSU-Bozeman also include those that accurately reflect the value of the research and creative mission of the university. This mission includes the creation of knowledge as well as the creative engagement of our students in the research enterprise. Our faculty includes those who do pure and applied scientific research as well as those whose focus is artistic creativity. Our students are unique in having the opportunity to learn first-hand how knowledge and works of art are created. This is an important reason why many of our students come to this institution and it is the force that propels the best of them into creative careers of their own.
  6. We recognize the importance of unit-driven prioritization of programs. But prioritization must also take account of the unique missions of the various units. While we can't all do everything, nevertheless, the mission of the comprehensive universities is by definition comprehensive (we are not just large poly-technical institutes). This said, targeting some programs for critical investment and growth, while pruning or eliminating others in decline, is and should be an ongoing process.
  7. Substantive program review is the only rational way to determine what should grow and what should go. All units need to do a better job at this and demonstrate to stakeholders that it is being done well. Performing the reviews and making decisions based on them is the job of the academic units. Ensuring that it is done adequately is the job of the regents. This also points to the need for more discussion of the concept of duplication. What exactly does duplication mean? What is necessary duplication? What is unnecessary? What does duplication mean in the context of the online environment?
  8. Open discussion of the merits of PBF and any specific proposals needs to begin now and will need to continue into the indefinite future. Such discussion will require rich information flow as the implementation proposal develops, so that faculty and the units can adequately examine it. As the BOR's own Success Agenda requires, there must be time for adequate consultation with all stakeholders.
  9. Any PBF model, or attention to the allocation model in general, should focus on whole university budgets, and not just the academic side; in fact, protecting the academic core mission should be an absolute priority. Thus, administration and support expenditures must also be pruned where possible and efficiency encouraged at every level.
  10. In this national/international environment, we must continue to find ways to encourage, and not discourage, interdisciplinary interaction.
  11. In the development of any PBF proposal, we must try our best to anticipate negative consequences and to address unforeseen consequences once they become apparent.
  12. We must take advantage of existing studies of PBF, recognizing the successes and failures in the implementation of PBF elsewhere. Since other states have tried this for a number of years, we need to learn from their successes and their mistakes.
  13. It is incumbent upon us to try to follow best practices in PBF and its implementation, taking advantage of the advice and knowledge of various authoritative sources of information, including the Association of Governing Boards (AGB), the American Association of University Professors (AAUP), and the examination of university funding models by scholars in the field of higher education administration.
  14. As OCHE already seems to be doing through its participation in the national data standards group, any performance metrics used cannot be only the longstanding, convenient, blunt instruments often used in the past, but must adhere to the best national standards. In other words, decisions must be based on good data, not just easily accessible or "cheap" data. In addition, collection of data, including on various completion measures, must be done before any attempts are made to change completion rates. We need to have a baseline from which to measure our progress.
  15. The AGB has many best practice methods for taking better advantage of cost data (among other things). And best practices often emphasize the importance of putting resources into collecting this data and then making it available to the units, with mandates that it must be used in decision making. Best practices also warn boards against encroaching on administrative and institutional prerogatives in using these data.
  16. Any PBF budget allocation model must be able to take into account and incentivize quality as well as quantity.
  17. A full discussion of PBF should include a discussion of the vision we all have for the MUS in the near, middle, and long term.
  18. As part of this, the vision of the university that is embedded in the NCHEMS specific recommendations and their implications should be openly debated, since no budget proposals are completely neutral; all budget proposals are based on and ultimately impact vision.

Preliminary suggestions for PBF proposal

  1. For the sake of clarity, PBF proposals should be separated from "reallocation" proposals also under development and discussion.
  2. The MUS should follow the national best practices which strongly suggest using the version of Performance Based Funding known more specifically as "Performance Based Reporting."
    1. Sharing information among institutions on strategies to increase graduation rates should be encouraged.
    2. Making institutional graduation rates and other agreed upon metrics transparent and widely available to the citizens of Montana should become standard practice.
  3. As part of a Performance Based Reporting regime, units should set goals, and regents should then hold institutions accountable.
  4. Regents should also incentivize (i.e., mandate) that institutions improve and/or develop real, substantive, data driven program reviews. Regents might also identify other areas to incentivize units to improve, including better advising, better evaluation of teaching, etc.
  5. As an experiment, a 1% incentive slice (preferably consisting of new money) should be made available in the MUS budget. Most importantly, it should not be aimed at one single metric (e.g., retention rates), but should somehow reflect the unique missions of the units.
    1. Furthermore, the monies from this incentive slice should be distributed not at the unit level, but at the UM/MSU level, for presidents to subsequently allocate, based on internal measures of progress.
    2. Ideally, any performance slice would also be tied to accreditation incentives already built into the system, which are well institutionalized, and which themselves focus on outcomes based assessment.
    3. The MUS should explore the use of "performance based grants" as well, even down to the level of individual faculty. Too often, PBF incentive programs don't reach those whose behavior is directly responsible for performance improvements.
    4. The incentive slice should not be larger than 1% to avoid deleterious effects of "whipsawing" and budget instability among the MUS units that will impede effective strategic planning.
  6. In the longer term, the regents might explore incentivizing the use of benchmarking, perhaps down to the department level in the case of the research institutions. This version of PBF, in which departments identify departments at other institutions to compare themselves against, has been shown to hold great promise nationally. One advantage of this method is that it allows performance measures to be individualized. It also helps ensure that quality is still taken into account. Colorado State reported that its system has employed this method with great success.

Miscellaneous other observations

  1. Six-year graduation rates, even four-year graduation rates, are problematic metrics for a number of reasons. First, by definition, they are a lagging indicator. Furthermore, if graduation rates are used, they must be assessed outside of individual institutions, since many students complete college but not necessarily at a single institution.
    1. Graduation or retention rates alone cannot capture mission differences, such as selectivity, special programs, student learning outcomes, student intentions in the first place, or differences in access, including financial factors, all of which are important determinants of graduation and retention rates.
  2. Although well-meaning, some attempts at implementing PBF nationwide have led to an encroachment by boards into the academic purviews (AAUP "areas of primary responsibility"; AGB best practices) long accepted to be the job of administration and faculty.
  3. There are already significant performance incentives in place throughout the university system. These should be studied, well understood, well articulated, and possibly enhanced before attempting additional reforms.
    1. Large institutional incentives already exist for improving retention. Adding additional small institutional incentives to existing ones is more symbolic than practical.

Citations

The citations of these articles and brief summaries of the most recent research studies follow.

Burke, J., & Minassians, H. (2002). Reporting indicators: What do they indicate? In J. Burke & H. Minassians (Eds.), Reporting higher education results: Missing links in the performance chain. New Directions for Institutional Research, 116, 33-58. The authors wrote about indicators being used by institutions for performance accountability. They found increased emphasis on total enrollments, student diversity, tuition and fees, financial aid, and access. Their recommendations included allowing public colleges and universities to choose a few indicators that reflect their special goals or missions as well as having some state-wide common indicators to reflect shared goals.

Burke, J., & Minassians, H. (2003) Performance Reporting: "Real" Accountability or Accountability "Lite." Seventh Annual Survey 2003. The Nelson A. Rockefeller Institute of Government. Conducted a survey in 2003 and found a decline in performance budgeting and performance funding in states, but an increase in performance reporting.

Curtis, John, Director of Research and Public Policy, American Association of University Professors, Washington, DC. (2007). "A Faculty Perspective on Accountability." Presentation to the Pennsylvania State Conference of the American Association of University Professors, College Misericordia, May 5 [PowerPoint].

General Accounting Office report (GAO-03-568). (2003). College Completion—Additional Efforts Could Help Education With Its Completion Goals. United States General Accounting Office Report to Congressional Requesters, May 2003. GAO-03-568, a report to the Ranking Minority Members, Committee on Health, Education, Labor, and Pensions, United States Senate, and Committee on Education and the Workforce, House of Representatives. More than half of all students who enrolled in a 4-year college completed a bachelor's degree within 6 years. Students were less likely to complete if neither parent had completed a degree, they were black, they worked 20 or more hours per week, or they transferred to another college. Students had a greater likelihood of completing if they were continuously enrolled, attended full-time, or had a more rigorous high school curriculum.... States are beginning to hold colleges accountable for retaining and graduating their students, and [the US Department of] Education has been discussing this with the higher education community. Many states are publishing retention and graduation rates for their colleges, and some have tied performance in these areas to funding. According to [the US Department of] Education, providing information on colleges' retention and graduation performance can help prospective students make informed decisions. However, the measure used by [the US Department of] Education may not fully reflect an institution's performance because institutional goals and missions are not captured in the measure.

Goldstein, L. (2005). College and University Budgeting: An introduction for faculty and academic administrators. Washington, DC: National Association of College and University Business Officers. This book was written for faculty and academic administrators and provides thorough coverage of college and university budgeting and resource allocation.

Hauptman, Arthur. (2005). "Performance-Based Funding in Higher Education." Financing Reforms for Tertiary Education in the Knowledge Economy, Seoul, Korea [PowerPoint].

Shin, J. (2010). Impacts of performance-based accountability on institutional performance in the U.S. Higher Education, 60, 47-68. Shin analyzed Integrated Postsecondary Education Data Systems (IPEDS) data from 467 higher education research universities, masters institutions, and liberal arts colleges across the U.S. He examined variable outcomes on teaching (graduation rates) and research (external research funding) at these institutions. The main finding of this study was that states which had adopted performance-based accountability did not see a noticeable increase in performance. Variables that did impact graduation rates included faculty-student ratio, instructional expenditures per student, incoming student achievement, in-state tuition rates, and dorm facilities. Variables impacting research included institutional mission, size of faculty, graduate programs, and staff devoted to research.

Volkwein, J.F., & Tandberg, D.A. (2008). Measuring up: Examining the connections among state structural characteristics, regulatory practices, and performance. Research in Higher Education, 49, 180–197. Volkwein and Tandberg used data from the state report cards on higher education outcomes (Measuring Up) from the years 2000, 2002, 2004, and 2006 to examine the improvement in state-level performance indicators over time. The study looked at the outcomes of 1) affordability; 2) benefits; 3) completion; 4) participation; and 5) preparation. Using advanced statistical techniques, the authors found that state characteristics (such as population, resident income levels, levels of wealth and poverty, and population growth) were more likely to explain changes in the outcomes than were variables associated with state governance of higher education (such as centralized control over higher education institutions, performance based budgeting, and level of institutional autonomy). In examining the outcome of completions (graduation rates), the only variables that predicted a positive change in graduation rate over time were those that increased accessibility of higher education, such as increased numbers of high school graduates, lowered tuition rates, decreased numbers of high school students leaving the state, and the proportion of enrollments in private higher education. Rising tuition costs were associated with lowered graduation rates across the 12-year period, but state control of higher education (such as performance based funding) did not improve completion rates across states.

This memo was shared and discussed with the Board of Regents at their next meeting, in Missoula. One outcome was general agreement on the principle that movement toward any specific performance based funding plan should take into account peer-reviewed assessments such as the one we had presented—and not simply rely on the boosterism of individual constituencies, some of whom were paid to advocate for it. Since our discussion of this memo, additional research and experiments in other states have continued, and other researchers have continued to examine PBF trends and results. Michael McLendon and James Hearn do just that in a recent Academe article, "The Resurgent Interest in Performance-Based Funding for Higher Education," also helping to clarify some of the terms employed. They distinguish three flavors of performance schemes that have evolved over fifty years—and that also informed our Faculty Senate memo included above: 1) Performance Based Reporting, the least directive of the mechanisms, where institutions simply report campus performance on key indicators to the Board and the public, with the view that good data and transparency will drive good decision making. 2) Next in line is Performance Based Budgeting, where Boards consider campus performance on a set of indicators in making budget allocations, but retain flexibility in taking into account all available information. 3) Most directive is Performance Based Funding, where state funding is linked directly, according to specific formulas, to campus performance on selected indicators. Matching the results of our own literature review, McLendon and Hearn report that experiments with these approaches increased dramatically in the 1970s, then fell away at the turn of the millennium after initial results, especially with PBF versions, were mixed. But various factors in recent years, including a reorganized lobbying campaign from political interests, have helped resuscitate PBF from "the near dead," to where it is now resurgent again in most states.

As is often pointed out, Tennessee was the first state to formalize a PBF mechanism, in 1979-80. Connecticut was next, in 1985, with Missouri following in 1991 and Kentucky in 1992, with 21 states employing some version by 2001. Since 2001 it has remained a dynamic playing field, with various comings and goings, and public political fights, so that as of February 2013, as McLendon and Hearn point out, "the National Conference of State Legislatures counted twelve states with active systems, four in the process of implementing new systems, and nineteen discussing implementation of a new system." South Carolina's approach is often cited as an example of failure, initially attempting to base 100% of appropriations on PBF formulas, which turned out to be more complicated and costly to implement than planners first anticipated, so they dismantled the scheme. Ohio, on the other hand, is currently in the midst of an experiment that is scheduled over time to lead to 100% of appropriations being based chiefly on "course and degree completions."

It may be useful to remember how funding mechanisms for university systems in general have worked in the past. The long-standing system for public universities in most states has funded the public component of higher education (the tax dollar portion, as opposed to tuition) by counting students in the system (ubiquitously and somewhat derogatorily referred to by PBF promoters as the "butts in seats" method). Thus, on a designated calendar day (the 15th day in the MUS), our institutions would report the total number of students at the institution, using a mathematical Full Time Equivalent (FTE), and be reimbursed accordingly (an amount which, in the case of MSU, for example, now accounts for 30% of the cost). The value of this longstanding method lay in its simplicity: when student populations were stable and state funding was stable, university budgets were predictable and planning became easier; moreover, institutions became adept at tracking and predicting enrollments, contributing to financial and program planning. Furthermore, the method incentivized growth (educating more students) and competition for students (itself an incentive for quality), as long as the combination of state funding and tuition together exceeded the cost of educating the student; and the competition for students also incentivized continuing to improve infrastructure, including student amenities (the so-called "climbing wall phenomenon"—Auburn University just spent $67 million on its new student exercise facility) as well as the quality and variety of programs. Institutions then created budgets through shared governance planning processes, with input from various levels (administration and budget committees), to best serve the various, complex goals of the institution. The innovation of PBF was to single out specific aspects of institutions' missions and tie some percentage of funding directly to moving those needles in the right direction. Innovation is perhaps too strong a word: whether in the business world or even the public sector, creating systems of rewards for meeting institutional goals is as old as capitalism itself; think bonuses for meeting sales goals. What was new for higher education was the trend from Performance Based Reporting or Budgeting to Performance Based Funding, that is, the tying of institutional funding from the top to specific targets.
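To make the contrast concrete, here is a minimal arithmetic sketch (in Python) of the two mechanisms: a pure enrollment-driven FTE allocation, and the same allocation after a 1% incentive slice of the kind suggested in the 2010 memo above. Every number in it (the per-FTE rate, the campus figures, the performance scores) is hypothetical, invented purely for illustration; none are actual MUS allocation parameters.

    # A minimal illustrative sketch, not the MUS formula: all figures below
    # are hypothetical. It contrasts a pure enrollment-driven (FTE) allocation
    # with the same allocation after a 1% performance-based incentive slice.

    PER_FTE_RATE = 5_000   # hypothetical state dollars per FTE student
    PBF_SLICE = 0.01       # the 1% incentive slice suggested in the memo

    campuses = {
        # name: (FTE enrollment, share of performance targets met)
        "Campus A": (12_000, 0.90),
        "Campus B": (4_000, 0.75),
    }

    # Traditional model: the appropriation follows enrollment alone.
    base = {name: fte * PER_FTE_RATE for name, (fte, _) in campuses.items()}

    # PBF variant: hold back a small slice of each base appropriation, pool it,
    # and redistribute the pool in proportion to enrollment-weighted performance.
    held = {name: amount * PBF_SLICE for name, amount in base.items()}
    pool = sum(held.values())
    weights = {name: fte * score for name, (fte, score) in campuses.items()}

    for name in campuses:
        award = pool * weights[name] / sum(weights.values())
        final = base[name] - held[name] + award
        print(f"{name}: base ${base[name]:,.0f} -> with PBF slice ${final:,.0f}")

Running the sketch shows why the memo characterizes small incentive slices as more symbolic than practical: on a hypothetical $60 million base appropriation, the redistributed pool is well under $1 million, and the net dollars that actually change campuses are smaller still, small enough to avoid the budget "whipsawing" the memo warns against, but also small relative to the enrollment incentives already built into FTE funding.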

With this history, national and local, in mind, and whether one supports PBF or not, a few thoughts on the terminology used in this debate might be in order. It should be recognized at the outset that the name Performance Based Funding is itself not neutral: it has been strategically chosen by promoters. A more neutral, and accurate, name for most states' experiments, at least those shy of 100%, would be Target Based Partial Funding Incentives (TBPFI—not exactly a catchy acronym, though).

Early in the rise of PBF experiments nationally, the targets focused on were retention rates and graduation rates. The idea was to improve those by increasing specific incentives for doing so. This is where the strategic naming of PBF comes into play. It wasn't as if there weren't already direct, powerful incentives for increasing retention and enrollment. There were—in tuition and state FTE funding. Neither was it that there weren't incentives, albeit somewhat less direct ones, for graduating students (via the competition for an institutional reputation for quality results). Finally, it wasn't as if there weren't already performance evaluations and rewards throughout the system: faculty throughout the MUS are reviewed annually, with merit pay tied to performance in all aspects of their job: teaching, research, and service. But what PBF dictated was a more interventionist focus (sometimes also called "accountability efforts"—a name also not neutral), from the top, on specific, identified targets, hoping to tip the scale of individual campuses' planning processes in the desired directions through funding mechanisms instead of the traditional policy directives from Boards, as had been the case in the past. And it allowed for good public relations: communicating to the public that the governing boards were doing something. Thus, PBF schemes were often marketed with accompanying rhetoric suggesting that performance measures and allocations of any type had been missing in the past, and that these new schemes would finally bring market reward forces to a university landscape where they had heretofore been missing completely—which simply wasn't true.

Additionally, it is useful to note that the rhetoric employed by proponents of PBF, especially in its resurgence since 2000, borrows heavily from the business and corporate worlds: what we should be paying for, it is argued, are outputs, not inputs. We should be funding success, not attempts (meaning numbers of degrees, not butts in seats). Promoters borrow the language of the commercial world that produces products (degree production!), as opposed to one that offers services (education, basic research, agricultural extension). Deploying free-market-flavored economic arguments, proponents argue (and I heard one consultant actually say this to the Board of Regents at a meeting) that traditional FTE funding models only incentivize the continual recruitment of students and the endless growing of student populations, never actually graduating them—that based on current incentives, it was in universities' interests never to let any of them out of the place! (The irony of this argument is that while graduation rates are a real concern, public universities have always sustained respectable graduation rates, though with room for improvement; it's actually the new market-driven for-profit university sector that has been recruiting, receiving funding, and not graduating students, punctuated by the recent spectacular bankruptcy of Corinthian Colleges.)

PBF schemes and the rhetoric that promotes them focus on a few aspects of higher education performance, and are thus, by design, reductive. Those focusing on graduation rates alone tend to reduce the complex social function of universities as multimodal entities (to use the familiar three-legged-stool terminology of the mission of most of our universities) responsible for teaching, research, and outreach to just one: teaching. And they tend to reduce teaching to degree production, even though, as Derek Bok has explained in his recent book (Higher Education in America, reviewed by The Montana Professor in its Spring 2014 issue), the great success of the American university system over its entire history has been its tripartite mission: 1) equipping students for careers by providing skills and training; 2) preparing students to become enlightened citizens of our democracy; and 3) preparing students to live full, satisfying lives capable of reflection and self-knowledge. Bok warns that government officials, policymakers, and reformers do a disservice when they speak only of the first—as they too often do—and only in the context of increasing our global competitiveness. Part of what has made the American system the envy of the world is embedded in its multiple goals and its determination to make progress on all three goals available to as wide a swath of the population as has ever been attempted. Our own regents, though they obviously all know better, sometimes speak about our university system and all its entities (from our two-year colleges to our research universities) as if its sole purpose were job training and workforce development.

The rhetoric used by supporters of PBF also tends to obscure the fact that institutions aren't fully in control of student success. The various factors in student success and graduation rates have been well studied and documented, including adequate academic preparation in the first place and, of course, individual hard work. Another chief factor affecting success is access to financial aid (many states have their own state financial aid funding working in coordination with federal monies; Montana has very little state support for scholarships). PBF mechanisms have no impact on these extra-institutional factors. There are factors that institutions can control and are trying to control—such as improving advising, improving tracking and intervention systems, improving teaching methods, and tutoring. And then there is the gorilla in the room that everyone interested in the future of higher education should care about, and which faculty, at the front lines of the delivery of that education, care about deeply: the issue of quality. It is difficult to get around the fact that PBF regimens that focus on degree production necessarily focus on quantity, not quality. But as we all know, both "q's" matter: in fact, keeping both high should be the highest goal of all stakeholders. And it is interesting to remember that an earlier set of similar experiments in the MUS with performance measures did include the "q": in the 1990s, the experiment was known as PQ&O, or Productivity, Quality, and Outcomes.

Given all this, it is interesting to ask about Montana's place in the national PBF debate: why now? Where do we fit in? McLendon and Hearn, in an empirical study, identify some interesting patterns across states where PBF programs have been resurgent—patterns it would behoove Montana to attend to. First, they find that the states with the most stable PBF systems are those with the highest involvement of state higher education officials, as opposed to the voices of "legislators, governors, businesspeople, and community leaders." That is, PBF designs are working best where experts in the field are working out the details, rather than politicians running for office. Insulating higher education somewhat from the vicissitudes of two-year election cycles is one reason many states have governing boards in the first place. Second, as should be no surprise, McLendon and Hearn find that partisan politics is a driver of states toward PBF. Third, they find that states with more powerful or centralized Boards have been "less likely to adopt performance-funding policies." I invite readers to decide for themselves just where Montana fits into these trends.

So, where are we now? Since 2010, our MUS has been progressing along the path of developing PBF plans. As the BOR webpage states, "The Montana University System is engaged in the process of exploring and developing performance funding models to be included as an additional component in the allocation methodology for distributing state appropriations to the MUS campuses. The process has been split into two phases, 1) a short-term pilot phase directed at the allocation of funds specifically for FY 2015, and 2) a second phase aimed at developing a performance funding model to be used on a longer-term basis. This second phase will occur during the 2013-14 academic year, whereby the MUS will engage faculty and staff throughout the system in an effort to fully develop a performance funding model. A Performance Funding Steering Committee has been appointed to provide oversight and direction of this process." (More details can be found on the BOR webpage: http://mus.edu/CCM/performancefunding/PerformanceFundingSteeringCommittee.asp.)

Those processes are still underway. Faculty are participating in various ways: Montana State University, for example, held charrettes last year to get input on details. Phase two was scheduled for a BOR vote at the May 2014 meeting, and further votes will follow. The details of these plans matter a great deal. Partly because of the trends already identified, the plans extend far beyond the parameters suggested in our 2010 Faculty Senate memo above. Whether these plans turn out to be beneficial to the university system to which many of us have devoted a great part of our life's work remains to be seen. It is to be hoped that the details will be rigorously debated and closely examined, and that the plans and decisions will be data driven. Our students should expect nothing less.


Work Cited

McLendon, Michael K., and James C. Hearn. "The Resurgent Interest in Performance-Based Funding for Higher Education." Academe November/December 2013. Web. 15 August 2014.
