I had no idea what to do with my life when I graduated from college so many years ago. After some good and not so good jobs, I decided to become a college professor. I considered and quickly rejected high school teaching. I knew that high school teaching could be an incredible and wonderful experience. It was the teachers at one excellent public high school in Brooklyn that kept me from being one more Latino dropout. As good as high school had been for me, though, I also found it lacking. I did not know then that it was the desire to do research that drew me to college teaching.
Unfortunately, the role of research in college is increasingly under attack today. College administrators, even in predominantly “teaching” institutions, demand that faculty produce more research and publications. But they have also contaminated the environment for research. Research is particularly threatened by, among other things, the hot, new national obsession called “outcomes assessment” or OA.
College administrators and outsiders dismiss faculty resistance to OA. We appear to administrators as simply uncooperative and selfish. In reality, we have a real and justified anxiety about OA. Most of us believe that outcomes assessment will undermine if not destroy what Harvard President Drew Gilpin Faust once called the transformative, yet “creative and unruly” process that is university learning and teaching.
If outcomes assessment meant only that we discuss, decide, and implement a more explicit plan of what we hope to accomplish in our courses and with our curriculum, there would be few complaints. Unfortunately, outcomes assessment imposes a very large cost. That cost includes the following:
- A blind and unfounded faith in “evidence-based” analysis
- A distortion of the nature of learning
- The loss of valuable research and teaching prep time
- A refusal to acknowledge the failure of OA in other countries
Ambiguity of Outcomes
Most people assume that empirical data is always more useful than intuitive knowledge. It’s not, especially when we do not have “good” data going in, “good” data coming out, and unambiguous results. The reality is that we have none of those things with OA. There are no measures of learning or outcomes that can avoid messy interpretation.
One does not have to be a post-modernist to accept the deeply relativist nature of all social science. Peter Winch raised these unresolved questions about the inherently interpretative nature of social research as far back as 1958. Data or statistical findings, Winch argued, “are not the ultimate court of appeal for the validity of sociological interpretations.” OA would have us believe otherwise.
Even economics, supposedly the most “scientific” of the social sciences, fails to unambiguously predict economic reality. Economists could not predict the Great Recession of 2008. As Nobel Prize winner Paul Krugman explained, “the economics profession went astray because economists, as a group, mistook beauty, clad in impressive-looking mathematics, for truth.”
Management studies have found that employee performance reviews, which are closer in form to learning assessment, are infused with subjective bias and ambiguity. Thus, UCLA Management Professor Samuel A. Culbert argues that these ubiquitous workplace attempts to evaluate employees are actually “subjective evaluations that measure how ‘comfortable’ a boss is with an employee, not how much an employee contributes to overall results.” Management evaluation research, then, cannot overcome subjective bias despite having easily measured goals like productivity and profit.
Obviously, social research is not pointless. It generates some truths. Those truths require, however, the recognition that data has many different forms. The key to squeezing what is usually a temporary small sliver of truth out of our peculiar professional obsession is to rely on peer interpretation and debate. Measurement and data by itself offer no panacea.
Some supporters of OA, to their credit, have a sense of this. Thus, they seek to create a “culture of assessment” wherein academic departments and divisions would devote considerable effort and time to continuous discussion and debate over their findings, working to improve curriculum and produce better learning. The payoff from such debate will not be worth the effort, however. The problem goes beyond the ambiguity of results. The biggest problem is that OA purposely examines only a small portion of the learning environment. The best illustration is the sports world.
Athletic competition produces clear outcomes, and most of these outcomes depend on the learning of student athletes. Or do they? Is it fair to assume that a team or athlete failed to learn if they do not perform well in competition? Maybe. Has the coach failed them? Perhaps. In many cases, it may also be true that the coach has done an excellent job and employed productive, well-conceived methods and techniques to train the athletes.
Putting aside the intrinsic differences between athletes, the reality is that athletes and teams can fail for many other, hard to pin down reasons for which the coach is not directly responsible. They can fail because they did not give their best effort, because they were distracted, because they were too preoccupied with personal problems, because they got anxious and “choked,” etc.
Football coach Bill Parcells understood that coaches are often unfairly blamed for failure. He always asserted that since coaches are asked “to cook the dinner at least they should let you shop for the groceries,” meaning coaches should choose whom they coach. Any fair assessment of coaching and learning must recognize the roles played by students and by external factors. Astute observation and intuition are the usual methods. Anything more sophisticated is not worth the effort. What happens on the playing field or in the classroom is just too complicated and subject to multiple influences.
Distortion of Learning
Learning is not just about what a student got about the course material but how the material got them. The best outcome is when an instructor sparks students to pursue knowledge themselves, exposes students to a world they did not imagine existed, helps them to use their eyes in different ways, exposes the deep ambiguity of that world, and yet sets them on a quest to change the world. Doubt, in this sense, is as important an outcome as knowing.
If all that we aimed for were teaching what we already know, we would do our students and ourselves a grave disservice. The wise, Socrates once said, are those who know how little they know. Wisdom often means asking the right questions rather than thinking you have the “right” answers. But you can’t assess questions. OA threatens, then, to marginalize our primary responsibility as teachers – to invite and guide students into the unknown.
It’s difficult to know when we ever reach those goals, no matter what new assessment tool is dreamed up to measure them. At best, the something extra that we seek to impart in our teaching involves our ability to get students to join us on a journey into the unknown. That journey is really about research.
Students embrace and take a journey into the unknown at different stages of their college careers. It’s hard to predict when students begin to “own” their thinking, when they awaken to the idea that they can generate rather than simply replicate existing knowledge. But we do know that this does not happen without some contribution from faculty committed to lighting that fire, faculty who are well positioned, because of their own research quests, to provide it.
The Threat to Research and Learning
Something has to give when heavy demands are placed on faculty to develop an “outcomes assessment culture.” That something is the time to do research. It has already happened to pre-college teachers. They complain that the No Child Left Behind law’s heavy emphasis on test scores has created a situation in which the extra time necessary to prepare students for tests reduces the time available to teach. One high school teacher stated, “I have so many state standards I have to teach concept-wise, it takes time away from what I find most valuable, which is to have them inquire about the world.”
OA at the college level has not yet imposed test scores as a measure of student learning. But it is already clear that the focus on “learning outcomes” and the time necessary to fulfill demands for measurements, rubrics, reports, loop backs, and curricular adjustments will consume more and more faculty time. Studies show that over 70 percent of college faculty already work more than 40 hours a week, with a large portion working more than 70 hours. OA will consume more of the little time the faculty has to conduct research.
Failure of Outcomes Assessment Practice
In places as far afield as South Africa and Australia, OA, or Outcomes Based Education (OBE), has proven to be a controversial if not disastrous educational reform. OA proponents here, however, have learned nothing from this dreadful foreign experience.
New Zealand dropped its OBE approach to education in 2007 after ten years of implementation. OBE in New Zealand had promised but not delivered “a brave new world.” Papua New Guinea also experimented with OBE, primarily because of Australian influence. Resistance soon emerged there too, primarily from educators who recognized very early that OA was a “dismal strategy.”
Australia tried OBE for ten years at the pre-college level. Western Australia, in particular, encountered tremendous resistance from teachers, students, and parents, leading to an educational “meltdown.” Teachers, especially, found themselves “drowning under a deluge of convoluted documentation.” Another commentator noted that, for Australian teachers, “OBE suffers from assessment overload.” As a result, the implementation of OBE actually “divided the educational community and destabilized education in Western Australia for well over a decade.”
OA also has several historical antecedents. These efforts also failed. The OA focus on efficiency and rational outcomes can be traced to the discredited time-and-motion studies of Frederick Taylor. Beginning in 1913, Taylor sought to eliminate wasted motion in the production process in hopes of increasing profits and, presumably, wages. Taylorism did not just resemble OA; the method was directly applied to education in an attempt to mechanize and routinize teaching. It was abandoned after almost two decades of wasted effort and resources.
In the 1960s, behavioral methods were applied to education in an attempt to establish definitive behavioral objectives in the classroom. Commentators pointed out that teachers and schools attempting to comply with the hundreds of behavioral objectives in the classroom found themselves “bogged down with such a load.” Behavioral methods proved tremendously impractical, wasteful, and obtuse.
Why are college administrators so interested in force-feeding the faculty a radical new method that has not succeeded and that has actually proven disastrous elsewhere? Is it that college administrators cannot resist the corporatization of the university, driven perhaps by business minded trustees? Outcomes assessment may be, in that sense, just a modern manifestation of Taylorism, an attempt to micro-manage the faculty on what is being increasingly defined as an educational assembly line.
But even if administrators have a simple well-intentioned interest in reform, their investment in OA demonstrates a vast misunderstanding of the mission and core values of the university. Whether they are pushed by misinformed outsiders and trustees or are self-motivated, college administrators have not defended what they were hired to protect, manage, and expand.
The university is an ivory tower, an odd, yet enormously fertile place. Though the university may appear aloof, it is not just a smug, self-indulgent place. It is productive and creative. The open and pure pursuit of knowledge in the university has led, according to most experts, to the generation of “more world-changing ideas than the competitive sphere of the marketplace.” Progress and development originates in the open and horizontal structures found in universities rather than in businesses. But OA threatens to tear apart this non-commoditized source of economic, cultural, and social innovation.
The university’s open and unstructured culture is a virtue rather than a shortcoming. It should be enhanced rather than overturned. Harvard President and historian Drew Gilpin Faust understands this. She described the university as a place where the “search for meaning is a never-ending quest that is always interpreting, always interrupting and redefining the status quo, always looking, never content with what is found.”
The university’s core is unstructured, unsettled, dedicated to an open-ended quest for knowledge and meaning, enlivened by doubt as much as by fact, and committed to teaching others to take similar uncharted journeys into the unknown. It does not fit OA’s measurements, rubrics, and standards. Nor does OA fit the university. Taken to its logical conclusion, OA will erode knowledge and learning by shrinking the faculty’s ability to do research. What we need is a culture of learning, better yet, a culture of inquiry. But inquiry, lying at the core of the university mission, is, ironically and tragically, exactly what does not fit into assessment rubrics.