One can find many interesting lessons for business and innovation in case studies from ongoing experiments in public education. For example, the Summer 2010 edition of American Educator illustrates a lesson we teach in Conquering Innovation Fatigue: metrics to drive performance can have unintended consequences that may hurt rather than help. Indeed, unintended consequences are a major theme of the book, as we consider the problems arising from metrics, corporate and government policies, innovation initiatives, laws, taxation policies, and other factors, all of which can contribute to what we call innovation fatigue.
On the danger of improper metrics in education, Linda Perlstein’s article, “Unintended Consequences: High Stakes Can Result in Low Standards,” examines a highly celebrated school in Annapolis, Maryland, that received media attention and praise for seemingly miraculous success. The new principal arrived in 2000 to find Tyler Heights Elementary School in a dismal state, with only 17% of its students earning satisfactory scores on the state test. She redirected the school’s efforts to address this problem, and eventually her laser-focused approach paid off: 90% of third-graders performed well on the Maryland State Assessment, up from only 35% two years before. Several newspapers recognized the amazing turnaround, and people at the school celebrated the success. But was it real success?
To achieve good performance on the Maryland State Assessment, instruction was largely focused on how to do well on the test. Students learned how to write BCRs (“Brief Constructed Responses”) to expected questions about poems and plays, and practiced writing these short answers for many hours, without actually studying poems or plays. “What gets tested is what gets taught,” the principal told the teachers, even if that meant leaving behind the material the state standards said should be taught. Bins of equipment for studying science sat largely unused. Perlstein describes the result:
Tyler Heights’ third-graders got only the most cursory introduction to economics and Native Americans, and much of the curriculum was skipped altogether. The students were geographically ignorant. . . . The third-graders had heard Africa mentioned a lot but were not sure if it was a city, country, or state. (They never suggested “continent.”) At the end of the year, the children in Johnson’s class were asked to name all the states they could. Cyrus knew the most: three. He couldn’t name any countries, though, and when asked about cities, he thrust his finger in the air triumphantly. “Howard County!”
The state standards required a broad curriculum, but the metric for assessing it was a single test, and all the incentives pointed toward helping students pass that test. In spite of the praise for the miracle at Tyler Heights, had the children really been helped?
The Campbell Effect
The problem of unintended consequences from metrics such as tests is hardly unique to Tyler Heights. Daniel Koretz, also writing in the same issue of American Educator (see page 3 of the PDF file on unintended consequences), explains that in education and other fields, score inflation is a common and well-known but widely overlooked problem. In the social sciences, the phenomenon behind score inflation is described by Campbell’s Law. Though now widely applied to education, it was developed with business in mind: Donald Campbell, a prominent social scientist, examined the effect of corporate incentives on employee performance. His research led to this general formulation: “The more any quantitative social indicator is used for social decision making, the more subject it will be to corruption pressures and the more apt it will be to distort and corrupt the social processes it is intended to monitor.” (Donald T. Campbell, “Assessing the Impact of Planned Social Change,” in Social Research and Public Policies: The Dartmouth/OECD Conference, ed. Gene M. Lyons, Hanover, NH: Public Affairs Center, Dartmouth College, 1975, p. 35. See also “Can New York Clean Up the Testing Mess?” by Sol Stern.)
Campbell’s Law is at work when schools game tests to raise scores at the expense of education. It is at work when cardiologists decline to operate on patients who might need surgery rather than risk hurting their published mortality statistics (Koretz cites a 2005 New York Times story reporting the shocking results of a survey of cardiologists). And it is at work when a company tries to boost innovation with metrics or incentives that invite game playing while leaving the real problems of culture, systems, and vision unaddressed.
In our experience, metrics and incentives can play a valuable role in driving innovation, but only when the corporation has a culture that genuinely encourages innovation, a shared vision of innovation and success, and sound systems in place to advance it. Without those, you can not only waste a lot of resources trying to drive innovation with metrics and incentives; you can actually make a weak culture pathological, even lethal, exacerbating fatigue factors like the Not Invented Here syndrome, theft of credit for innovation, and breaking the will to share. Adding incentives linked to metrics without the right culture and systems is like throwing raw meat into a school of sharks or piranhas: you will generate a lot of activity, a lot of exciting thrashing and splashing, but in the end there will just be a lot of blood in the water and fewer thinkers and producers in your school.
As always, innovation success requires that you carefully monitor for harmful unintended consequences from the policies, programs, and incentives you have in place. Innovation metrics, incentives of all kinds, employee performance evaluation systems, and other tools tied to metrics can backfire. Unless you are tuned to the voice of the innovator and understand the impact of unintended consequences, you can end up like the company we discuss in Chapter 8 of our book, which felt like a rock star of innovation while it was actually squelching it. Don’t let the unintended consequences of well-intended policies and metrics crush your innovation success.
Let Innovationedge Strengthen Your Approach to Innovation
With our experience at Innovationedge, we are prepared to evaluate your culture and innovation-related systems to help you strengthen your innovation capabilities and create greater ROI. Not happy with the innovation performance you’ve seen? Not sure you are measuring it correctly? Worried about the unintended consequences your incentives might have? Give us a call and let us help you diagnose your current state and provide a roadmap for future innovation success.