Cutting through the noise on overfishing, 2005
In 2005, I produced an Overfishing Scorecard that distilled official US government data into a simple measure of success for fishery management. The scorecard relied on official determinations of status to produce a single summary number between 0 and 1. Scores could be aggregated into % success scores, and regions could be ranked by % success.
The Overfishing Scorecard was a smashing success and one of my most popular publications. Fishery managers engaged in productive discussions with me about how to improve their scores, including a 30-minute call from Chris Oliver, now head of NOAA Fisheries and in 2005 the Executive Director of the North Pacific Fishery Management Council. Within a few days, over 500 articles had been published about the scorecard, and many reporters called asking for additional comments on how their regions were doing.
Previously, the only measure of success was a report required by Congress that ran to 20 pages, with 11 tables containing 250 rows of data listing the status of 688 fish stocks. Beyond these data tables, another 3 pages were needed to list the 43 changes in status from 2003 to 2004. Imagine having to wade through those heaps of information to try to figure out how well management was working. The result was that the information didn't matter; the format was too much of a barrier.
Why was the Overfishing Scorecard successful? I think because it conveyed the big picture of fisheries management success. In contrast, the official report was correct in detail but wrong in substance. The official report was required by Congress, but it wasn't doing its real job: communication. Staff at NOAA Fisheries weren't actually eager to release the "kick me" report, as they came to call it, because of the criticism they received when they disclosed problems.
After struggling to make sense of the report for several years, I resolved to produce something more effective, using official government data and legislated criteria for success. The analysis was almost trivial. To make evaluations independent of scale, I relied on a dimensionless metric between 0 (status of fish bad) and 1 (status of fish meets goals). Some of my colleagues were worried that the goals should be more stringent, or that details about a single fish stock needed to be mentioned in any summary.
But there was a big benefit in using the simple one-number metric: it allowed scores to be aggregated. The Scorecard published a single % success summary score for each region and ranked the regions. This summary did something the official 20-page government report failed to do: it made the results easily accessible to readers. Where is management succeeding and where is it failing? With the Scorecard, anyone could find out.
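To make the arithmetic concrete, here is a minimal sketch of how per-stock scores aggregate into regional % success and a ranking. The region names, stock scores, and binary scoring rule below are illustrative assumptions, not the scorecard's actual data; the real scorecard drew each stock's score from official government status determinations.

```python
# Minimal sketch of the Scorecard aggregation (illustrative data only).
# Assumed scoring rule: 1 if a stock's status meets the legislated goals,
# 0 if it does not; the real scores came from official status determinations.

stock_scores = {
    "Region A": [1, 1, 1, 0, 1],
    "Region B": [0, 0, 1, 0, 1],
    "Region C": [1, 0, 1, 1, 0],
}

# Percent success per region is simply the mean stock score times 100.
percent_success = {
    region: 100 * sum(scores) / len(scores)
    for region, scores in stock_scores.items()
}

# Rank regions from most to least successful.
ranking = sorted(percent_success.items(), key=lambda kv: kv[1], reverse=True)
for rank, (region, pct) in enumerate(ranking, start=1):
    print(f"{rank}. {region}: {pct:.0f}% success")
```

Because every stock is mapped onto the same dimensionless 0-to-1 scale, scores for stocks of very different sizes can be averaged directly, which is what makes the single regional summary possible.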
After the Overfishing Scorecard was released, staff at NOAA Fisheries asked me to attend a meeting at agency headquarters near Washington, DC. After being grilled about the scoring system, I learned that the agency had a similar product that had never been released because management was worried about how it would be received. They wondered whether it had been leaked to me and whether I had used their work. Since I could explain the origin, and our products were a bit different, they came around to believing the work was my own. Eventually, the agency released its own product, which is still published on its website.