
Understanding the Value of Business Information


Abstract

Many businesses seem to be of two minds when it comes to understanding the value of their information. Firms say that it is their most valuable asset, yet they seem unwilling to invest in information security technologies to protect it. A closer look at a few different ways to understand the value of information suggests how to resolve this apparent paradox.


Notes

  1. A great book was made into an inscrutable movie about this very sort of insight. The text of 2001: A Space Odyssey makes clear that while “Thus Spake Zarathustra” was blaring through the speakers, the alien monolith was implanting in the prehominids a vision of how using rudimentary tools could enable them to eat in safety, away from the danger posed by predators. According to author Arthur C. Clarke, this training helped humans not only to survive but also to evolve habits of mind (envisioning a future where what one is about to decide has become part of history) that have served us well.

  2. To oversimplify, the cost of obtaining information consists of direct resource costs plus any harms that accrue because information takes time to develop and the resulting delay may have consequences of its own.

  3. On average, therefore, the total cost of the best a priori decision is roughly $16 million, but it would be incorrect to assume that perfect information is worth nearly as much as the full $16 million. One must remember that even with perfect information, there will be costs. In fact, if we knew the exact value of R, we might be able to get by with Option 1 and incur costs of roughly $6.25 million (if R turned out to be between 0 and 12.5), or we might learn that Option 3 is needed, with associated costs of $22.5 million (if we learned that R was exactly 62.5) or more than $28 million (if we learned that R was 200 or greater). On average, we could decrease our expected costs from $15.9 million to about $8 million if we learned R exactly: that difference is mathematically the same as the expected regret of deciding now, and both are tantamount to the value of perfect information.

  4. It is possible, as was mentioned at the conference, that new information can appear to increase uncertainty. I believe that this view is illogical, and I prefer to explain it as “more information can sometimes reveal that there was more uncertainty than you realized at the time: the uncertainty is smaller now, though perhaps larger than the overconfident view you held previously.”

References

  • Adams, J. (1997). Cars, cholera and cows: Virtual risk and the management of uncertainty. Manchester: Manchester Statistical Society.


  • Akerlof, G. A. (1970). The market for ‘lemons’: Quality uncertainty and the market mechanism. Quarterly Journal of Economics, 84(3), 488–500.


  • Bell, D. E. (1982). Regret in decision making under uncertainty. Operations Research, 30(5), 961–981.


  • Bernoulli, D. (1738). Specimen theoriae novae de mensura sortis. Commentarii Academiae Scientiarum Imperialis Petropolitanae, 5, 175–192. (Reprinted in translation as Exposition of a new theory on the measurement of risk. Econometrica, 22, 123–136, 1954)


  • Böhm-Bawerk, E. (1891). The positive theory of capital (William A. Smart, Trans.). London: Macmillan and Co.


  • Breiman, L., Friedman, J. H., Olshen, R. A., & Stone, C. J. (1984). Classification and regression trees. Monterey: Wadsworth.

  • Checkpoint Software Technologies. (2009, October 21). Guide to the TCO of encryption. Redwood City, CA. http://security.networksasia.net/content/guide-tco-encryption. Accessed July 17, 2012.

  • El-Gamal, M. A., & Grether, D. M. (1995). Are people Bayesian? Uncovering behavioral strategies. Journal of the American Statistical Association, 90(432), 1137–1145.

  • Ernst & Young LLP. (2009). Outpacing change: Ernst & Young’s 12th annual global information security survey. http://www.ey.com/Publication/vwLUAssets/12th_annual_GISS/$FILE/12th_annual_GISS.pdf

  • Finkel, A. M. (2010, October 20). Out of balance: (Why) are risk assessors more interested in uncertainty and variability than regulatory economists are? Presentation at the Society for Benefit-Cost Analysis annual meeting, Washington, DC.


  • Finkel, A. M. (2011). Solution-focused risk assessment: A proposal for the fusion of environmental analysis and action. Human and Ecological Risk Assessment, 17(4), 754–787. See also five commentaries on this article in the same issue of the journal, 788–812.

  • Finkel, A. M., & Evans, J. S. (1987). Evaluating the benefits of uncertainty reduction in environmental health risk management. Journal of the Air Pollution Control Association, 37(10), 1164–1171.


  • Finkel, A. M., Shafir, E., Ferson, S., Harrington, W., et al. (2006). Transferring to regulatory economics the risk-analysis approaches to uncertainty, interindividual variability, and other phenomena (National Science Foundation Grant #0756539). Decision, Risk, and Uncertainty program (Human and Social Dynamics of Change competition).

  • Johnson, C. (2008, October 15). The global state of information security. CIO Magazine.


  • Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47(2), 263–291.

  • Knight, F. (1921). Risk, uncertainty and profit. Boston: Hart, Schaffner & Marx/Houghton Mifflin Co.


  • National Research Council. (1983). Risk assessment in the federal government: Managing the process (the “Red Book”). Washington, DC: National Academy Press.


  • National Research Council. (2009). Science and decisions: Advancing risk assessment. Washington, DC: National Academy Press.


  • Nelson, P. (1970). Information and consumer behavior. Journal of Political Economy, 78(2), 311–329.


  • Plunkett, E. (Lord Dunsany). (1935, January 28). Jorkens’ Revenge. The (London) Evening Standard. (Reprinted in The collected Jorkens (Vol. 2). San Francisco: Night Shade Press, 2005)


  • Ponemon, L. (2009, April 22). The cost of a lost laptop. Traverse City, MI: The Ponemon Institute.

  • Soo Hoo, K. (2002). How much is enough? A risk management approach to computer security. Ph.D. dissertation, Department of Management Science and Engineering, Stanford University, Stanford.

  • Stoneburner, G., Goguen, A., & Feringa, A. (2002, July). Risk management guide for information technology systems (NIST Special Publication 800-30). Gaithersburg: National Institute of Standards and Technology.


  • Strategic Data Management. (2008, April 21). Effective data management: Imperative to align business and IT interests.


  • Sturgeon, W. (2006, January 27). Could your laptop be worth millions? C-NetNews.com http://news.com.com/Could+your+laptop+be+worth+millions/2100-1029_3-6032177.html

  • Sutton, W., & Linn, E. (2004). Where the money was: The memoirs of a bank robber. New York: Broadway.


  • Viscusi, W. K., & Aldy, J. (2003). The value of a statistical life: A critical review of market estimates throughout the world. Journal of Risk and Uncertainty, 27(1), 5–76.


  • von Neumann, J., & Morgenstern, O. (1944). Theory of games and economic behavior. Princeton: Princeton University Press.


  • Yokota, F., & Thompson, K. (2004). The value of information in environmental health risk management decisions: Past, present, future. Risk Analysis, 24(3), 635–650.

Correspondence to Luther Martin or Adam M. Finkel.

3.C Commentary: Harvesting the Ripe Fruit: Why Is It so Hard to Be Well-Informed at the Moment of Decision?

3.C.1 Introduction

The mindset and the algorithms that enable the systematic appraisal of the value of information (VOI) confer a power that is hard to imagine refusing. And yet, VOI methods remain confusing and underutilized. We devote hundreds of billions of dollars each year to public sector decisions (waging war, protecting health, safety, and the environment, etc.), so the stakes—measured by the potential for wasted costs or harms mistakenly tolerated—are vast. We also spend billions of dollars each year on research ostensibly related to these decisions (i.e., applied research and data collection), and sometimes the tiniest ripple in the realm of information can cause huge waves in the much larger realm of the costs and benefits of decisions. For example, in hindsight it is at least conceivable that a few thousand dollars of additional effort spent resolving the factual controversy over whether Saddam Hussein was trying to obtain uranium from Niger might have changed the decision whether to begin a war that lasted for nearly nine years. In other words, spending on research only a percent or so of what we spend on control may be a foolish way to economize, but even more questionable is our refusal to spend even a percent of our research budget on asking the meta-questions that would optimize the value of that research.

The tools of VOI analysis can help us decide how much we need to know before we should feel ready to make a decision, and can even help channel our efforts toward or away from specific subsets of information collection, yet they are barely used where they are most needed. This essay responds to Luther Martin’s chapter about the value of information in data security and then tries to explain why more general concepts of VOI remain curiosities rather than centerpieces, from the perspective of a former federal agency senior executive who has tried to evangelize about VOI methods to environmental, health, and safety agencies (especially the U.S. Environmental Protection Agency) over the past 25 years.

3.C.2 Further Thoughts on the Value of Business Information

Luther Martin uses his deep knowledge about computer security and the slow adoption of inexpensive safeguards by businesses to make some excellent points about how risk analysis can shed light on the value of protecting data. Martin essentially takes a revealed-preference approach to estimating the value of safeguarding laptop data: according to this approach, the demand for encryption software should be a function of the losses incurred if a user who can exploit the information acquires it, multiplied by the probability of this untoward event. However, Martin might have considered one or more of these three refinements to this basic (probability × consequence) approach to estimating the value of preventing a loss (a numeric sketch of the basic calculation follows the list):

  • Risk attitude. The value an individual places on protecting an asset may, of course, be either smaller or larger than the expected monetary consequences of the threat (because of a nonlinear relationship between monetary value and utility), or may not be fully captured by expected utility at all (see the literature on decision regret, prospect theory, and other refinements of expected utility, including Bell (1982) and Kahneman and Tversky (1979)).

  • Interindividual variability in preference. It is possible that the subpopulation of customers who do buy laptop insurance or encryption software are precisely the users who place a higher relative value on their own data.

  • Uncertainty in risk. The point estimate of 0.0024 for the probability that someone who finds a laptop can exploit valuable data therein seems to assume randomness where behavior is not random: where valuable business data are known to exist, thieves may not target victims at random, and someone who finds a lost laptop and doesn’t know how to exploit its data may be able to sell it to someone who values the data highly and does.
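For concreteness, here is a minimal numeric sketch (in Python) of the basic calculation that these three bullets refine. The 0.0024 exploit probability is the chapter’s point estimate; the annual loss probability and the loss magnitude are hypothetical placeholders, not figures from Martin’s analysis.

    p_lost    = 0.05       # hypothetical: annual chance the laptop is lost or stolen
    p_exploit = 0.0024     # chapter's point estimate: a finder can exploit the data
    loss      = 1_000_000  # hypothetical: dollar harm if the data are exploited

    expected_loss = p_lost * p_exploit * loss   # basic probability x consequence
    print(expected_loss)   # the most a risk-neutral owner should pay per year to prevent the loss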

Nevertheless, his chapter shows that valuation—a concept that many readers of this collection may think of only in terms of willingness to pay for intangible benefits to longevity, quality of life, or the environment—is applicable to intangible market commodities as well. That the demand for tools that protect stolen data from being used against the victim is weaker than the purveyors of the tools would prefer is a familiar story to regulatory agencies, which often struggle to mobilize public support for protective measures or to catalyze public willingness to take self-protective steps.

3.C.3 Two Kinds of Value of Information

In his opening remarks at the Resources for the Future workshop, Lawrence Friedl recited two lists of technical terms, each used by a particular discipline, to show compellingly that collaboration between (say) geoscientists and economists is made more vexing by the lack of jargon in common. I think Martin’s chapter shows in addition that some terms that appear to be common to multiple disciplines may be an even bigger impediment to interdisciplinary collaboration because disciplinary specialists believe they use the term the same way as those in other fields do. The kind of “value of information” I work with and the kind Martin writes about here have much in common: they both start from concern about losing something of value. But in data security, the information itself is the commodity that we value (in the sense of “are afraid to lose”), whereas in more general decision theory, information is something we ascribe value to—it is a means to avoid losing something else of value. In the latter context, information—perhaps more precisely called research—has value (in the sense of “efficacy”) because armed with it, we can get more of what we really value.

So in Martin’s example, the value of information is akin to the value of life in the kind of regulatory decision problem I will sketch out below. By putting a value on laptop data, we can help determine which of the decisions we could make would enable us to best protect the data, in light of the increasing cost of achieving more assurance. But the choice of how assiduously to protect data, like every other important and nontrivial decision each of us will ever make, is complicated by uncertainty. Martin could have extended his chapter, therefore, to ask some value-of-research questions, all flowing from the idea that we might seek information to better protect our data (our information). How uncertain is the assessed probability of the data falling into the hands of someone who could use the data to harm me? How uncertain is the loss I would incur in this eventuality? What could I learn that would reduce my uncertainty about these parameters, and how much would it cost me to learn more? These are the raw materials for assessing the value of information—whether it will be harnessed to protect lives, ecosystems, corporate profits, or in this somewhat confusing mix of two different usages of the same word, to protect other information.

3.C.4 The Classic VOI Setup for Risk Regulatory Decisions

Information has value only insofar as it reduces potential losses that follow from suboptimal decisions (Finkel and Evans 1987; Yokota and Thompson 2004). That bold statement already excludes some of the most important aspects of how we colloquially treat information—in the immortal words of the Faber College motto in the 1978 movie Animal House, “Knowledge is Good,” after all—but the tight link between the performance of decisions and the salutary power of information is what enables quantitative estimates of VOI and ordinal comparisons among possible research strategies. To set up a VOI inquiry, therefore, the involved protagonists have to be willing to answer certain preliminary questions (here I pose them generally, but they also map onto the kind of regulatory cost-benefit examples I work with):

  • What are we trying to achieve? (In environmental, health, and safety regulation, to reduce risk net of the cost of control, otherwise known as “maximize net benefit”).

  • What choices do we have? (Here, either do nothing, or implement one or more control options whose costs and benefits can be estimated).

  • What don’t we already know perfectly? (Although I have written extensively about inattention to uncertainty in regulatory cost (Finkel et al. 2006; Finkel 2010), assume for simplicity here that only the risk is uncertain).

  • Which option would outperform all others, for each possible value of the uncertain quantity? (If we knew exactly how large the risk was, how tightly would we control it to avoid errors of overspending and underspending?)

Those questions set the stage for the “VOI question.” The real power of this method is that it encourages those involved to try a leap of insight—to imagine that they have already made a decision and can look back with pride or regret on what they did or might have done (see Note 1). The VOI question therefore is: “How much do we stand to lose if we decide now and later come to wish we had chosen otherwise?” The fundamental assertion of VOI analysis is that perfect information is worth exactly as much as the expected losses we stand to incur by doing the best we can now, within the shadow of uncertainty. This leads directly to the fundamental corollary: information that costs less than it is worth (see Note 2) should be pursued, while information that costs more than the benefits it delivers should be shunned.
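In symbols (the notation is mine, not the chapter’s): if TC_a(R) is the total cost of option a when the uncertain quantity turns out to equal R, the expected value of perfect information (EVPI) is

$$ \mathrm{EVPI} \;=\; \min_a E\big[TC_a(R)\big] \;-\; E\Big[\min_a TC_a(R)\Big], $$

the gap between the expected cost of the best choice we can make now and the expected cost of choosing after R is revealed; this difference is exactly the expected regret of the best a priori decision.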

The following example, adapted from my 1987 paper with John Evans, shows the relationship among choices, uncertainty, and information value. Assume that we face an uncertain risk to human health that, if left uncontrolled, will kill R people every year, and assume that we value a statistical life at $1 million (this estimate was less appallingly low when we developed this example for illustrative purposes 25 years ago). The agency charged with regulating the risk has three possible choices: (1) do nothing; (2) require polluters to spend a total of $10 million every year on controls that will reduce the risk by 80%; or (3) require polluters to spend $20 million per year on more efficient controls that will reduce the risk by 96%.

The total cost (TC) of each option, the control costs plus the monetized health harms left behind, is a function of only one unknown (R), and the values of TC (in $million) for each decision option are (1) R; (2) 10 + 0.2R; and (3) 20 + 0.04R. Simple algebra shows that for R < 12.5, TC is least when Option 1 is chosen, and that for R > 62.5, TC is least when Option 3 is chosen; for any intermediate value of R, Option 2 has the least cost. Figure 3.1 shows the TC of each option; the dotted line demarcates the least-cost frontier as a function of R.
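A minimal Python sketch of this three-option cost model (all figures in $ millions per year, exactly as defined above; the three illustrative values of R are mine, one from each region):

    def total_cost(R):
        # Control spending plus monetized residual harm, per the text above
        return {1: R,              # do nothing: R lives lost at $1M each
                2: 10 + 0.2 * R,   # 80% reduction for $10M/year
                3: 20 + 0.04 * R}  # 96% reduction for $20M/year

    def best_option(R):
        costs = total_cost(R)
        return min(costs, key=costs.get)

    for R in (5, 30, 100):
        print(R, best_option(R))   # -> 1, 2, 3: matches the 12.5 and 62.5 break-evens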

Fig. 3.1 The total cost (control costs plus the monetized harms of risks not controlled) of 3 hypothetical decision options, as a function of the uncertain baseline risk

Now assume that R is uncertain—because if it isn’t, we already know what to do and no additional information is germane, nor worth anything to obtain. Suppose R has an expected value of 29.6 but is lognormally distributed (about a median value of exactly 4) with a logarithmic standard deviation of 2 (i.e., the natural log of R is normally distributed with a standard deviation of 2). In this case, it turns out there is about a 72% chance that R is less than 12.5, about a 21% chance it is between 12.5 and 62.5, and about a 7% chance R exceeds 62.5. But again, on average R is 29.6, and the expected cost of Option 2 is still less than that of either of the other two choices.
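These figures are easy to reproduce; a sketch using scipy (in its parameterization, s is the standard deviation of ln R and scale is the median):

    from scipy.stats import lognorm

    R_dist = lognorm(s=2, scale=4)              # median 4, log-sd 2
    print(R_dist.mean())                        # ~29.6, the expected value of R
    print(R_dist.cdf(12.5))                     # ~0.72: region where Option 1 is best
    print(R_dist.cdf(62.5) - R_dist.cdf(12.5))  # ~0.20: Option 2 region
    print(R_dist.sf(62.5))                      # ~0.08: Option 3 region

(The exact values round to 72%, 20%, and 8%; the chapter rounds them slightly differently.)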

So if we have to live with the uncertainty, Option 2 is the best we can do. But with 79% probability (72 + 7), we will someday look back at that choice with regret: if we overestimated R, we could have saved $10 million per year (imposed no controls) and accepted a small amount of risk, whereas if we underestimated R, we could have spent $10 million per year more and reduced a very large risk more thoroughly (thereby saving lives worth more in total than $10 million per year). Figure 3.2 shows the regret of choosing Option 2 as a function of what we might learn R to actually be; superimposed on Fig. 3.2 is the (lognormal) uncertainty in R that we might choose to eliminate with more information. The regret of choosing Option 2 when Option 1 was wiser follows the line (10 − 0.8R); the regret of choosing Option 2 when Option 3 was wiser follows the line (0.16R − 10). By integrating the expressions

$$ \int_0^{12.5} (10 - 0.8R)\,f(R)\;dR \quad \text{and} \quad \int_{62.5}^{\infty} (0.16R - 10)\,f(R)\;dR, $$

where f(R) denotes the probability density function for the uncertain risk, one can calculate the expected regret of choosing Option 2 without gathering more information; in this example, it amounts to roughly $8 million per year. So VOI theory dictates that perfect knowledge about the exact value of R, which would allow us to choose an option with perfect confidence that it was the best available one, is worth about $8 million (per year, or converted to net present value using an appropriate discount rate).
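A numerical check of this calculation (a sketch; quad integrates each regret expression against the lognormal density used above):

    import numpy as np
    from scipy.integrate import quad
    from scipy.stats import lognorm

    f = lognorm(s=2, scale=4).pdf    # density of the uncertain risk R

    low, _  = quad(lambda R: (10 - 0.8 * R) * f(R), 0, 12.5)        # regret when Option 1 was wiser
    high, _ = quad(lambda R: (0.16 * R - 10) * f(R), 62.5, np.inf)  # regret when Option 3 was wiser
    print(low + high)                # ~8.0, i.e., roughly $8 million per year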

Fig. 3.2 The additional cost of choosing Option 2 as compared to Option 1 (the red line) or as compared to Option 3 (the blue line), as a function of the uncertain baseline risk

Note that this sum is fairly large relative to the general stakes of this decision. On average, we expect to spend $10 million and incur (29.6 × 0.2) = $5.9 million in cost attributable to “lives not saved,” so it is worth roughly half of this $15.9 million to eliminate the uncertainty (see Note 3). But this ratio is large because the uncertainty in R is quite large, and because the best a priori decision is superior only 21% of the time. When the best decision is this precarious, knowledge is more than “good”; it is valuable. But on the other side of the same coin, when uncertainty is small and/or when it would take a large misestimation to make a different decision better than the one about to be chosen, knowledge could have little extra value, and quixotic attempts to obtain it may cost much more than they help.

The example above may help elucidate some practical rules of thumb about VOI:

  • To a rough first approximation, bigger decisions justify more extensive research, as do larger uncertainties. Obviously, it makes little sense to buy a $5 racing magazine to help decide which horse to place a $1 bet on.

  • The converse of this advice, however, is more important: not all big decisions justify extensive research. The lesson of Fig. 3.1 (although lessons are made to be challenged) is that once we’ve learned enough to be very (completely) confident that R must lie between 12.5 and 62.5 deaths per year, further information has little (zero) value. With one important caveat (see the last paragraph of this essay), if nothing you can learn can make you want to change your mind, it’s time to stop dithering and act.

  • When it’s clear we do need to know more, VOI theory says that not all uncertainty reductions have equal value, and that small targeted reductions can be much more useful than large reductions achieved by brute force. Although this is easier said than done, the goal of reducing uncertainty in this context should be to end up with an uncertainty distribution completely contained within one of the regions in a schematic like Fig. 3.1, where one particular decision dominates all others. In practice, this sometimes means focusing attention on one or both tails of the current uncertainty distribution; if you can rule out the tails, you can “rule in” the best course of action. In the general case where uncertainty stems from several separable components, one can simulate the results of a research investigation before conducting it, to see what effects it would have on the tails (or on any part of the uncertainty distribution that straddles more than one region where a particular decision is optimal); a Monte Carlo sketch of this idea follows the list. In human health risk assessment, both the extent of exposure to the stressor and its potency (the probability of harm per unit of exposure) are always uncertain—so it is always possible to envision what the uncertainty distribution would look like after resources were expended to obtain N more environmental samples, or instead to conduct dose-response experiments on N more laboratory animals or epidemiologic investigations on N more exposed humans (see Note 4).
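To make that last bullet concrete, here is a Monte Carlo sketch that recomputes the expected regret of the best a priori choice under a hypothetically tightened uncertainty distribution. Halving the log-standard deviation below is my stand-in for what a proposed study might deliver; it is an assumption, not a figure from the text.

    import numpy as np

    rng = np.random.default_rng(0)

    def expected_regret(log_sd, median=4.0, n=1_000_000):
        R = median * np.exp(log_sd * rng.standard_normal(n))  # lognormal draws of the risk
        tc = np.stack([R, 10 + 0.2 * R, 20 + 0.04 * R])       # total cost of Options 1-3
        a_star = tc.mean(axis=1).argmin()                     # best choice if we decide now
        return (tc[a_star] - tc.min(axis=0)).mean()           # expected regret = value of perfect info

    print(expected_regret(2.0))   # ~8.0: the chapter's current uncertainty
    print(expected_regret(1.0))   # ~1.1: most of the VOI evaporates after the hypothetical study

Simulating the post-study distribution before spending anything on the study is what lets an analyst rank competing research investments.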

3.C.5 Whence the Resistance?

As a naive graduate student in the mid-1980s, I went with my mentor to several program offices of the U.S. Environmental Protection Agency (EPA), full of enthusiasm for a set of tools that could shed light on how much research is too much, but especially on how pound-foolish it is to make billion-dollar decisions with million-dollar (or smaller) research programs behind them. Between our salesmanship and the agency’s receptivity, little was accomplished. More than 20 years later, I found myself on the Board of Scientific Counselors advising EPA’s Office of Research and Development on how it could develop strategic plans for environmental research in support of EPA’s program offices, and I found that VOI thinking had advanced scarcely at all in the intervening decades. I offer several reasons for the slow adoption of VOI methods.

First, agencies are often risk averse and populated by risk-averse individuals. A tool and a mindset that could reveal the general need for substantial increases in applied research—but could also suggest specific instances where additional research would be superfluous—may be seen as a mixed blessing at best.

Second, because VOI is in essence the value of uncertainty reduction, it presupposes the willingness and the capability to estimate how uncertain the key parameters (risk, cost, efficiency of controls, etc.) are. Agencies may be resisting this step rather than the VOI mindset per se. However, I think this is becoming a less likely explanation, as EPA and the other agencies have made tremendous strides in making quantitative uncertainty analysis of risk routine and in advancing new methods for it (see, e.g., NRC 2009), although without commensurate attention to uncertainty in cost, these advances may promise more than they can deliver.

Resistance to VOI may also be a symptom of resistance to more general methods of quantitative decision analysis. In my experience, people elected or appointed to positions of decisionmaking responsibility sometimes believe, overtly or tacitly, that they must be good decisionmakers—that their innate skill (or their well-developed gut feelings) surpasses any formal method.

Moreover, since the organizational goal of VOI analysis is to harness research plans to improve decisionmaking, well-intentioned research managers may believe they have already made that leap when they take a baby step towards it without having used any VOI methods. I have seen several research programs highlight the fact that they are now beginning to link their research agenda to “serve the needs of decisionmakers”—but by this they often mean that they ask the program offices for clues as to what problems are most important to them and try to focus more of their research efforts on the A-list problems. This is assuredly an improvement over any less interactive method of setting research priorities, but of course it never considers decision regret (the sine qua non of quantitative valuation of information), simply because no decisions are ever mentioned. Just as big uncertainties do not necessarily imply valuable research, big problems are an even less reliable indicator of critical knowledge gaps. Big problems with clearly optimal (even if costly) solutions don’t demand extensive research, nor do big problems with intractable uncertainties. But any dialogue across the research-program divide may tend to foment the sense that all the desired conceptual linkages have also been forged.

That leads to perhaps the most fundamental problem of all. EPA and many other agencies operate under a linear research-analysis-decision paradigm, probably first codified in the landmark 1983 National Academy of Sciences report Risk Assessment in the Federal Government: Managing the Process (“The Red Book”), in which little thought is given to solutions until the problems are analyzed ad nauseam. Statutory design sometimes dictates such a process: for example, Congress has told EPA to refine its estimates of the risks of criteria air pollutants every 5 years, but the National Ambient Air Quality Standards that EPA sets are aspirational only and dictate no specific actions of any kind. In other situations, EPA chooses to study individual substances and set emissions or concentration standards, rather than to compare any actual controls—and yet arguably, the nation does not have a “dioxin problem” but a series of product and technological choices that each contribute to an unacceptable total dioxin load in the environment. The dogma that risk assessment must precede and inform risk management is actually diametrically counter to decision theory, which starts from the premise that assessment exists to help discriminate among choices, not to exhaust itself and only then pass forward (incomplete) understanding to those responsible for thinking about solutions. If information has value only insofar as it sheds light on choices, and no one thinks hard about choices until too late, then all the resources previously devoted to information collection will have been aimless, and the urgency of doing anything at all, rather than “calling for further study,” may be irresistible.

So in part out of concern for a process that does not harness research to reduce decision regret, but more out of a larger concern that we are becoming too good at doing cost-benefit analysis and yet are not solving the problems we study, I have proposed that we consider a new policy paradigm I’ve termed solution-focused risk assessment (SFRA) (Finkel 2011; NRC 2009, Chapter 8). By asserting that cost-benefit analysis should not begin in earnest until after agencies and their affected stakeholders have given some concrete thought to solutions to be analyzed, SFRA would also provide a template for VOI methods to flourish. Perhaps even more significantly, it could enable the beginnings of a feedback from the study of problems to the study of solutions. One conundrum of VOI theory is that more and better choices can sometimes increase the value of information—you may not regret flipping a coin and picking one of two lousy choices, until someone suggests a third alternative for which better information could truly be a life-saver. A new relationship between analysis and action that encourages the analyst to say, “This research would help you make a better choice, but here’s another choice that might be even better,” is, in my view, the true validation of the value of wisdom, of which VOI is the price of admission.

© 2012 Springer Science+Business Media Dordrecht

Martin, L. (2012). Understanding the Value of Business Information. In: Laxminarayan, R., Macauley, M. (eds) The Value of Information. Springer, Dordrecht. https://doi.org/10.1007/978-94-007-4839-2_3