The cost of errors in prioritising projects

David J. Pannell

School of Agricultural and Resource Economics, University of Western Australia

Abstract

It is common for decision makers responsible for the allocation of public funds to projects to use scoring metrics for project assessment that are not consistent with economic theory. As a result, there is often a loss of benefits due to poor prioritisation of projects. The magnitudes of these losses are estimated for several commonly used scoring metrics. The study examines cases where weighted additive scoring is used inappropriately, where crucial variables are omitted from the assessment, and where project costs are not considered. In addition, the cost due to errors in data is assessed. The cost of errors in prioritising projects is greater in cases where the budget is relatively small, allowing only a small proportion of projects to be funded. Even if data is perfectly accurate, the cost of using poor assessment metrics is very high, typically 30 to 60 per cent of the potential benefits (compared with 50 to 80 per cent for random, completely uninformed project selection). The cost of inaccuracy in the data used is much lower.

1. Introduction

Decision makers responsible for the allocation of public funds among competing projects face the issue of how to prioritise the alternative projects. Many economists, and organisations such as treasury departments, advocate that these decision makers should use economic analysis to estimate the benefits and costs for each of the projects, including non-market values if relevant, and select the combination of projects that delivers the greatest net benefits overall. In practice, many decision makers and decision making bodies use approaches that are not consistent with this. Indeed, in many areas of government, use of economic analysis to estimate the benefits of competing projects is rare or even non-existent. For example, in the area of environmental policy there have been many published examples of analyses attempting to introduce greater economic rigour to project prioritisation (e.g., Peterson et al. 1994; Stoneham et al. 2003; Schilizzi and Latacz-Lohmann 2007; Wilson et al. 2007; Connor et al. 2008), but in practice many environmental programs do not use economic methods in prioritisation (Pannell and Ridley 2008; Hajkowicz 2009; Possingham 2009).

Possible reasons for this include: ignorance of economic methods, a shortage of economists in the organisation, an antipathy to economic methods, excessive transaction costs of conducting economic analysis for large numbers of projects, enthusiasm for an alternative prioritisation method, lack of information that would be required to complete an economic analysis, and lack of any formal requirement for economic analysis (Possingham 2009; Laycock et al. 2009).

Even where decision makers do not use economic analysis, it is common for a metric of some sort to be used to rank the alternative projects being considered for investment. The ranking produced by that metric may be inconsistent with the ranking that would be obtained from a rigorous economic analysis. To the extent that it is inconsistent, there will be a cost to society, because the projects selected for funding are not those that would generate the greatest total net benefits.

This paper is an exploration of the costs of using sub-optimal prioritisation metrics. Four potential sources of inaccuracy are assessed: the use of a weighted additive scoring metric when a multiplicative metric should be used; the omission of crucial variables from the metric; the failure to adequately consider project costs; and errors in the estimation of variables. Each of these sources of inaccuracy is prevalent in real-world prioritisation processes. The purpose of the study is to assess how serious the different issues are likely to be. It may be that some of these problems make little difference to the prioritisation of projects, while others make a large difference and warrant efforts to ensure that appropriate methods are used.

2. Indices for prioritisation

Assume that an investor must allocate a fixed budget among a set of projects. Each project has a known cost and known levels of variables related to benefits. The total cost of all projects would exceed the budget, so prioritisation is essential. The benefits and costs of each project are independent – they do not depend on which other projects are selected. To maximise net benefits, the investor must choose those projects with the highest benefit: cost ratios (BCRs, π = B/C), up to the point where the budget is exhausted.
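To make the selection rule concrete, here is a minimal Python sketch (my own illustration, not taken from the paper) of funding projects in descending order of benefit: cost ratio until the budget runs out. The marginal project is simply skipped here if it cannot be fully funded; the Methods section below refines this with pro-rata funding.

```python
import numpy as np

def choose_by_bcr(benefits, costs, budget):
    """Fund projects in descending order of benefit:cost ratio (pi = B/C)
    while the remaining budget covers each project's full cost."""
    order = np.argsort(benefits / costs)[::-1]  # best ratio first
    chosen, spent = [], 0.0
    for i in order:
        if spent + costs[i] <= budget:
            chosen.append(i)
            spent += costs[i]
    return chosen

# Toy example (invented numbers): five projects, budget of 1.5
b = np.array([0.9, 0.5, 0.8, 0.2, 0.6])
c = np.array([1.0, 0.4, 0.5, 0.3, 0.9])
print(choose_by_bcr(b, c, 1.5))  # -> [2, 1, 3]
```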

Suppose that the benefits of a project depend on four key variables: α, β, γ and δ. A key question is, how should these variables be combined to calculate benefits? One possibility is that they should be combined multiplicatively, so that the benefit: cost ratio would be calculated as follows.

π = (α × β × γ × δ)/C (1)

This multiplicative approach is appropriate whenever benefits are proportional to a variable. For example, the following variables (which would be relevant to different types of programs/projects) are all factors that may need to be included in the project assessment metric in a multiplicative way because they have a proportional relationship to benefits; a toy numeric illustration follows the list. [In some cases, their relationship to benefits may be non-linear to some extent (e.g. perhaps adoption by landholders). Depending on the degree of non-linearity, this may need to be factored into the assessment metric.]

• The area over which benefits would be generated

• The average benefit per unit area

• The number of businesses or individuals who would benefit

• The average benefit per business or individual

• The improvement in environmental values resulting from an environmental project

• The number of landholders who would adopt improved environmental works

• The probability of project success (1 – probability of failure due to factors such as climate, politics or technical failure)
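As a toy numeric illustration of why such factors multiply (the numbers are invented): a project covering 1,000 hectares, generating an average benefit of $50 per hectare, with a probability of success of 0.6, has expected benefits of 1,000 × 50 × 0.6 = $30,000. Halving any one of the three factors halves the expected benefit – exactly the proportionality that an additive score fails to capture.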

In practice, the benefits of projects are very commonly composed of these sorts of variables, and so a multiplicative approach to benefit estimation is often the most appropriate. Where competent analysts conduct a benefit: cost analysis of each project, this multiplication occurs as a matter of course, but where project assessment metrics are developed from a non-economic perspective, it may not. Even where one might expect the need for multiplication to be very obvious (such as accounting for the probability of project success), it does not necessarily occur. Indeed, it is possibly even more common to see benefit indices calculated using a weighted additive system, as follows.

πa = (w1 × α + w2 × β + w3 × γ + w4 × δ)/C (2)

where πa represents the benefit: cost ratio with benefits calculated additively, and w1 to w4 are subjective weights defined by the decision maker (e.g. Hajkowicz and McDonald 2006). This weighted additive objective function is commonly used in Multi-Criteria Analysis (Mendoza et al. 1999, or see various articles in the Journal of Multi-Criteria Decision Analysis). Some projects include variables that should, in principle, be combined additively when calculating project benefits. For example, benefits might be calculated for several stakeholder groups and combined additively. Or it may be that a project provides financial benefits and environmental benefits that need to be calculated separately and then added together, such that benefits consist of a combination of additive and multiplicative elements (Hajkowicz and McDonald 2006). However, within each of these benefit segments, it is likely that most benefit-related variables will need to be multiplied rather than added to calculate the benefits for that segment. In general, the issue needs careful consideration and judgement by the analyst.
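A small Python sketch of the contrast between the two metrics. The uniform distributions for the variables and weights follow the Methods section below; the cost range is an assumption made only for this snippet:

```python
import numpy as np

rng = np.random.default_rng(42)
alpha, beta, gamma, delta = rng.uniform(size=(4, 100))  # benefit variables, U(0,1)
C = rng.uniform(0.5, 1.5, size=100)   # costs: assumed range for this illustration
w1, w2, w3, w4 = rng.uniform(size=4)  # subjective weights, U(0,1)

pi = alpha * beta * gamma * delta / C                    # Equation (1)
pi_a = (w1*alpha + w2*beta + w3*gamma + w4*delta) / C    # Equation (2)

# The two metrics are typically only modestly correlated (cf. Figure 1)
print(round(np.corrcoef(pi, pi_a)[0, 1], 2))
```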

In this study, the starting assumption is that there is only one benefits segment (e.g. benefits are all financial or all environmental), so that Equation (1) is the appropriate measure of benefits. Later the alternative assumption is also examined.

In cases where benefits should be calculated multiplicatively, Equation (2) clearly provides erroneous estimates of the relative benefits of projects. For example, if Equation (2) is used, a project that should be given a low priority because it completely fails on an essential criterion (such as adoption) can erroneously be given a high priority because it scores well on other variables. Subsequent sections explore the potential seriousness of the error, in terms of total benefits foregone.

A second common error in formulating project assessment metrics is to omit crucial variables from the calculation of benefits. For example, in prioritising investments under Australia’s natural resource management programs (e.g. the Natural Heritage Trust), it has been common to fail to consider both the technical feasibility of the project and the likely adoption by land managers of proposed management changes. If a variable is omitted (say δ), the equation used to calculate benefits would be as follows (with πp representing the assessment index with a partial set of variables):

πp = (α × β × γ)/C (3)

or, if, in addition to omitting δ, a weighted additive system is used,

πap = (w1 × α + w2 × β + w3 × γ)/C (4)

Next, projects are sometimes ranked according to benefits, without consideration of costs. While this seems remarkable to an economist, it is apparently not uncommon in environmental management, as noted by Hajkowicz et al. (2007), Joseph et al. (2009) and Laycock et al. (2009). This implies ranking projects according to one of the following two equations:

πb = α × β × γ × δ (5)

or

πab = (w1 × α + w2 × β + w3 × γ + w4 × δ) (6)

Finally, there are likely to be errors, to some extent, in the estimation of the variables used to calculate benefits. Variables can be estimated more accurately, but usually at greater cost. The analysis here estimates the likely cost of inaccuracy by introducing random errors into the values of the variables.
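The mechanism by which errors enter is not spelled out above, so the following sketch assumes multiplicative normal noise scaled to the stated coefficient of variation, clipped at zero to keep variables valid:

```python
import numpy as np

def noisy(x, cv, rng):
    """Return x observed with normal error of standard deviation cv * x.
    The multiplicative form and the zero-clipping are assumptions."""
    return np.maximum(0.0, x * (1.0 + cv * rng.standard_normal(x.shape)))

rng = np.random.default_rng(0)
true_values = rng.uniform(size=100)
observed = noisy(true_values, cv=0.2, rng=rng)  # the 20 per cent CV scenario
```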

3. Methods

Based on the above discussion, scenarios examined in the analysis are as follows:

(a) Multiplicative calculation of benefits (Equation 1), which is assumed to provide accurate prioritisation and is used as the benchmark;

(b) Weighted additive calculation of benefits (Equation 2);

(c) Partial inclusion of variables: 1, 2 or 3 variables included (Equations 3 and 4);

(d) Ranking projects considering project benefits but not costs (Equations 5 and 6);

(e) Errors in measurement: normally distributed errors in all variables, with coefficients of variation of 10, 20 or 30 per cent in each case; and

(f) Random project prioritisation.

The cost of errors in assessment for project prioritisation will depend on the degree of selectivity required. In a funding program where the available budget is sufficient to fund only a small proportion of the proposed projects, the cost of errors will be greater than for a program where most projects can be funded. To investigate the impact of project selectivity on the cost of assessment errors, various budget sizes are simulated: 2.5, 5, 10, 20 and 40 per cent of the budget that would be required to fund all projects. In environmental programs, selectivity is generally high. For example, in the 2009 round of competitive funding under the Caring for our Country program in Australia, around 5 per cent of proposed projects were funded; in 2002 the European LIFE Environment Program funded 23 per cent of proposed projects (EC 2002); and in 2003 the US Environmental Quality Incentives Program (EQIP) funded 17 per cent (USDA 2003).

The study involves a full factorial design, examining all combinations of scenarios (a) to (e) and five budget levels: 2 × 4 × 2 × 2 × 5 = 160 combinations, plus random project selection as a worst case for comparison.

For illustrative purposes, it is assumed that 100 projects have been proposed for funding. The number of projects actually funded depends on the assumed budget, ranging from 2.5 to 40 per cent of the full amount requested (see above). For each of the 100 projects, values for α, β, γ and δ are generated randomly from a uniform distribution (0–1). For cases where weighted additive benefit scoring is used, weights are also generated randomly from a uniform distribution (0–1). Project costs are assumed to be imperfectly correlated with project benefits (R = 0.6).
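The paper does not state how the cost–benefit correlation is induced. One standard construction, sketched below, mixes the standardised benefit signal with independent noise; the shift and clipping that keep costs positive are assumptions and slightly dilute the realised correlation:

```python
import numpy as np

def correlated_costs(benefits, r, rng):
    """Generate costs correlated with benefits at roughly R = r."""
    z = (benefits - benefits.mean()) / benefits.std()
    raw = r * z + np.sqrt(1.0 - r**2) * rng.standard_normal(benefits.size)
    return np.maximum(0.05, 1.0 + 0.5 * raw)  # keep costs strictly positive

rng = np.random.default_rng(7)
a, b, g, d = rng.uniform(size=(4, 100))
benefits = a * b * g * d
costs = correlated_costs(benefits, 0.6, rng)
print(round(np.corrcoef(benefits, costs)[0, 1], 2))  # close to 0.6
```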

The procedure for the analysis is as follows; a condensed code sketch of the full loop appears after the list.

1. For 100 simulated projects, generate random values for all relevant variables and parameters.

2. Calculate the benefit: cost ratio for each project using Equation (1).

3. Select those projects with the highest benefit: cost ratios that fit within the program budget.

4. For the marginal project, assume that it is funded to the extent that the budget allows, and that its benefits are proportional to the funding it receives. All other projects receive either full funding or no funding.

5. Record the benefits for funded projects and total them.

6. Repeat steps 2 to 4 using an alternative prioritisation metric. Also repeat step 5, but use the correct measure of benefits from step 2, not from the alternative metric, to calculate total benefits from the projects selected using the alternative metric.

7. Comparing the total benefits from the two instances of step 5, calculate the cost (in percentage terms) from using the alternative prioritisation metric.

8. Repeat steps 1 to 7 a total of 1000 times to generate a frequency distribution of results.

9. Repeat steps 1 to 8 for all combinations of scenarios (b) to (f) and for five levels of program budget.
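The following condensed Python sketch implements steps 1 to 8 for one scenario (Equation (2) as the alternative metric, 20 per cent budget). It is a reconstruction under the assumptions noted in the earlier sketches, not the author's spreadsheet:

```python
import numpy as np

rng = np.random.default_rng(1)
N, SIMS, BUDGET_SHARE = 100, 1000, 0.20

def funded_benefit(score, true_benefit, cost, budget):
    """Fund projects in descending `score` order; the marginal project is
    funded (and yields benefits) pro rata, per step 4. Benefits are always
    measured with the true values, whatever metric chose the projects."""
    remaining, total = budget, 0.0
    for i in np.argsort(score)[::-1]:
        if remaining <= 0:
            break
        share = min(1.0, remaining / cost[i])
        total += share * true_benefit[i]
        remaining -= share * cost[i]
    return total

losses = []
for _ in range(SIMS):
    a, b, g, d = rng.uniform(size=(4, N))            # step 1
    benefit = a * b * g * d                          # Equation (1) numerator
    z = (benefit - benefit.mean()) / benefit.std()   # costs with R ~ 0.6 (assumed construction)
    cost = np.maximum(0.05, 1.0 + 0.5 * (0.6 * z + 0.8 * rng.standard_normal(N)))
    budget = BUDGET_SHARE * cost.sum()

    w1, w2, w3, w4 = rng.uniform(size=4)
    alt_score = (w1*a + w2*b + w3*g + w4*d) / cost   # Equation (2)

    optimal = funded_benefit(benefit / cost, benefit, cost, budget)  # steps 2-5
    chosen = funded_benefit(alt_score, benefit, cost, budget)        # steps 6-7
    losses.append(100.0 * (1.0 - chosen / optimal))

print(f"mean cost of poor prioritisation: {np.mean(losses):.0f}%")   # step 8
```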


4. Results and Discussion

4.1 Weighted additive scoring

Figure 1 illustrates a single randomly generated case, comparing BCRs for a set of 100 projects calculated multiplicatively (using Equation 1) with scores for the same projects under a weighted additive system (Equation 2). In this example, there are some projects for which Equation (2) is highly inconsistent with Equation (1). Overall, there is a modest correlation between the two metrics (R = 0.40). For each random simulation of the same situation, the graph looks somewhat different.

 

Figure 1. Scattergram illustrating correlation between benefit: cost ratios calculated correctly (Equation 1) and weighted additive scoring (Equation 2). For this randomly generated case, R = 0.40.

 

Figure 2 shows the project rankings corresponding to the scores in Figure 1. The correlation between the two ranking systems is low: R = 0.27. The two dashed lines in Figure 2 show the threshold ranking for project funding, assuming that the program has sufficient funds to support approximately 20 per cent of projects. [I say approximately 20 per cent because cost itself is a random variable that is generated for each project. Therefore the budget required to fund all projects is itself a random variable. For the simulations in Figures 2 and 4, the program budget is set at 20 per cent of the mean of that distribution.] Lower rankings correspond to superior projects (higher BCR scores).

The dashed lines divide the area of the graph into four sections. The upper right section contains those projects that are ranked poorly by both metrics – these projects would not be funded using either approach. The bottom left section contains projects that are ranked favourably by both metrics.

 

Figure 2. Project rankings for the example illustrated in Figure 1. Dashed lines show cut-offs for funding under each criterion (lower rankings more preferred), with a program budget of 20% of the level required to fund all projects. R = 0.27, cost of poor prioritisation = 41%.

 

The upper left section has projects that are ranked favourably by the multiplicative system and unfavourably by the weighted additive system. The lower right section has projects that are ranked unfavourably by the multiplicative system and favourably by the weighted additive system. The latter two sections introduce errors into the prioritisation process. For example, in this simulation, the weighted additive system leads to funding of some projects with very poor rankings under the multiplicative metric (five projects ranked worse than 70th out of 100). Overall, the total benefits of projects selected for funding using the additive metric (Equation 2) are 41% less than the total benefits of projects prioritised using the multiplicative metric. Thus, if the multiplicative metric is correct, 41% is the cost of using an additive metric in this simulation.

4.2 Omitted variables

Figures 3 and 4 show equivalent outputs for the case comparing two multiplicative metrics: one that includes all four relevant variables (Equation 1) and one that omits a variable (Equation 3). The correlation between results for Equations (1) and (3) is higher than for the previous example: R = 0.66 for this single simulated example (Figure 3). As a result, the correlation between project rankings is also higher than in the previous example: R = 0.84 (Figure 4).

 

Figure 3. Scattergram illustrating correlation between benefit: cost ratios calculated correctly (Equation 1) and omitting one crucial variable (Equation 3). For this randomly generated case, R = 0.66.

 

Figure 4. Project rankings for the example illustrated in Figure 3. Dashed lines show cut-offs for funding under each criterion (lower rankings more preferred), with a program budget of 20% of the level required to fund all projects. R = 0.84, cost of poor prioritisation = 19%.

 

In this example there are fewer projects in the lower right section of Figure 4: projects that would not be funded based on the complete metric but would be funded using the partial metric. This means that the loss of potential value attributable to using the partial metric is less than in the previous case: 19 per cent.

The results in Figures 1 to 4 illustrate the output from steps 1 to 7 of the procedure (see Methods section), for two particular metrics, for a 20 per cent budget. Step 8 involves repeating the process 1000 times to generate a frequency distribution of the results. Figure 5 shows the frequency distribution of the percentage cost of using the alternative metric for the scenario illustrated in Figures 3 and 4. In the single simulation for Figures 3 and 4, the cost of poor prioritisation was 19 per cent, but over 1000 simulations in Figure 5 the cost ranges from approximately 5 to 35 per cent, with a mean of 18 per cent. This mean cost is used below as the summary measure of the cost of poor prioritisation. In step 9, 1000 simulations are conducted for a large number of scenarios and the mean costs in each case are recorded and compared.

 

Figure 5. Frequency distribution (1000 random simulations) of cost of poor prioritisation for the scenario illustrated in Figures 3 and 4. Mean cost = 18%, standard deviation of cost = 6%, for 95% of cases cost < 29%.

 

Figure 6 shows how the mean cost of poor prioritisation varies as the program budget varies, in this case using Equation (3) as the alternative metric. The mean cost varies from 37 per cent if the budget is sufficient to fund only 2.5 per cent of projects, down to 9 per cent if the budget is sufficient for 40 per cent of projects. Clearly, the cost of using an inadequate scoring metric is sensitive to the level of program funding.

 

Figure 6. Relationship between budget level (% of full funding) and expected cost of poor prioritisation (mean of 1000 simulations), for the scenario illustrated in Figures 3 and 4.

 

Tables 1 to 4 show results from step 9 of the procedure: mean costs for many different scenarios. To provide another benchmark for comparison, Table 1 shows the mean costs of completely failing to prioritise. In these simulations, projects are selected at random with no input of information. The mean loss of potential benefits from funded projects ranges from around 50 to 80 per cent, depending on the program budget. The various alternative metrics (Equations 2 to 6) should perform better than this, but not as well as Equation (1).

 

Table 1. Expected cost of random project selection relative to the optimum.

Budget limit   Expected cost
2.5%           79%
5%             74%
10%            70%
20%            63%
40%            52%

 

Table 2 shows the expected costs of prioritising using Equations (2) to (4), covering: multiplicative scoring or weighted additive scoring; one, two, three or four variables; and five budget levels. In this table, it is assumed that decision makers correctly consider project costs, and that there is perfect knowledge about variables and parameters. The results for a single variable are the same for multiplicative or weighted additive systems, so they are only shown once.

 

Table 2. Expected cost of poor prioritisation (ranked by benefits/cost; no errors in estimation of parameters).

Rule:        Multiplicative                 Weighted additive
Variables:   4      3      2      1         4      3      2
Budget
2.5%         0%     37%    53%    62%       55%    56%    57%
5%           0%     33%    49%    59%       52%    53%    56%
10%          0%     26%    45%    56%       49%    50%    53%
20%          0%     18%    38%    53%       44%    46%    48%
40%          0%     9%     24%    42%       36%    38%    39%

 

Reinforcing the results in Figure 6, mean cost is sensitive to budget, especially where a multiplicative system is used. The weighted additive system performs worse than the multiplicative decision rule, and it does not improve much as more of the relevant information is used (i.e. as more variables are included). Even if all four relevant variables are included in a weighted additive scoring system, the cost of poor prioritisation is high – more than 50 per cent for the lower budget levels. Even if the budget is large, the weighted additive system performs relatively poorly. Clearly, if the nature of the issue is such that benefits should be calculated multiplicatively, use of a weighted additive scoring system provides very poor information about the merits of alternative projects.

The above results are based on a case where the BCR should be correctly calculated multiplicatively and the weighted additive system introduces errors (the most relevant case in my judgement). Conversely, if the weighted additive system is correct and a multiplicative system is used incorrectly, the cost is somewhat lower, but still substantial. If all four parameters are included, the expected cost of the error would be 35, 35, 33, 32 and 29 per cent for the five budget levels (not shown in a table). Interestingly, the cost is relatively insensitive to budget in this case. Overall, it is important to carefully consider whether variables should be added or multiplied because getting this wrong makes a large difference to the quality of decision making.

4.3 Project costs ignored

Table 3 shows a similar set of results for the case where projects are ranked without consideration of their costs. If this is the only error made, then the mean costs of that error alone range from 5 to 32 per cent (see column 2). Particularly for low budget levels, this error is costly. If, in addition, one or more variables are omitted, ignoring costs does not greatly change the cost of poor prioritisation (compare Tables 2 and 3). Further, if weighted additive scoring is used, the cost of poor prioritisation actually falls somewhat as a result of ignoring project costs (compare Tables 2 and 3). Nevertheless, it is still much better to avoid all of these mistakes – they by no means fully cancel each other out.

 

Table 3. Expected cost of poor prioritisation (ranked without considering project costs; no errors in estimation of parameters).

Rule:        Multiplicative                 Weighted additive
Variables:   4      3      2      1         4      3      2
Budget
2.5%         32%    42%    53%    67%       38%    46%    56%
5%           27%    36%    47%    60%       33%    40%    49%
10%          19%    28%    40%    53%       26%    34%    42%
20%          11%    20%    30%    44%       19%    26%    33%
40%          5%     11%    19%    31%       13%    18%    23%

 

4.4 Errors in data

Table 4 is similar to Table 3 with the addition of errors in parameter estimation (based on 20% coefficient of variation for all variables). In this case, the additional impact of errors in variables is minor – 5 per cent or less in most cases (compare Tables 3 and 4).

 

Table 4. Expected cost of poor prioritisation (ranked without considering project costs; errors in estimation of parameters).

Rule:        Multiplicative                 Weighted additive
Variables:   4      3      2      1         4      3      2
Budget
2.5%         38%    47%    56%    68%       45%    52%    59%
5%           31%    39%    49%    62%       40%    47%    53%
10%          23%    32%    43%    55%       32%    40%    47%
20%          14%    23%    34%    47%       25%    31%    38%
40%          7%     13%    21%    33%       16%    21%    26%

 

Other simulations were conducted to examine scenarios where the coefficients of variation of data errors are lower (10 per cent) or higher (30 per cent). These simulations assumed that project costs were correctly included. Results are shown in Table 5. Columns 2 and 3 of the table show results if errors in data are the only problem (i.e. the assessment uses a multiplicative scoring system including all four relevant variables). For an error CV of up to 20 per cent, the cost of data errors is no more than 14 per cent. Even for the lowest data accuracy (30 per cent CV of errors), the cost is only 23 per cent under the most limiting budget conditions.

 

Table 5. Expected cost of poor prioritisation (ranked by benefits/cost).

CV of errors   Multiplicative model,      Multiplicative model,     Average across all scenarios with
in data        4 variables, 2.5% budget   4 variables, 40% budget   missing variables or additive scoring
10%            5%                         1%                        0.2%
20%            14%                        3%                        1%
30%            23%                        8%                        2%

 

In Table 3 we saw that the cost of ignoring project costs is low if, in addition, the scoring metric has missing variables or is additive when it should not be. The same is true to an even greater extent for errors in data (column 4 of Table 5). When combined with errors in the assessment metric, the cost of errors in data is generally very minor. It is very interesting that accuracy in the estimation of variables is considerably less important than using rigorous methods to assess the information. It appears to be much more important to undertake a comprehensive, economically defensible assessment than to focus primarily on the accuracy of the data used.

5. Conclusion

In most realistic cases, the cost of poor prioritisation of projects due to use of inappropriate metrics is between 30 and 60 per cent of the potential benefits from investment. Even if the information used to underpin decisions is perfectly accurate, relatively minor errors in the procedure used to score and rank projects can make very large differences to the quality of decision making. Indeed, some of the decision rules investigated performed only a little better than completely uninformed random allocation of funds among projects.

For budget levels of five to 10 per cent, omitting variables, inappropriately using weighted additive scoring, or failing to account for project costs each makes a large difference to the benefits achieved from investment. Investors need to focus on getting all of these elements right. On the other hand, using accurate data in the calculations is much less important than the way that the data is combined to prioritise projects.

REFERENCES

Connor, J.D., Ward, J.R. and Bryan, B. (2008). Exploring the cost effectiveness of land conservation auctions and payment policies, Australian Journal of Agricultural and Resource Economics 51, 303–319.

European Commission (2002). LIFE – Environment Projects. European Commission, Brussels.

Hajkowicz, S. (2009). The evolution of Australia’s natural resource management programs: Towards improved targeting and evaluation of investments, Land Use Policy 26, 471-478.

Hajkowicz, S. and McDonald, G. (2006). The Assets, Threats and Solvability (ATS) model for setting environmental priorities, Journal of Environmental Policy and Planning 8(1), 87-102.

Hajkowicz, S., Higgins, A., Williams, K., Faith, D.P. and Burton, M. (2007). Optimisation and the selection of conservation contracts, Australian Journal of Agricultural and Resource Economics 51(1), 39-56.

Imbroscio, D.L. (2003). Overcoming the Neglect of Economics in Urban Regime Theory, Journal of Urban Affairs 25, 271-284.

Joseph, L.N., Maloney, R.F. and Possingham, H.P. (2009). Optimal allocation of resources among threatened species: a project prioritisation protocol, Conservation Biology 23(2), 328-338.

Laycock, H., Moran, D., Smart, J., Raffaelli, D. and White, P. (2009). Evaluating the cost-effectiveness of conservation: The UK Biodiversity Action Plan, Biological Conservation (forthcoming). doi:10.1016/j.biocon.2009.08.010

Mendoza, G.A., Macoun, P., Prabhu, R., Sukadri, D., Purnomo, H. and Hartanto, H. (1999). Guidelines for Applying Multi-Criteria Analysis to the Assessment of Criteria and Indicators, The Criteria & Indicators Toolbox Series 9, Center for International Forestry Research, Jakarta, http://www.cifor.cgiar.org/acm/methods/toolbox9.html [accessed 8 September 2009].

Pannell, D.J. and Ridley, A.M. (2008). Lessons from dryland salinity policy experience in Australia, Proceedings, 2nd International Salinity Forum: Salinity, Water and Society – Global Issues, Local Action, 31 March – 3 April 2008, Adelaide (CD-ROM).

Peterson, D.L., Silsbee, D.G., Schmoldt, D.L. (1994). A case study of resource management planning with multiple objectives and projects, Environmental Management 18(5), 729-742.

Possingham, H. (2009). Five objections to using decision theory in conservation and why they are wrong, Decision Point Issue 26, March 2009, pp.2-3, http://www.aeda.edu.au/docs/Newsletters/DPoint_26.pdf [accessed 8 September 2009].

Schilizzi, S. and Latacz-Lohmann, U. (2007). Assessing the performance of conservation auctions: an experimental study, Land Economics 83(4), 497-515.

Stoneham, G.V., Chaudri, V., Ha, A. and Strappazon, L. (2003). Auctions for conservation contracts: an empirical evaluation of Victoria’s BushTender trial, Australian Journal of Agricultural and Resource Economics 47, 477–501.

United States Department of Agriculture (2003). Financial Year EQIP Unfunded Application Information. USDA, Washington DC.

Wilson, K.A., Underwood, E.C., Morrison, S.A., Klausmeyer, K.R., Murdoch, W.W., Reyers, B., Wardell-Johnson, G., Marquet, P.A., Rundel, P.W., McBride, M.F., Pressey, R.L., Bode, M., Hoekstra, J.M., Andelman, S., Looker, M., Rondinini, C., Kareiva, P., Shaw, M.R. and Possingham, H.P. (2007). Conserving biodiversity efficiently: what to do, where, when, PLoS Biology 5(9), 1850-1861.

Citation: Pannell, D.J. (2009). The cost of errors in prioritising projects, INFFER Working Paper 0903, University of Western Australia. http://dpannell.fnas.uwa.edu.au/dp0903.htm

