Thomas Cool
Scheveningen, October 22 2001
cool / AT /,

JEL A0, C0



Metaprognostica arises when one studies forecasting behaviour. Deceitful forecasts can mean profits, and hence deception is a real problem for market forecasts. For governments, Public Choice studies the behaviour of governmental institutions and officials under the hypothesis of economic ‘selfish’ rationality, and hence there can be deceitful forecasts in government too. For academic forecasters who live by the scientific code, metaprognostics is an exercise in humour. Traditional prediction theory, which assumes no deceit, is a rich source of possible avenues for deceit, and it also provides a benchmark for metaprognostics. Metaprognostics provides an additional argument for the proposition that a democracy is well served by an Economic Supreme Court.


Metaprognostica arises when one studies forecasting behaviour. A key objective in metaprognostics is to see whether a forecaster had certain preferences or biases and whether the value structure can be recovered from the forecast ‘errors’.

For forecasting firms operating in the market place, deceitful predictions can mean bigger profits. Hence, there are incentives for deceit. For governments, Public Choice studies the behaviour of governmental institutions and officials while assuming the hypothesis of economic ‘selfish’ rationality. Obviously, there are incentives that can lead to deceit in government too. Hence, assuming no deceit can be too simple.

In prognostics, the intentions are given and the aim is to generate the prognosis; in metaprognostics the prognosis is given and the aim is to recover the intentions. Both prognostics and metaprognostics have a keen eye on the relevant aspects of prognostic behaviour, but the exercise becomes metaprognostics when one studies the ‘whole situation’ and wonders what the forecasters are actually doing. Of course, when the results of that study are used for forecasting again, then metaprognostica collapses into prognostica again. The label ‘meta’ thus has a limited and relative meaning. But it is useful to employ the term, since it highlights the objective to think about forecasting.

I first discussed the issue of metaprognostics in a draft written in October 1983 at the Dutch Central Planning Bureau (CPB). I thank the then director of the CPB, Van den Beld, for encouraging comments, in particular on the question whether the consumer forecasts of the Bureau actually affected consumer outcomes. Similar thanks go to my colleague at the time, Sybesma, for the fun discussion on possible sources of deception. A prime conclusion of that first draft was ‘that metaprognostics is even more complex than normal prediction theory’. Forecast errors can also be caused by bad technique, and one has to distinguish plain errors from efforts of ‘sentiment management’ or willful deception. As it is one of the conventions at the CPB that publications should have a quantitative part, i.e. not just formulas but also empirical relevance, there was a data problem in 1983 for the model that I had developed. The draft went into a drawer, and I have not considered it since.

Now, however, in 2001, I want to discuss the recent paper by the present CPB director Don (2001), "Forecasting in macroeconomics: a practitioner’s view". I am no longer at the CPB and can freely use the logical framework developed earlier. My text comes from a light editing of my 1983 draft with relatively few new additions, though the section on Don’s paper of course is wholly new. I will not present all formulas, since this leads too far for my present purposes. With Don, I agree that the CPB is a key place for contemplation on the issue of forecasting, and that it is useful that the issue receives attention in a wider circle. Don gives a very good presentation from the standard point of view. By ‘the standard’ I mean in particular Klein (1971), "An essay on the theory of economic prediction", not explicitly referred to by Don, but his article covers the same material for the purposes of the following discussion. The point is that there is also another point of view.
The key issue is that forecasting is a human enterprise. Errors can be mathematically attributed to variables and models, but it is people who selected the variables and created the model. The human influence on forecasts and their errors is much greater than one might think when reading Don’s paper. Scientific safeguards for proper prediction are a key issue. My 1983 analysis eventually leads to my position in Cool (2000), that a democracy is well advised to create a constitutional Economic Supreme Court with an explicit scientific statute.

It is proper and useful to also start with the quote of Van der Geest (1982) who triggered my 1983 draft on metaprognostics:

"It is clear that in economic policy a much larger weight is attributed to the exact outcomes of economic forecasts than is justified by the data. Any economist who knows how data are collected, how economic models are constructed and how forecasts are generated with these, also knows the large uncertainties to which these forecasts are subject. Even though the Central Planning Bureau repeatedly points at the error margins of its forecasts, the presentation of the results nevertheless often suggests a large degree of exactness. Nobody is craving for the forecast that the budget deficit will be between 7% and 12%; only when we hear that it will be 9.75% do we believe ourselves to be informed. Numbers suggest objectivity and science. Where exact numbers are presented, it is quickly presumed that the accompanying recommendations will be just as well founded. Nothing is less true. (...) Hence, there is sufficient cause for a healthy distrust of numbers. And this holds even more strongly for economic forecasts. Unforeseen external circumstances make the forecasts uncertain; errors in the model specifications, in addition, make them unreliable. But the suppliers of the forecasts can also consciously manipulate the information to serve their own goals. Politicians, officials, ministries and the Planning Bureau namely all play their own little game. Therefore you are warned. Don’t let your common sense be outnumbered." (my translation)

In a typical (ex ante) prediction situation, the result r is not known, especially since the prediction p is intended as its substitute; hence the common notions of lying and deceit don’t fully apply, since these assume something known to be true. We must therefore use the complex notion of the ‘true prediction’ t, so that the prediction error e = p - r is composed of the truthfulness error f = t - r and the deceit d = p - t, giving e = f + d. The distinction is difficult but useful. A Planning Agency (henceforth Agency) in some arbitrary country, say Nadirland, shouldn’t be judged only by its prediction error performance, for it may be that its predictions are solely based on its easy wishes and biases. Prediction theory - the standard part of metaprognostics - investigates the case with d = 0 or p = t. It is more general to allow for d ≠ 0.
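As a minimal numerical sketch of the decomposition (all numbers invented, and the function name `decompose_error` is only for illustration), the identity e = f + d can be checked directly:

```python
# Hypothetical illustration of the error decomposition e = f + d.
# p = published prediction, t = 'true prediction', r = realisation.

def decompose_error(p: float, t: float, r: float) -> dict:
    """Split the prediction error e = p - r into the
    truthfulness error f = t - r and the deceit d = p - t."""
    e = p - r          # observed prediction error
    f = t - r          # honest ('true prediction') error
    d = p - t          # deceit component
    assert abs(e - (f + d)) < 1e-12   # identity e = f + d
    return {"e": e, "f": f, "d": d}

# Suppose the published forecast is 2.5% growth, the Agency's honest
# model forecast is 2.0%, and the realisation is 1.5% (all made up).
parts = decompose_error(p=2.5, t=2.0, r=1.5)
print(parts)   # {'e': 1.0, 'f': 0.5, 'd': 0.5}
```

The point of the sketch is only that an observed error of 1.0 says nothing by itself about how it splits between f and d; that split requires knowledge of t.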

Standard theory provides the error analysis with a re-estimated model, ex ante (with the prediction conditional on predicted predetermineds) and ex post (with the prediction based on realised predetermineds). It is more general to include deception, where the error is a so-called ‘error’. Deceit may be decomposed into deceit in the model and deceit in the predetermineds. Deceit patterns are to be determined with reference to the ex ante situation, without re-estimation, and not ex post.

Note that d = p - t is a specific kind of deceit. It can hardly be assumed that when the Agency predicts p, everybody will then believe p. In general what people believe is a function b(p), with credibility gap g(p) = p - b(p). Interestingly, a true prediction need not be believed, since it may be that g(t) = t - b(t) ≠ 0, and we may define the ‘actual deceit’ as d(p) = b(p) - b(t). Theil (1958:156) mentions the possibility that the Bureau may adapt its p because of the pressure of a large g(p).

Matters can be more complex when the Agency develops an estimate b*(p) which of course differs from the true reaction function. Then the deceit can be decomposed in d = (p - b*(p)) + (b*(p) - b*(t)) + (b*(t) - t), with the ‘intended deceit’ given as (b*(p) - b*(t)) and the ‘estimated credibility gaps’ (p - b*(p)) and (b*(t) - t).
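The three-part decomposition telescopes by construction, which a small sketch can verify; the reaction function `b_star` below is a made-up assumption for illustration, not anything estimated here:

```python
# Sketch of the decomposition d = (p - b*(p)) + (b*(p) - b*(t)) + (b*(t) - t).

def b_star(x: float) -> float:
    """Hypothetical estimated belief function: the public discounts
    announcements toward a prior of 1.0 with weight 0.6 (invented)."""
    return 0.6 * x + 0.4 * 1.0

p, t = 3.0, 2.0          # published and true prediction (made up)
d = p - t                # total deceit

gap_p  = p - b_star(p)            # estimated credibility gap at p
intent = b_star(p) - b_star(t)    # 'intended deceit'
gap_t  = b_star(t) - t            # estimated credibility gap at t

# The three parts telescope back to the total deceit d.
assert abs(d - (gap_p + intent + gap_t)) < 1e-9
print(gap_p, intent, gap_t)
```

Note that with this discounting b_star the ‘intended deceit’ is smaller than d itself: part of the announced exaggeration is lost in the credibility gaps.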

Prediction theory

Before continuing with metaprognostics, it is useful to recall prediction theory. Why do we predict? The proper answer is given by the theory of decision making under uncertainty, with the standard statistical framework given by, for example, Ferguson (1967). In general, there is a loss function, and the decision maker adopts a criterion, for example minimax loss. Note also my new definition of risk, see Cool (1999, 2001). Since decision making under uncertainty is so basic, this new definition is also included as chapters in Cool (2000) and Cool (2001).

A classic Neyman and Pearson (1933) quote, taken from McCloskey (1997), reminds us that determination of the loss function is crucial:

"Is it more serious to convict an innocent man or to acquit a guilty? That will depend on the consequences of the error; is the punishment death or fine; what is the danger to the community of released criminals; what are the current ethical views on punishment? From the point of view of mathematical theory all that we can do is to show how the risk of errors can be controlled and minimized. The use of these statistical tools in any given case, in determining just how the balance should be struck, must be left to the investigator."

For a national government, the loss function would be a Social Welfare Function (SWF). Interestingly, since 1951 Kenneth Arrow’s Impossibility Theorem on a SWF generating mechanism has caused quite some confusion about whether such a SWF would exist. Cool (2001) resolves that confusion, and shows that the Agency could and should develop a SWF for its Principal, the national government. Suppose that the Agency has a solid reputation, that its model and forecast are adopted by the national government, and that the government has some influence, e.g. on its own outlays. Then the forecast will be rather accurate (especially judged ex post), but if the model is wrong the economy may well drop to a suboptimal state. Thus, to judge the situation, we consider not only forecast performance but also co-ordination performance. To do so, we need a SWF. Analysis of forecast errors alone is not sufficient.
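A toy sketch of the decision framework, with wholly invented losses echoing the Neyman-Pearson example, shows how a minimax criterion picks an action from a loss table:

```python
# Toy decision-under-uncertainty example in the spirit of the standard
# statistical framework: a loss table over actions and states, and the
# minimax rule. The loss numbers are invented for illustration only.

losses = {
    "convict": {"guilty": 0, "innocent": 10},  # convicting the innocent is costly
    "acquit":  {"guilty": 4, "innocent": 0},   # releasing the guilty is costly too
}

def minimax_action(loss_table):
    """Pick the action whose worst-case loss across states is smallest."""
    return min(loss_table, key=lambda a: max(loss_table[a].values()))

print(minimax_action(losses))   # 'acquit': worst case 4 versus worst case 10
```

As the quote stresses, the mathematics only processes the loss table; choosing the numbers in it, i.e. striking the balance, is exactly what is left to the investigator, and for a government that table would be the SWF.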

Usefulness of metaprognostics

Economic models generally specify what people believe or expect, since a definition of dynamic equilibrium is that expectations are fulfilled (instead of market clearing in statics). In that sense, the r, p and t variables themselves often represent beliefs. This would hold as well when there were no co-ordinated forecasts. However, when an Agency comes along and presents a prediction, the beliefs are affected. A cynical point of view is that people are generally misdirected, more deceived than told the truth - or at least they tend to think that they are. It is widely believed that nobody can or ought to be trusted, not even the government.

Critical journalist Weaver (1994:97) has an excellent book with a balanced view on the role of the media in national debate. He writes on the 1979 recession, so dramatic in American history - where I personally doubt whether Paul Volcker really needed to lie:

"Volcker lied because the news genre gave him no real alternative. He was a newsmaker, and his actions with respect to monetary policy would therefore be covered according to the crisis-and-emergency response scenario. Had he told the truth and admitted he was engineering a recessionary tightening of the money supply, the headlines would have quickly destroyed the acquiescence his policy needed to succeed. The press would have told a story, not of a dramatic anti-inflation policy demarche, but of an infamous pro-recession policy."

Even science, that rather sacrosanct branch of activity with its aim and claim of truthfulness and reliability, must be doubted, especially when lined up with some branch of government. In this respect, economic forecasting will be no different from other forms of data publishing. In fact, the case of economic forecasting will generally be subject to stronger doubts than the so-called statistics of the past. While many will adhere to the view that most people are generally misled or misinformed, only a few will arrive at the logical conclusion that this concerns themselves too; but in the case of forecasts this conclusion is more easily reached, and almost everybody believes himself or herself misled or misinformed.

Take the case of Cyril Burt, who apparently adapted his IQ ‘data’ to fit his theories. One ‘common sense’ notion is that his case is the exception that proves the rule. Another ‘common sense’ notion, however, is that a case like Burt’s is just the tip of an iceberg that looms below, and that in its vastness also contains the simple human errors and inadequacies; secondly, only fools allow themselves to be cheated over and over again. On a priori grounds, both approaches are equally likely; on empirical grounds deceit may occur less frequently, but then it can be a risky affair. With lying and cheating such recurrent and interesting phenomena, it is only natural that theories be developed that explain them and give guidance for practice. Indeed, in the case of economic forecasts, it is towards big governmentally operated planning and forecasting agencies that a wide-spread and deep-running suspicion is directed, so that it is useful to develop a scientific, theoretical if not empirical, basis for it.

The Van der Geest quotation above highlights a point. It is not only an example of the (professional) distrust of numbers; it must also be noted that the utterance of such distrust apparently is required. We do not feel bored by its utterance, even though we have been educated ‘not to trust numbers’. We simply tend to forget what we learned. Hence, daily practice is rather schizophrenic, in that we both trust and don’t trust numbers. Numbers, and in particular forecasts, are subject to uncertainty, and thus it is correct to be critical of them; but then this is forgotten for convenience or perhaps even by psychological necessity; and hence it is not boring to be reminded of our too faithful trust in numbers.

Socio-political circumstances affect the situation. The schizophrenia can also be expressed as indifference, or the neglect of uncertainty. When a governmental agency publishes a forecast, a common reaction in The Netherlands is indifference, which is quite understandable, since an informed reaction is difficult and costly, and anybody who seriously investigates the normally complex problem at hand will be subjected to great doubts. Indifference is also a common reaction when the realisations are in and the deviations from the predictions can be determined. Ideally, the predictions would be subject to serious questioning, and the deviations would need to be explained. The US Congress indeed has hearings on the economy, and that indeed is a better situation than in The Netherlands. The indifference solution to the schizophrenia seems rather destructive to the practice of prediction, and may foster bad and lazy pragmatic behaviour. Anybody with a heart for economic debate would rather see a situation in which the schizophrenia is out in the open.

Note that theory suggests a scientific solution to this schizophrenia. As textbook standards have it, we put 95% units of chance (however measured) on the realisation occurring within some interval, and 5% on its falling outside of it. Truly, when politicians and the public won’t learn to think statistically, they may indeed end up close to real schizophrenia. However, a 95% versus 5% statistical approach is not sufficient for a solution of the issues involved in prediction.

The basic issue is what the forecasting Agency aims at with its forecasts. When no coherent theory on this subject is developed - while statistics functions as an excuse for error and uncertainty - it may well be that the trust or distrust of the Agency’s predictions remains a wholly emotional matter - which may actually also prevent politicians and the public from starting to study economics and statistics at all. Metaprognostics helps to emphasise the need for a Social Welfare Function, and the usefulness of conducting the national discussion within that framework.

As said, it does not help to presume that the Agency is scientific, and adopt d = 0 as an axiom. A critical mind also allows for the alternative, and for tests on that assumption.

Some fundamental issues

(i) Economic theory itself poses the hypothesis of the selfish individual. People can have the desire to have people believe something, whether that something is true or not. While this is well understood in marketing, it is commonly neglected in other parts of economics. Metaprognostics assumes that it may be the case in forecasting. The intentions of the forecaster, the prey which metaprognostica is after, should be compared with the unknown parameters in economic relationships, which cannot be observed directly either. Metaprognostica is thus primarily concerned with estimation, not prediction. Of course, when the intentions are recovered, then a forecast can be made of how forecasters will behave, etcetera, leading to levels of complexity, or circular reference with fixed points. Also, the basic notion ‘that people have the desire to have people believe something’ should rather be interpreted as ‘to affect a distribution of beliefs’ than as ‘to convince everyone of a number’. This is game theory, with perhaps a Bayesian approach.

(ii) A question concerns restrictions. Someone who doesn’t feel restricted can lie and cheat as much as needed to reach his or her goal. But in a dynamic situation social relations grow tense once the reputation of a liar has been established. The Liar Paradox shows that such a reputation cannot be revoked easily - namely, the statement "I did lie, but won’t do it again" sounds very much like "I lie sometimes" (and then applied to itself). In general, internalised and practiced moral commitments force people to be satisfied with ‘less than optimal’ results - meaning that the goal is not sacrosanct, and that the way in which results are achieved enters the utility function too. Cohon (1978) is an example of multicriteria optimising. There is an interchange between objectives and restrictions. Moral restrictions, as distinct from natural restrictions, can be regarded as objectives set at a certain level. Satisficing behaviour can be seen similarly.

(iii) The desire to have people believe something may be either a conscious or a subconscious happenstance. This holds in general for the whole optimising setting. It is also possible that something is consciously regarded as an objective which is only a restriction on some other subconscious objective. An example is that ‘good predictions’ are the official goal of some Agency, but the prediction behaviour is only a restriction on other objectives. For example, the publication of scenarios may be more in the service of the other bureaucracies, but it is also easier and less demanding than proper prediction, while prediction would be the true test for real science that tries to find out how reality is.

(iv) Prediction theory itself has already been at pains to determine good prediction methods, and thus to eliminate value judgements and deceit from predictions and prediction analysis. This also means that standard prediction theory is very useful for the mirror view, in which these disruptive causes are located and their effects measured. Simply reading paragraphs on prediction theory and trying to imagine how they can be applied with deceit in mind already gives a good introduction to metaprognostica. An issue not discussed in standard prediction theory, however, is that one may sometimes need to tell a lie in order to have suspicious people believe something; and obviously it would be irrational not to tell the lie if the result is wanted.

(v) Comparing just p and r may give a misleading picture. A single good prediction may be just a chance effect. A good average prediction record may also come from deceit turned sour. Something presented as a ‘prediction’ by some Agency may not be a true prediction in the sense of Klein, i.e. constructed by objective methods, since it may be just a prophecy. The correct criterion follows from adopting the viewpoint of the historian: what is documented is allowed, the remainder isn’t. Metaprognostics needs to see the models and data, and there is the basic econometric notion that economic insights must be objectified, i.e. put into the form of an empirical model, so that everybody has the chance to give his or her life more meaning by studying the model (and criticising it). The true prediction t is always the ‘true prediction conditional on the model’ t(m). There is a distinction between the model that the Agency claims to have used, the model that it really has used, and the model that it believes to be the true model. For example, the claim is that a macro model is used, but in fact a meso model has been used to determine autonomous adjustments, while the forecaster uses certain notions that are not explicit in any model. A Keynesian forecaster may for example give a monetarist explanation to suit the current fashion of the power elite, while actually using a timeseries extrapolation with some additions of his own. In some cases, the Agency may also be naïve and believe its own predictions, but then be less conscious of what it is doing. Of course, recovery of hidden intentions becomes more complicated when there is no officially published complete model, or when what is published does not meet the standards for proper prediction. Metaprognostics of course becomes very complicated since official models change over time. The choice and change of the model may also be inspired by certain objectives.

(vi) Choosing the model ‘with the best fit and predictive power’ seems convincing, but there are some choices involved here. Estimation of parameters using a sample is something else than estimation of parameters using the error of out-of-sample predictions. Then there are the various estimation techniques. As Theil (1958) observed, the estimation technique must be consistent with the objectives of the Agency, but these may be ambiguous. In the long run, the true model generates t = r, so that all predictive error comes from deceit. But for the next 100 years we need not expect this, and there are serious problems that involve human choice. Choices must be made, by fallible human beings. The freedom of choice indeed gives much room for dispute between different schools with their different paradigms, especially when different variables are included and when simple predictive power is rated lower than conformity to paradigm. And then there remains the fundamental critique that ‘purely mechanical’ estimation neglects human experience that can be very valuable - and one paradigm may well be close to the truth. Note that this position does not justify a belief in total anarchy, however much one might desire it. The empirical principle of ‘best fit’ still applies, though in a limited and practical sense. While communication between research communities may be low, each will aspire to better models and fit over time, and thus the ingrained empirical approach should result in convergence in the long run. It remains an assumption, though, that the quality of the data also increases over time - vide the recent revisions of US inflation and the discussion about the measurement of productivity.

(vii) Prediction theory already recognises ‘expert opinion’. Ministries present their budget plans, and the forecaster has to form an opinion whether these will be truthfully enacted. The Agency may give a prediction ‘conditional on the budget’, but there still may be some selection as to what to believe. As Theil (1958:541) stated: "the level of government expenditure is badly predicted; and this in spite of the fact that the forecaster is a government agency!" The Agency may also recognise that the model lacks certain variables or interactions that are important - such as, for example, the influence of the prediction on the result, see Van den Beld & Russchen (1965:14). The Agency may be so bold as to plug this in without proper testing. Another example is collective wage bargaining generating data on new wages and working hours, so that some parameters should be changed, but one hasn’t the time to do all that. Or there are all kinds of other qualitative concerns. It is also possible that the model is altogether statistically inferior but acceptable to the Agency on theoretical if not ideological grounds. For example, the 1980 CPB "Vintaf model" used a Leontief technology and thus denied any substitution between labour and capital other than by the addition and destruction of capital vintages. Any substitution was forced into this model, the model ‘worked’, and this model was popular for enhancing the policy of wage restraint by the Dutch government. Only later did it turn out that the model did not ‘work’ anymore, and it was silently replaced - though the policy wasn’t. Another example has to do with the way the model has been formulated. I once plugged a huge autonomous employment increase into a model, to see what would happen. I expected a similar number of layoffs within a few years, but this did not happen, since the model’s reaction functions simply assumed a particular convention of model use.
It was a lesson well learned, but it also means that human knowledge in forecasting may be larger than some think. Another example is that the ‘Europe 1992’ policy initiative by the European Commission presumed an increase in productivity, but was vague about the implied loss of labour, for which an educated guess was then required. Finally, there is the defence for the use of exogenes. Basically, one should use lagged variables and physical exogenes only. Economic exogenes are introduced only because it is a shame not to use a priori insights. Expert opinion is quite valid from a metaprognostic point of view. It is sometimes said that the model primarily functions as a tool to make these opinions more consistent. Prediction is not a mechanistic operation. However, all this also allows for the possibility of deceit. The metaprognoser should ask for some evidence of expertise, and must check whether the use of a priori knowledge indeed is intersubjective, and whether there is indeed sufficient evidence to back up the exogenous nature of the ‘exogenes’. Of course, there is always the selection of the experts, and the question what procedures they use to vote when they have different opinions - see Cool (2001) on Voting Theory.

(viii) One criterion for a predictor is the variance. For example with inflation, at one time the deceit may be lower inflation and at another time higher inflation, so that the average may still be on target. In the linear model, E(e) = 0, so that E(d) = 0 iff E(f) = 0, but this still leaves room for a forecast with a larger than minimal variance. The Agency may think that the increase in the confidence interval is acceptable - which it does not publish anyway.
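A small simulation illustrates the point: deceit that alternates in sign has E(d) = 0 and so escapes a mean-bias test, while it still inflates the error variance. The processes below are invented for illustration only:

```python
# Simulation: sign-alternating deceit averages out (E(d) = 0), so a
# mean test on e = f + d sees nothing, while the variance is inflated.
import random
import statistics

random.seed(1)
n = 10_000
f = [random.gauss(0.0, 1.0) for _ in range(n)]        # honest error, E(f) = 0
d = [1.5 if i % 2 == 0 else -1.5 for i in range(n)]   # alternating deceit, mean 0
e = [fi + di for fi, di in zip(f, d)]                 # observed error e = f + d

print(round(statistics.mean(e), 2))      # ~0: the mean test sees nothing
print(round(statistics.variance(f), 2))  # ~1.0
print(round(statistics.variance(e), 2))  # ~3.25, i.e. 1.0 + 1.5**2: inflated
```

Only a variance (or interval) criterion, not an average-bias criterion, catches this kind of deceit; which is exactly why unpublished confidence intervals matter.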

(ix) To establish deceit, metaprognostics should be aware of the interaction with the creation of the data. A forecast of low employment may be followed by a statistical redefinition of unemployment. The awareness of the possibility of deceit should not lead to paranoia, though.

(x) These notions hold mutatis mutandis for policy analysis (prediction of policy effects) as distinct from pure forecasting. ‘What if’ scenarios are distinguished from forecasting since a policy need not be adopted and the whole matter may remain fictitious. However, there can be deceit in policy analysis as well, and though r need not be observed, there still is the deceit d = p - t. Policy analysis can be a rich data source for metaprognostic research.

(xi) Government policy making does not yet use techniques of maximising social welfare or, for example, goal programming. There generally is political bargaining based on rules of thumb. In a sense, bargaining may be the only proper way to find out what the coalition in power really wants. However, part of the fear may be that the impact of deceit could be greater if an Agency is asked to determine optimal policy by using its model. Metaprognostica may help clarify the issue of social welfare maximisation.

(xii) It would be a good policy of a truth-aspiring Agency to prevent any misgivings about its intentions by being clearer about the limitations of its predictions. The publication of confidence intervals for certain key variables nowadays (thus in 1983) no longer is a too restrictive requirement. If these approaches cannot reduce the distrust of a governmental Agency, then society might create different Agencies in order to benefit from competition. But if there are still few Agencies, there still is the problem of collusion. Other forms of regulation could be needed as well.

(xiii) A mentioned issue is how predictions affect the outcome, r = r(p). An example of ‘predictive self-reference’ can be a wage forecast that affects the wage bargaining process. This phenomenon actually should be part of the model, and the predictions should follow as ‘fixed points’. For example, the model relation used to be p = h(p′) for a function h of other variables p′; but the better relation could be p = αp + (1 - α) h(p′), with α the degree of self-reference. The fixed point here easily solves as a linear transform, which makes the issue less exciting. But there can be other formats, for example with heteroscedastic error. Remember that the whole issue of prediction is to guide behaviour. The instrument for guidance here is information - but since behaviour is affected by information, this should be in the model. It is conceivable that the Agency thinks that its forecast does not affect reality, but that would be a conjecture that would need to be tested, and the Agency might suffer from the dogma that self-reference is not investigated.
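The fixed-point idea can be sketched by simple iteration; the reaction function below is a hypothetical linear wage example (all numbers made up), consistent with the remark that the linear case solves easily:

```python
# Fixed-point iteration for a self-referential forecast: publish p,
# the outcome reacts as r(p), and a consistent forecast has r(p) = p.

def consistent_forecast(react, p0: float = 0.0,
                        tol: float = 1e-10, max_iter: int = 10_000) -> float:
    """Iterate p_{k+1} = react(p_k) until the published forecast
    equals the outcome it induces (a fixed point of r)."""
    p = p0
    for _ in range(max_iter):
        p_next = react(p)
        if abs(p_next - p) < tol:
            return p_next
        p = p_next
    raise RuntimeError("no fixed point found within max_iter")

# Hypothetical wage example: the announced wage growth p nudges
# bargaining, so the realised growth is r(p) = 1.0 + 0.5 * p.
p_star = consistent_forecast(lambda p: 1.0 + 0.5 * p)
print(round(p_star, 6))   # 2.0, since p = 1 + 0.5 p  =>  p = 2
```

In the linear case the fixed point is just the closed-form solution, as the paper notes; the iteration only becomes interesting for nonlinear reaction functions or ones with heteroscedastic error.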

Sociological aspects

Solow is reported to have said that when economists refer to sociology, there is the danger of bad economics. We may have to run that risk. 

(i) Above, the ‘true prediction’ t has the meaning of objective, scientific and relative to a model. An alternative is the introspective concept that p would be ‘true’ if the Agency would ‘feel honest’ about it - giving p = t. Asking for proof of the forecast would then be close to questioning the Agency’s integrity. This notion must be rejected, since, apart from the problem of the measurement of honesty, there still is the possibility that the Agency is not entirely informed about its own prejudices. This is not merely a matter of model selection, since the prejudice can remain when the proper model is shown and still rejected. A biased person can ‘feel honest’ about his or her prejudices. The motivation of ‘honesty’ need not be the only one, since there can be other desires, such as adoption of the forecast by the government, and the Agency may end up with a ‘balance’ that metaprognostics shows to be biased. This also raises the methodological problem that ‘honesty’ may not necessarily be established by looking at the track record. We have already identified the problem that a track record may look good for the wrong reasons.

(ii) A conjecture is that true predictions will be offered when the Agency desires to be regarded in the long run as truthful and reliable. This idea extends earlier remarks. The idea is here: (a) That this desire results in the selection of the best models. (b) That these models have the best prediction performance. (c) That people look at this performance. (d) That they judge truthfulness on the basis of performance, where truthfulness would predict the reliability of the new forecast. But these assumptions generally are too strong. We have already seen that the criterion of ‘best’ is ambiguous. Performance measurement is a difficult issue, as Don (2001) shows, so it may be vague what ‘best’ is, while people have other considerations, such as reputation based upon sometimes curious causes - as I wrote in 1983: the emanation of wisdom from the appearance of the Agency director. Social psychology is relevant here, vide Aronson (1992). The above conjecture would be especially strong when we consider the market place for predictions, where predictions are sold by producers and bought by customers. But it has been observed that investment banks rather join the crowd instead of being the ‘odd one out’. So the issue remains ambiguous.

(iii) The same holds for the variant that ‘if the Agency wants to be regarded as truthful in the long run, then it will show an improving prediction performance’. Indeed, we may hope for increased efforts and the advancement of science. But this conjecture does not account for changing economic circumstances, which may make prediction more difficult. Nature is often cruel, gives the forecaster many surprises, and naturally does so also as time proceeds. The contrapositive of the above is: "If it does not show an improving prediction performance, then the Agency does not want to be regarded as truthful in the long run" - and this clearly will not be accepted, though it is logically equivalent to the conjecture. In any case, the relativistic attitude does not give an absolute measure, which is what we would really want.

(iv) Rather than considering the whole population, we might also be satisfied with a truthful image within a circle of knowledgeable colleagues. In practice, however, this group may form an amorphous, powerless and hence negligible tribe. An ideal is perhaps: young forecasters learn the trade from simple models, and the accuracy of their predictions has a direct impact on their well-being; the most successful are hired by bigger forecasting firms, which are fiercely competitive, outbidding each other with ever smaller confidence intervals, and publishing all their models and data to impress prospective customers. In a sense this holds for life in general, since life concerns uncertainty, and the highest rewards should follow from accurate predictions. However, stock market predictions seem like a mess, and the market for national predictions is quite small.

On Don’s view (2001)

Don’s paper (giving his view) assumes d = 0, and hence it seems as if metaprognostics is not relevant for it. However, the metaprognostic approach is to question that assumption, and to look for evidence and proof. We can find various places in the paper where metaprognostic issues can be raised. Don’s paper has a ‘technical’ flavour, since he discusses variables and exogenous inputs as sources of error, and not human decisions. He uses the term ‘macro-economics’, while one would prefer the term Political Economy for issues that deal with the management of the state. He also performs an error analysis for the short and medium term, but the longer term is treated with much less space and rigour, even though it is more crucial, since it has more room for policy making and d ≠ 0. Given that forecasting is a human enterprise, the economic competence of forecasters is important, and it is even more important for the longer-run forecasts. In my view, Don mentions the human element but does not give it the weight that it deserves. The ‘technical’ issues are important, as we will see below, but it would be more balanced if we also consider the human angle. Note that my use of the word ‘technical’ may in itself be misleading, since it may suggest that such a distinction could be made while neglecting the human angle. On the contrary, the human angle is also important for ‘technique’. An analogy: when patients die, one may determine the ‘technical’ cause as salmonella poisoning; but once it has been established that salmonella can kill and safety precautions are in place, a death raises the next question of which person failed to execute a precaution properly, or whether the precautionary regulation itself is adequate.

(a) My main point: the idea of an Economic Supreme Court. (a1) Don:155 correctly links forecasting to decision making under uncertainty. Don:161 and 171 use the word ‘risk’. Critically, however, it must be noted that the CPB does not really apply decision making under uncertainty. Such an approach would require a Social Welfare Function, and the optimal policy would be derived at the CPB and communicated to the ministries. The model would also contain an endogenous government, forecasting how the ministries will react to the forecast. It is precisely this analytical framework that causes me to conclude that a democracy requires a constitutional Economic Supreme Court; see Cool (2000) for its definition.
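
For clarity, the following sketch shows - with invented scenarios, probabilities and welfare numbers, since no actual SWF exists, which is precisely the point - what decision making under uncertainty with a Social Welfare Function would amount to: choose the policy with maximal expected social welfare.

```python
# Minimal sketch, with invented numbers, of decision making under
# uncertainty: choose the policy that maximises expected social welfare
# over probability-weighted scenarios.  The welfare numbers and the
# scenario probabilities are hypothetical, not CPB's.

scenarios = {"low": 0.3, "base": 0.5, "high": 0.2}   # P(scenario), assumed

# welfare[policy][scenario]: hypothetical social-welfare outcomes
welfare = {
    "A": {"low": 1.0, "base": 2.0, "high": 3.0},
    "B": {"low": 1.5, "base": 1.8, "high": 2.2},
}

def expected_welfare(policy):
    # expected value of the SWF outcome under the scenario distribution
    return sum(prob * welfare[policy][s] for s, prob in scenarios.items())

best = max(welfare, key=expected_welfare)
print(best, expected_welfare(best))
```

The substantive difficulty is of course not this arithmetic but the specification of the SWF itself, which the CPB has not developed.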

(a2) Don:158: "An unconditional forecast would necessarily imply an assumption on the behaviour of the decision maker, and thus obscure the decisions problem." The first statement is true, the second is not. My impression is that clients understand unconditional forecasts better, including a forecast of their own decisions, and I see this confirmed by what Don:174 says himself: "Policymakers have a strong tendency to choose a single scenario". Of course they would have all model information and be free to make up their own minds. Don:158 also mentions that the government requires a baseline forecast as a starting point for the policy making process. However, an Economic Supreme Court, with independent constitutional power, could well provide the unconditional forecast and the mentioned baseline as a variant.

(a3) Don:169-171 presents a numerical error analysis of the short and medium run, and concludes: "For the very open Dutch economy, it comes as no surprise that forecasting the external variables is all important to the quality of the domestic forecast." However, the error on the external exogenous variables should then be 100%. That Holland has an open economy is not relevant. What is important is that the error in forecasting the government should be reduced. If the government says it will do A but it does B, then the CPB conditional forecast shows an error. With a conditional government forecast, the error can stem from deception, or the error can be zero if the government imposes the forecasted value (but then the economy can be suboptimal). With an unconditional and independent forecast, democracy is properly informed.
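
The arithmetic of the point can be illustrated with a hypothetical linear model and invented numbers: under a conditional forecast, the observed error mixes the government’s deviation from its announced policy with the genuine model error.

```python
# Hypothetical illustration: a conditional forecast assumes a government
# instrument value g_assumed; if the government in fact chooses g_actual,
# the observed forecast error mixes the policy deviation with genuine
# model error.  The linear model and all numbers are invented.

multiplier = 0.8          # assumed effect of instrument g on outcome y
g_assumed, g_actual = 1.0, 2.0
model_error = -0.1        # the model's own genuine error (assumed)

y_forecast = 2.0 + multiplier * g_assumed            # conditional forecast
y_realised = 2.0 + multiplier * g_actual + model_error

observed_error = y_realised - y_forecast
policy_part = multiplier * (g_actual - g_assumed)    # error from deviation

print(observed_error, policy_part, observed_error - policy_part)
```

In the sketch the bulk of the observed error is the policy deviation, not the model: blaming ‘external variables’ would misattribute it.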

(b) The human element is important for the idea of an Economic Supreme Court. Note that there is a distinction between academics, who have all the time and can resort to saying that they lack the knowledge, and policy advisers, who have to come up with results at short notice. A forecaster or policy analyst then is like a judge, who has to balance all kinds of aspects, and who has to look deep into his or her mind for what the actual decision is. Models are tools, and the decisions on the forecasts are made by humans. Don seems to downplay that aspect, and I think that we should emphasise it. (b1) Don:155 correctly remarks: "As loss functions are usually unknown and certainty equivalence is unlikely to prevail, (...)" He also "(...) attacks the relevance of common statistical criteria for forecast quality and (...) stresses three non-statistical criteria: logical coherence, economic coherence, and stability." Well, since the CPB does not know the loss function and hence does not apply decision making under uncertainty, it is obvious that the statistical criteria are less relevant. The CPB supports the ministries in making their own decisions by providing model and forecast information. This only means that the human element becomes even more important. Don:167 concludes: "What the decision maker needs, is the distribution of the forecast error which includes the effects of model uncertainty. Even in a Bayesian setting that is a highly impractical demand." Ergo, an even larger human element.

(b2) Don:156 gives another explicit example of the human element: "communicating (...) sometimes on subjective probability assessments". Similarly, Don:168 "Usually more information is available and used, be it informally, in model selection and parameter choice. This information ranges from tested economic theory to a priori insights in what constitutes a plausible parameter value and what does not. Also, non-model information tends to be used in the actual preparation of a forecast."

(b3) Clients are very dependent on the advisor, as Don:157 remarks: "The difference between conditional and unconditional forecasts (...) is often ignored", and Don:167: "I am afraid that the clients are bound to misinterpret the reported error margins". Hence a larger reliance on the human properties of the forecaster. (And unconditional forecasts are easier.)

(b4) Don:169, footnote 14, mentions how expert opinion has reduced forecast error - where there presumably is an independent definition of who is an ‘expert’ (otherwise it would be circular). Don:172 on the long run scenarios: "These two sets rather informally attempt to capture a reasonable bandwidth (...)" - and ‘informal’ and ‘reasonable’ clearly depend upon human expertise.

(b5) Don:173: "I claim that the estimated policy effects are more reliable than the forecasts." This is correct, but as explained above, this type of policy advice also requires expert quality, even though there is a model that is run on a computer.

(c) Forecast and policy advice errors by the CPB directorate have been made by ‘moral hazard’ (i.e. errors by humans rather than natural hazard). Bad management of human capital at CPB has resulted in lower quality and error, and d ≠ 0. Here I make a clear distinction between the CPB directorate since 1989 and the institute with its longer tradition stemming from Jan Tinbergen. First I mention three other authors, and then give my own critique. (c1) The following is a useful point of reference before 1989. CPB by law has to prepare the Central Economic Plan that is established by the government: CPB prepares, but the cabinet decides. Van Sinderen (2001:738), referring to Rutten (1993:212), recalls that the policy making department at the ministry of Economic Affairs around 1980 no longer adopted the CPB forecasts, since they were considered to be too optimistic. Circumstances led to a flexible use of the law, which already is flexible. But the quote itself also implies that forecasts are adopted regularly in other cases.

(c2) McCloskey (1997) criticises the econometric profession on the use of significance tests. Statistical significance is often confused with significance for policy making, and this can lead to misspecification. This repeats the need for a Social Welfare Function. It can be noted that CPB has made this confusion, and has not developed a SWF.
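
A sketch, with invented numbers, of the confusion at issue: an estimated policy effect can be statistically insignificant by the usual t-test and yet, once a loss function is specified, acting on it can minimise expected loss. The loss values and the posterior probability below are hypothetical.

```python
# Sketch of the significance-vs-loss-function point, with invented
# numbers: the decision-theoretic criterion compares expected losses,
# not the t-statistic.

effect, std_err = 0.5, 0.4         # estimate and its standard error (assumed)
t_stat = effect / std_err          # about 1.25: 'insignificant' at 5%

loss_act_if_no_effect = 1.0        # cost of acting when true effect is 0
loss_wait_if_effect = 10.0         # cost of waiting when the effect is real
p_effect = 0.6                     # belief that the effect is real (assumed)

expected_loss_act = (1 - p_effect) * loss_act_if_no_effect
expected_loss_wait = p_effect * loss_wait_if_effect
decision = "act" if expected_loss_act < expected_loss_wait else "wait"

print(t_stat, decision)
```

Here the t-statistic is below any conventional critical value, yet acting minimises expected loss: which illustrates why the loss function, i.e. ultimately the SWF, cannot be evaded.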

(c3) The performance of CPB is also discussed in Dutch politics. Ad Melkert (2001), the candidate successor to Wim Kok, the current Prime Minister, says in a recent interview (my translation):

(1) "Consider the decision making around the Disability Act in 1991. The number of people on disability was rising alarmingly. Something had to be done, but I think that the way this was done could have been different. It happened all of a sudden, and I am convinced that the forecasts were politicised to force ministers to show their cards. A climate of urgent necessity was created by the officials in the Bermuda triangle of the Treasury, Economic Affairs and the Central Planning Bureau." Note that the CPB director in 1991 was Zalm, the current minister of Finance.

(2) "What people often do not know is that things go as follows these days. The Treasury determines roughly what is possible. The Central Planning Bureau determines the framework. But these can be off by a full 100 percent in a single year. Yet this is the basis on which policy is made." Note that Don’s paper tends to confirm this wide confidence interval.

Melkert concludes that things should change, and he might be open to the suggestion of an Economic Supreme Court: (3) "I really wonder whether we can continue with a national government that has developed in a dated fashion. The handbook of the government that reinvents itself has bypassed The Hague. (...) That must change." (c4) The CPB directorate claims since 1989 - with the arrival of the new director Zalm, Don’s predecessor - that CPB is a scientific institute in service of the government. But CPB is not scientific, neither by its law of April 21, 1947 nor in practice. It has no scientific statute, and no protection of the scientific status of individual workers. Perhaps the CPB directorate has created some range of independence from political meddling, but that does not make it scientific. Members of the CPB directorate tend to be invited to become professors at some university, but at the CPB they hold a state position.

(c5) The CPB directorate fails in its co-ordination function at some crucial places. It is difficult to show this, since politicians are responsible for policy. However, the Dutch consensus style of policy making provides a decent environment for policy advice, while the Dutch labour market is far from optimal (with, say, 20% of the labour force on benefits). This is the co-ordination failure referred to above, namely: if the CPB directorate uses the wrong model to give advice, and this model and advice are adopted by politics, then the ‘forecast’ is accurate - but the economy drops to a suboptimal state. We need not only forecast performance but also co-ordination performance to judge the overall performance of the CPB. Don:156 asks "Why do we forecast?" and gives the decision theoretic answer. But that means minimising a loss function or maximising a profit, and in general it would be maximal Social Welfare. Don cannot discuss the SWF, since it has not been developed. But we can note that Dutch Social Welfare is not maximised, due to unemployment and benefits. (See Cool (2000) for that proof.)

(c6) A practical example of failure in human resource management, and of the errors caused by it, is that my analysis was blocked from discussion in 1989-1991. This analysis has now been developed in Cool (2000) on the political economy of unemployment and national decision making, and Cool (2001) on voting theory and the SWF. (a) A committee of scientists with professors Köbben and Segers has rebuked the CPB directorate for this blockage of discussion. Note that, though CPB is not a scientific institute, my official position was ‘scientific co-worker’, and hence there is a claim for scientific integrity in at least the work that I do. (b) Curiously, the CPB has a publication series ‘Under the responsibility of the author’, but the directorate still blocked my use of it (though of course I first proposed a discussion). (c) The directorate removed me from my official job in April 1990, but a judge annulled that decision as an abuse of power. (Though by that time I had already been fired. Note that I was put into a separate room and was not allowed the use of the mainframe - those were still mainframe days - so I could not write a paper that used the model and data. Yet the directorate also uses the criterion that papers should use these.) (d) Don:172 refers to CPB (1992) - but I was in the project team, I was censored and abused, that study hence is of no scientific value, and it is also a shame that I was not allowed to speak at the international conference to express my protest and to point out the errors in that study. (e) The judge allowed my dismissal, but only because Dutch labour laws are lax. The directorate defends itself by saying that a judge allowed this dismissal, instead of criticising the laxness of labour laws from the scientific point of view. (f) It is very important to see that this whole affair started when I presented a new analysis.
The directorate confuses the issue by referring to my personal functioning, saying that I cannot work in a hierarchy and cannot work in a team (which is rather inconsistent). I have proposed to clarify the issue by an independent investigation. The directorate refuses this. (g) Don himself was not present at or directly involved with the crucial decisions of mismanagement in 1989-1991. Don was a subdirector then, involved in other issues. The key subdirector who was involved has since deceased, and the responsible director Zalm has moved on (to become minister of Finance). Now that Don is the director, he refuses to discuss the matter, and the process has a momentum of its own, with state lawyers inventing all kinds of new things. The judge tends to believe the state lawyers, since they represent the ‘official position’. It was the state lawyers who delayed the matter; for example, I had to wait until 1998 for the crucial decision that officially I am put back into my position at CPB. So now there is the strange situation that officially I have not been removed while I have been in practice - and this is not investigated. Recently, the Dutch government has installed a committee for the ‘Integrity of State Governance’, but Don refuses to help put the issue before that committee with retroactive effect. His argument now is that the issue is too old and should rest - while it was the state lawyers who caused the delay, while the issue still is in court, while there are all kinds of open questions, while I really am victimised here, for example since I have not found a steady job in ‘macro-economics’ since, and while my analysis on unemployment still is censored. (h) Econometrician Den Broeder and recently also Professor Gill, of mathematical statistics at Utrecht and KNAW, support my appeal for an investigation.
(i) Hence, my protest against the censorship and abuse of power can be found at (...). I can note that Aronson (1992) gives the example of Kitty Genovese, who was stabbed to death in 1964 in New York, while 38 of her neighbours in the apartment block came to their windows to look when she screamed around 3 AM. Nobody came to help; nobody even picked up the telephone. I can only ask the reader: please do something.

(c7) My case is not the only example. I may be the only one who puts a protest on the internet, but there have been more people in whose cases the CPB directorate has infringed upon science.

(c8) Don:172 states: "We had a hard time explaining why it makes sense to prepare for environment and infrastructure policies on a high-growth scenario, while keeping budgetary means constrained by a low-growth scenario." I don’t believe this. Dutch policy makers generally have a higher education, and will quickly grasp this. They may not want to do so, but that is a political issue. Hence Don likely confuses knowledge with political will, and since he will not do so consciously, it is an example of a subconscious d ≠ 0.

(c9) On self-reference, Don:166 acknowledges "(...) it is true that current economic forecasts, in particular from CPB, are taken into account when the bargaining parties prepare their positions" and closes with "(...) it is not clear how to incorporate such influences into the forecasting exercise itself." Well, note the word "clear". It is a nice excuse not to say anything, since it truthfully is not clear how to do this. It would have been better if Don had elaborated on what is actually done in the unclear situation. Elsewhere, Don has already admitted to the use of expert opinion. Possibly a wage estimate from expert opinion, which includes the self-reference, can be fixed in the model, and the model can be run to determine the autonomous adjustment needed to correct for the independent model outcome. Obviously, some theses could be written on the subject of self-reference, but simply saying "it is not clear" is not communicative.
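
To indicate that the problem is at least tractable in principle, consider a sketch with an invented wage rule: if the bargaining parties partially react to the published forecast, a self-consistent forecast can be found as a fixed point by iteration.

```python
# A sketch, under invented assumptions, of handling self-reference by a
# fixed point: the wage outcome depends partly on the published forecast
# (bargainers react to it), so iterate until forecast and outcome agree.

def wage_outcome(forecast, fundamentals=2.0, reaction=0.5):
    # hypothetical rule: realised wage growth equals fundamentals plus a
    # partial reaction of the bargainers to the published forecast
    return fundamentals + reaction * (forecast - fundamentals)

f = 4.0                       # initial guess for the published forecast
for _ in range(50):
    f = wage_outcome(f)       # fixed-point iteration

print(round(f, 4))            # the self-consistent forecast
```

With this invented rule the iteration converges to the fundamentals, since the reaction is damped; nothing hinges on the particular numbers, only on the fixed-point idea.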

(c10) With data uncertainty, such as in how inflation or productivity have been measured, my inclination is to apply economics slightly differently, namely to concentrate on ‘no regret’ policies whatever the data. See Cool (2000). My impression is that CPB is in cognitive dissonance here, though it may be that I pay too much attention to Don’s article on this point.

(c11) Another issue, which recently came to my attention, concerns economics and the environment. A key innovation in economics is the work by Hueting on Sustainable National Income, which has existed for at least a decade now, while an important recent publication is Van Ierland cs. (2001). Journalist Robles (1997) reported:

"The speed with which CBS Statistics Netherlands disposes of Hueting’s research can be further explained by bickering with its regular customer CPB, also connected to the Ministry of Economic Affairs. Hueting requires models from CPB’s stable. But the planning bureau has its focus on the future, while Hueting wants to index past years and thus has to rebuild the models for backcasting. CPB does not easily hand over that lucrative job. Especially not since it is under pressure to improve the quality of its own models, given the competition on the market for economic models. The statisticians at CBS acquiesce: modelling is not our strength." (my translation) Whatever this journalist makes of the official CPB reason not to take on the modelling exercise of Sustainable National Income - which reason is not known to me - it is known that CPB has been confusing the issue for quite a while, and that it indeed has refused to do the job. Which is strange, (a) since it is an issue concerning national income (‘macro-economics’), (b) since sustainability is an official national objective proclaimed by the government, (c) since, once Hueting’s SNI has been calculated for the past, we can expect a national demand for a forecast of it, and (d) since Don:172 discusses the CPB’s role in clarifying the ‘environmental challenges’, which directly concerns SNI.

A note on McCloskey (1997)

McCloskey (1997) presents a curious argument. She considers the work by Tinbergen, Samuelson and Klein, praises this work, but subsequently labels it as three ‘vices’. I can only presume that this is a rhetorical trick run out of control. It is a pity that she tried this gimmick, since it makes for heavy reading. She criticises economists for making all kinds of errors by being seduced by those ‘vices’: economics uses irrelevant mathematics, empirical work abuses the significance test, prediction cannot lead to profit, and work on social improvement, in the Tinbergen tradition, does not respect freedom (the freedom not to be predicted, and to make one’s own errors). Sometimes there are good points, but her argument is unbalanced, and she also flirts with inconsistency. While she correctly criticises the confusion about statistical significance, and correctly gives the solution that the loss function has to be used, she neglects that this means more mathematics! What she means to say is that the use of mathematics should be wiser - more like T, S & K, not less. Indeed, she emphasises that she enjoys and values mathematics. Her position is that the Scottish Enlightenment, with Smith and Hume, should be our inspiration. T, S & K work exactly in that tradition, so they are an inspiration, and no ‘vice’. So the real problem is some later development. It would have helped a lot if she had employed the terms ‘irrelevant math’ and ‘relevant math’ - and clarified how that distinction is to be made (also given that there is some advantage in specialisation). Apparently the incentives are wrong, but which incentives? It is a valid question, as Van Sinderen (2001) also considers it a serious problem. So, to be sure, we can value McCloskey’s discussion in general.

Somehow national planning and forecasting institutes like the CPB seem to escape her criticism. They are empirical, and as long as they don’t overly abuse the significance test, and as long as policy is determined by the policy maker, an institute like CPB is safe for McCloskey. Oh, prediction is difficult, and no source of profit: "If you are so smart, why aren’t you rich?" But prediction as an economic activity is acceptable to her, as she points to the market success of DRI, which sells predictions but does not use them for its own speculations on the stock market. In my view, this misses some main points. A national Agency has a key role in the performance of economic science. The Principal-Agent problem may cause the economy to function dismally, and a suboptimal situation also means that there can be rich rewards. There are incentives: e.g. if the Agency does not listen to the empirical work of universities, then these may lose interest. And prediction is important to test theories. Finally, prediction and social improvement as discussed by Tinbergen need not infringe upon freedom. The creation of an Economic Supreme Court would be advisable precisely for social improvement and freedom.


Public Choice studies the behaviour of governmental institutions and officials while assuming the hypothesis of economic ‘selfish’ rationality (though altruism can be included in selfishness too). Metaprognostica is the application of this approach to forecasting behaviour - though it applies to market firms as well. The above discussion benefits (1) first from a general formulation, dating from 1983 at the end of my first year at the CPB, and (2) secondly from a reasoned analysis of a constitutional Economic Supreme Court, dating from 1996 and improved in 2000. To a large extent metaprognostics is an exercise in humour, i.e. for the academic researchers who live by the scientific code. On the other hand, it is a real issue, as economic theory itself tells us, not only for market forecasts but also for government forecasts. Profit and status are important seductions. It has been shown that the CPB directorate lies and deceives. Don (2001) is a good paper on ‘technical’ issues, but it is unbalanced on prediction proper, while it is consciously silent on mistakes that have caused errors in forecasts and policy advice. He may have intended to write a paper on ‘technical’ issues only, and he has mentioned a score of issues where the personal quality of the forecasters is important. Yet the whole remains an ‘oratio pro domo’. The paper fails since there is no convincing distinction between ‘technical’ issues and the people involved. An independent scientific enquiry into the censorship and abuse of science by the CPB directorate is advisable. Also, a democratic society is advised to have a constitutional Economic Supreme Court with a scientific statute.


Aronson, E. (1992), "The social animal", The Free Press

Beld, C.A. van den, and A. Russchen (1965), "Voorspelling en realisatie. De voorspellingen van het Centraal Planbureau in de jaren 1953-1963", CPB The Hague

Cohon, J.L. (1978), "Multiobjective programming and planning", Academic Press

Cool, Th. (1983), "Outline of metaprognostics", unpublished draft

Cool, Th. (2000), "Definition & Reality in the General Theory of Political Economy", First Edition, March & June 2000, ISBN 90-802263-2-7, listed as JEL 2000-1325, vol. 38, no. 4, December 2000, available from

Cool, Th. (1999, 2001), "Proper definitions for Uncertainty and Risk", included in the EconWPA archive of Economic Working Papers with reference ewp-get/9902002, and a recent version at

Cool, Th. (2001), "Voting theory for democracy", see

CPB (1992), "Scanning the future. A long-term scenario study of the world economy 1990-2015", Sdu The Hague

Don, F.J.H. (2001), "Forecasting in macroeconomics: a practitioner’s view", De Economist Vol 149, No 2, June, p155-175. An earlier version has been available at and likely still is

Ferguson, Th. (1967), "Mathematical statistics", Academic Press

Geest, L. van der (1982), "Ontcijferen", Economisch Statistische Berichten, p785

Ierland, E. van, cs. (eds) (2001), "Economic growth and valuation of the environment: a debate", conference book of the Hueting congres, Edward Elgar

Klein, L. (1971), "An essay on the theory of economic prediction", Markham, Chicago

McCloskey, D. (1997), "The vices of economists - The virtues of the bourgeoisie", Dutch translation, "De zondeval der economen", Amsterdam University Press

Melkert, A. (2001), "Interview", conducted by J. Hoedeman & W. van de Hulst, Volkskrant Magazine, May 5, p12-17

Neyman, J., and E.S. Pearson (1933), "On the problem of the most efficient tests of statistical hypotheses", Philosophical Transactions of the Royal Society, Ser. A, 231:289-337

Robles, M. (1997), "Oplaaiende ruzies over Groen Nationaal Inkomen" ("Flaring-up quarrels over Green National Income"), Intermediair, March 13, vol. 33, no. 11, p56-47

Rutten, F. (1993), "Zeven kabinetten wijzer", Wolters-Noordhoff

Sinderen, J. van, (2001), "Afscheid van de beleidseconomie", ESB September 28, p736-739

Theil, H. (1958), "Economic forecasts and policy", North Holland

Weaver, P.H. (1994), "News and the culture of lying. How journalism really works", The Free Press