If one asks HEP physicists ``what is probability?'', one will
realize immediately that they ``think they are'' frequentists.
One gets the same impression from the books and
lecture notes they use
[13].
Particularly significant, to get an overview of ideas and methods
commonly used, are the PDG
[3]
and other booklets
[14,
15]
which have a kind of explicit (e.g.
[3,
14])
or implicit (e.g.
[15])
*imprimatur* of HEP organizations.

If, instead, one asks physicists what they think about probability
as *``degree of belief''* the reaction is negative and
can even be violent: ``science must be objective: there is no room
for belief'', or ``I don't believe something. I assess it. This is
not a matter for religion!''.

**3.2. HEP physicists ``are Bayesian''**

On the other hand, if one asks physicists to express their opinion about practical situations involving uncertainty, instead of just standard examination questions, one gets a completely different impression. One realizes vividly that Science is indeed based on beliefs, very solid and well grounded beliefs, but they remain beliefs ``...in instrument types, in programs of experiment enquiry, in the trained, individual judgements about every local behavior of pieces of apparatus'' [16].

Physicists find it absolutely natural to talk
about the probability of hypotheses, a concept for which there is no
room in the frequentist approach. Moreover, the intuitive
way in which they interpret a result is, in fact, as a
probabilistic assessment of the true value.
Try asking what the probability is that the top quark mass lies
between 170 and 180 GeV. No one
^{(12)}
will reply that the question makes no sense, since ``the top quark
mass is a *constant of unknown value*'' (as an orthodox frequentist
would complain).
They will simply answer that the probability is such and such percent,
using the published value and ``error''.
They are usually surprised if somebody tries to explain to
them that they ``are not allowed'' to speak of probability of a true value.
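The intuitive translation just described is easy to make explicit: the published value and ``error'' are treated as the mean and standard deviation of a Gaussian for the true value, and the probability of an interval is a plain integral. A minimal sketch (the numerical inputs below are purely illustrative, not an actual top-mass measurement):

```python
from math import erf, sqrt

def prob_in_interval(mean, sigma, lo, hi):
    """P(lo < mu < hi) for a Gaussian 'posterior' N(mean, sigma^2)
    built intuitively from a published value +/- error."""
    cdf = lambda x: 0.5 * (1.0 + erf((x - mean) / (sigma * sqrt(2.0))))
    return cdf(hi) - cdf(lo)

# Illustrative numbers only: suppose the published mass were 174 +/- 5 GeV.
p = prob_in_interval(174.0, 5.0, 170.0, 180.0)
print(f"P(170 < m_top < 180 GeV) = {p:.2f}")  # about 0.67
```

This is exactly the probabilistic statement about the true value that, strictly speaking, a frequentist confidence interval does not license.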

Another word which physicists find scandalous is ``prior'' (``I don't want to be influenced by prejudices'' is the usual reply). But in reality priors play a very important role in laboratory routine, as well as at the moment of deciding that a paper is ready for publication. They allow experienced physicists to realize that something is going wrong, that a student has most probably made a serious mistake, that the result has not yet been corrected for all systematic effects, and so on. Unavoidably, priors generate some subtle cross correlations among results, and there are well known cases of the values of physical quantities slowly drifting from an initial point, with each subsequent result lying within the ``error bar'' of the previous experiment. But I think that no one and nothing is to blame for the fact that these things happen (unless done on purpose): strong evidence is needed before the scientific community radically changes its mind, and such evidence is often achieved only after a long series of experiments. Moreover, very subtle systematic effects may affect the data, and it is not a simple task for an experimentalist to decide when all corrections have been applied, if he has no idea what the result should be.

**3.3. Intuitive application of Bayes' theorem**

There is an example which I like to give, in order to
demonstrate that the intuitive reasoning
which unconsciously transforms confidence intervals into
probability intervals for the true value is, in fact, very close
to Bayes' theorem. Let us imagine we see a hunting dog in a forest
and have to guess where the hunter is, knowing that there is a
50% probability that the dog is within 100 m around him.
The terms of the analogy
with respect to observable and true value are obvious.
Everybody will answer immediately that, with 50% probability,
the hunter is within 100 m from the dog. But everybody will also agree
that the solution relies on some implicit assumptions:
uniform *prior*
distribution (of the hunter in the forest) and
symmetric *likelihood* (the dog has no preferred direction,
as far as we know, when it runs away from the hunter).
Any variation in the assumptions leads to a different
solution. And this is also easily recognized by physicists,
especially HEP physicists, who are aware of situations in which
the prior is not flat (like the cases
of a bremsstrahlung photon or of a cosmic ray spectrum)
or the likelihood is not symmetric (not all detectors have a nice
Gaussian response). In these situations intuition may still help
one make a qualitative guess about
the direction of the effect on the value of the measurand,
but a formal application of the Bayesian ideas
becomes crucial in order to state a result which is consistent with what
can be honestly learned from data.
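The dog-and-hunter reasoning can also be checked numerically. The sketch below is my own construction, not from the text: the one-dimensional ``forest'', the Gaussian likelihood (with sigma tuned so that the dog is within 100 m of the hunter with 50% probability) and the exponential falling prior are all illustrative assumptions. It applies Bayes' theorem on a grid, first with a flat prior, then with a falling one, showing how the intuitive 50% statement about the hunter holds in the first case and shifts in the second.

```python
import numpy as np

# 1-D "forest": grid of possible hunter positions (metres).
x = np.linspace(0.0, 1000.0, 10001)
dx = x[1] - x[0]
dog = 400.0  # observed dog position

# Symmetric likelihood: Gaussian distance of the dog from the hunter,
# with sigma chosen so that P(|dog - hunter| < 100 m) = 50%.
sigma = 100.0 / 0.6745
like = np.exp(-0.5 * ((dog - x) / sigma) ** 2)

def prob_within(prior):
    """Posterior probability that the hunter is within 100 m of the dog."""
    post = like * prior
    post /= post.sum() * dx          # normalize on the grid
    mask = np.abs(x - dog) < 100.0
    return post[mask].sum() * dx

p_flat = prob_within(np.ones_like(x))      # flat prior: close to 0.50
p_fall = prob_within(np.exp(-x / 300.0))   # falling prior: smaller
print(f"flat prior: {p_flat:.3f}, falling prior: {p_fall:.3f}")
```

With the flat prior the posterior is simply the (symmetric) likelihood re-read as a function of the hunter's position, so the 50% interval carries over; the falling prior drags the posterior towards small x and the naive interval no longer contains 50% probability.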

The fact that Bayesian inference is not currently used in HEP
does not imply
that non-trivial inverse problems remain unsolved, or that
results are usually wrong. The solution often relies on extensive
use of Monte Carlo (MC)
simulation ^{(13)}
and on intuition. The *inverse* problem is then
treated as a *direct* one. The quantities of interest are
considered as MC parameters, and
are varied until the best statistical agreement between simulation
output and experimental data is achieved. In principle, this is
a simple numerical implementation of Maximum Likelihood, but in reality
the prior distribution is also taken into account in the simulation
when it is known to be non uniform (like in the aforementioned
example of a cosmic ray experiment). So, in reality, what is often
maximized is not the likelihood, but the Bayesian *posterior*
(likelihood × prior), and, as said before, the result is intuitively
considered to be a probabilistic statement for the true value.
So, in this case too, the results are close to those obtainable
by Bayesian inference, especially if the
posterior is almost Gaussian (parabolic *negative log-likelihood*).
Problems may occur, instead, when the ``not used'' prior
is most likely not uniform, or when the posterior is
very non-Gaussian. In the latter case the difference between
mode and average of the distribution, and the evaluation of the uncertainty
from the ``Δ(log-likelihood) = 1/2'' rule can make quite a difference to the result.
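A toy example makes the last point concrete. Take a Poisson counting experiment with n = 3 observed events and a flat prior (a choice of mine, not taken from the text): the posterior, proportional to mu^3 exp(-mu), is visibly non-Gaussian, so mode and mean differ, and the ``Δ(log-likelihood) = 1/2'' interval is asymmetric about the mode.

```python
import numpy as np

# Toy Poisson counting experiment: n = 3 events observed.
n = 3
mu = np.linspace(0.01, 20.0, 20000)
dmu = mu[1] - mu[0]

# Flat prior => posterior proportional to the likelihood mu^n exp(-mu).
log_like = n * np.log(mu) - mu
post = np.exp(log_like - log_like.max())
post /= post.sum() * dmu

mode = mu[np.argmax(post)]
mean = (mu * post).sum() * dmu
print(f"mode = {mode:.2f}, mean = {mean:.2f}")  # 3.00 vs 4.00

# 'Delta(log-likelihood) = 1/2' interval: where ln L drops 1/2 below its maximum.
inside = log_like >= log_like.max() - 0.5
print(f"interval: [{mu[inside][0]:.2f}, {mu[inside][-1]:.2f}]")
```

For a nearly Gaussian posterior the two summaries would coincide; here they differ by a full unit, which is the kind of discrepancy alluded to above.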

^{12} Certainly one may find people
aware of the ``sophistication'' of the frequentist approach,
but these kinds of probabilistic statements are
commonly heard at conferences, and no
frequentist guru stands up to complain that the speaker is talking
nonsense.

^{13} If there is something in which HEP
physicists really believe, it is
Monte Carlo simulation! It plays a crucial role in all analyses,
but sometimes its use as a multipurpose brute force problem solver
is really unjustified and it can, from a cultural point of view,
be counterproductive. For example,
I have seen it applied to solve elementary
problems which could be solved analytically, like ``proving'' that
the variance of the sum of two random numbers is the sum of the variances.
I once found a sentence at the end of the solution
of a standard probability problem which I consider to
be symptomatic of this brute force behaviour:
``if you don't trust logic, then you can make a little Monte Carlo...''.