Just as much of modern
science has become self-serving in striving for status and funding,
the theory of how science should be done is similarly afflicted. An
assessment of a theory based on ‘degrees of belief’ might be useful
if scientists didn't routinely ignore, minimize or dismiss
falsifying evidence and twiddle the countless knobs on their models
to fit new data. The most glaring modern example of such behavior is
the rejection of stark evidence of intrinsic redshift of quasars.
Big bang cosmology is already lifeless by this assessment but
‘belief’ keeps the corpse warm. While we allow the few scientists
who judge the data according to their beliefs to control
publication, funding and press releases, real science is stifled.
On May 7 New Scientist published
"Do we need to change the definition of science?"
by Robert Matthews.
"Identified as the defining
characteristic of real science by the philosopher Karl Popper more
than 70 years ago, falsifiability has long been regarded by many
scientists as a trusty weapon for seeing off the menace of…"
>> Karl Popper. The
Viennese thinker has been lauded as the greatest philosopher of
science by the likes of Nobel prizewinning physicist Steven
Weinberg, while Popper's celebrated book The Logic of Scientific
Discovery was described by cosmologist Frank Tipler as ‘the
most important book of its century’.
"Popper's definition of science is being sorely tested, though, by
the emergence of supposedly scientific ideas which seem to fail
it. From attempts to understand the fundamental nature of
spacetime to theories purporting to describe events before the big
bang, the frontiers of science are sprouting a host of ideas that
are seemingly impossible to falsify."
It is not clear how people could
conclude that Popper "identified [falsification] as the defining
characteristic of real science" if they actually read
The Logic of Scientific Discovery. The
book is about the logic associated with the discovery of new ideas;
the title is not The Objective Characteristics of a Reified
Abstraction. He clearly presents looking for false entailments
as a convention. ("Convention" is Popper's own word: on p. 37 he
writes that falsifiability "will accordingly have to be regarded
as a proposal for an agreement or convention." [Emphasis in
original].) That is, an agreement not to "adjust" a theory but to
consider any variation as an entirely new theory that must compete
with all available alternatives, and to admit that the old version
failed.
The book is not so much about
science as about an attitude—an eagerness to discover and to test
new ideas rather than to defend an established dogma against life’s
inevitable changes. On the next page, Popper writes:
"Thus I freely admit that in arriving at my proposals
I have been guided, in the last analysis, by value judgments and
predilections. But I hope that my proposals may be acceptable to
those who value not only logical rigour but also freedom from
dogmatism; who seek practical applicability, but are even more
attracted by the adventure of science, and by discoveries which
again and again confront us with new and unexpected questions,
challenging us to try out new and hitherto undreamed-of answers."
The New Scientist article continues:
"Much of [Popper’s] appeal rests on
the clear-cut logic that seems to underpin the concept of
falsifiability. Popper illustrated this through the now-celebrated
parable of the black swan.
Suppose a theory proposes that
all swans are white. The obvious way to prove the theory is to
check that every swan really is white - but there's a problem. No
matter how many white swans you find, you can never be sure there
isn't a black swan lurking somewhere. So you can never prove the
theory is true. In contrast, finding one solitary black swan
guarantees that the theory is false. This is the unique power of
falsification: the ability to disprove a universal statement with
just a single example - an ability, Popper pointed out, that flows
directly from the theorems of deductive logic."
Comment: Popper's emphasis is on testing, and he repeats that it’s something
scientists decide to do. It doesn’t exist independently in the
(passive-voiced) objective world; someone does it (or, more commonly
these days, doesn’t do it). Popper’s idea isn’t "sorely tested" by
modern theories; modern scientists simply decided not to discover
new ideas: There are plenty of black swans swimming in the pond of
science; scientists just decided to define them as a different
species rather than to look for a new theory that accounts for black
swans.
Philosopher Colin Howson of the London
School of Economics in the UK "believes it is time to ditch
Popper's notion of capturing the scientific process using
deductive logic. Instead, the focus should be on reflecting what
scientists actually do: gathering the weight of evidence for rival
theories and assessing their relative plausibility." Howson "is
a leading advocate for an alternative view of science based not
on simplistic true/false logic, but on the far more subtle concept
of degrees of belief. At its heart is a fundamental connection
between the subjective concept of belief and the cold, hard
mathematics of probability…"
Comment: Here is the point of
departure from real science, where the perceived probability of a
belief being true determines the course of science.
This should sound familiar; after all,
it is what scientists do for a living. And it is a view of
scientific reasoning with a solid theoretical basis. At its core
is a mathematical theorem, which states that any rational belief
system obeys the laws of probability - in particular, the laws
devised by Thomas Bayes, the 18th-century English mathematician
who pioneered the idea of turning probability theory on its head.
Unlike Popper's concept of science, the Bayesian view doesn't
collapse the instant it comes into contact with real life. It
relies on the notion of accumulating positive evidence for a
theory.
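The "laws devised by Thomas Bayes" that the article invokes amount to a simple update rule: a prior degree of belief is revised by how probable the observed data are under each alternative. A minimal sketch, with all numbers invented purely for illustration:

```python
# Bayes' rule: P(H|D) = P(D|H) * P(H) / P(D).
# All numbers below are invented for illustration only.
prior_h = 0.5            # P(H): initial degree of belief in hypothesis H
p_d_given_h = 0.6        # P(D|H): probability of the data if H is true
p_d_given_not_h = 0.2    # P(D|not H): probability of the data otherwise

# Total probability of the data (law of total probability).
p_d = p_d_given_h * prior_h + p_d_given_not_h * (1 - prior_h)

# Posterior degree of belief once the data arrive.
posterior_h = p_d_given_h * prior_h / p_d
print(posterior_h)  # about 0.75: the data raised the belief from 0.5
```

This is the "accumulation of positive evidence" in its entirety: nothing in the arithmetic questions the prior itself, which is precisely the objection raised below.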
Comment: It is
this kind of thinking that has allowed the big bang theory to
persist when it should have collapsed the instant it came into
contact with real life—the observations that highly redshifted
objects (quasars) are connected to low redshift galaxies. In simple
terms, redshift is not a measure of an expanding universe. We cannot
‘rewind’ time to a metaphysical ‘creation’ event—the big bang. What
has happened is not science. It has been a process of selectively
fitting the evidence to a belief in the big bang.
Such a belief is not rational and shouldn’t even qualify for
the Bayesian test.*
Astrophysicist Robert Trotta of
Oxford University rationalizes the Bayesian method:
"At first glance, it might appear
surprising that a trivial mathematical result obtained by an
obscure minister over 200 years ago ought still to excite
so much interest across so many disciplines, from econometrics to
biostatistics, from financial risk analysis to cosmology.
Published posthumously thanks to Richard Price in 1763, "An essay
towards solving a problem in the doctrine of chances" by the rev.
Thomas Bayes (1701(?)–1761) had nothing in it that could herald
the growing importance and enormous domain of application that the
subject of Bayesian probability theory would acquire more than two
centuries afterwards. However, upon reflection there is a very
good reason why Bayesian methods are undoubtedly on the rise in
this particular historical epoch: the exponential increase in
computational power of the last few decades made massive numerical
inference feasible for the first time, thus opening the door to
the exploitation of the power and flexibility of a rich set of
Bayesian tools. Thanks to fast and cheap computing machines,
previously unsolvable inference problems became tractable, and
algorithms for numerical simulation flourished almost overnight...
Cosmology is perhaps among the latest disciplines to have
embraced Bayesian methods, a development mainly driven by the data
explosion of the last decade. However, motivated by difficult and
computationally intensive inference problems, cosmologists are
increasingly coming up with new solutions that add to the richness
of a growing Bayesian literature."
Comment: Trotta's argument boils down to extolling the virtues of being able to play
computer games with the data more effectively in recent times. The
aim is to produce computer models that mimic as closely as possible
‘real life.’ However, cosmological models fail unless they introduce
imaginary black holes, dark matter and dark energy as ‘fudge
factors’ to match appearances. Once again, this is not science; it
is computer game playing. Judging by science news reports,
cosmologists are increasingly coming up with new science fiction
that will certainly add to the richness of the laughter at their
‘literature’ in future. Turning to Bayesian methodologies is
symptomatic of a disconnect from reality in the sciences.
The New Scientist article continues:
"Scientists begin with a range of
rival explanations about some phenomenon, the observations come
in, and then the mathematics of Bayesian inference is used to
calculate the weight of evidence gained or lost by each rival
theory. Put simply, it does this by comparing the probability of
getting the observed results on the basis of each of the rival
theories. The theory giving the highest probability is then deemed
to have gained most weight of evidence from the data."
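The weighing procedure Matthews describes is, in Bayesian terms, a comparison of likelihoods (a Bayes factor). A hedged sketch of that comparison, with hypothetical theory names and invented numbers:

```python
# Weigh rival theories by the probability each assigns to the observed
# results; the one with the highest likelihood gains the most "weight
# of evidence". Theory names and numbers here are hypothetical.
likelihoods = {
    "theory_A": 0.30,  # P(observed results | theory A)
    "theory_B": 0.05,  # P(observed results | theory B)
}

# The ratio of the two likelihoods is the Bayes factor.
bayes_factor = likelihoods["theory_A"] / likelihoods["theory_B"]

# The theory assigning the data the highest probability gains weight.
best = max(likelihoods, key=likelihoods.get)
print(best, bayes_factor)  # theory_A, factor of about 6
```

Note what the calculation takes for granted: the likelihoods themselves encode each theory's assumptions about what counts as data and how it should be interpreted, which is the point the following comment presses.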
Comment: The idea of calculating "the probability of getting observed results on
the basis of each of the rival theories" may be of some use in
comparing small variations on initial beliefs, but it misconceives
the situation when different initial beliefs are involved.
"Observed results" are interactive with the theories that direct
observers about what to observe, how to observe it, what value to
put on it, and which way to interpret it. As a good illustration
Matthews quotes cosmologist Lawrence Krauss at Case Western Reserve
University in Cleveland, Ohio:
"You just can't tell if a
theory really is unfalsifiable." "[Krauss] cites the case of
an esoteric consequence of general relativity known as the
Einstein ring effect. In a paper published in 1936, Einstein
showed that the light from a distant star can be distorted by the
gravitational field of an intervening star, producing a bright
ring of light around it. It was a spectacular prediction but also,
Einstein said, one that astronomers stood 'no hope of observing',
as the ring would be too small to observe.
For all his
genius, Einstein had reckoned without the ingenuity of
astronomers, which in 1998 led to the discovery of the first
example of a perfect Einstein ring - created not by a star, but by
a vast galaxy billions of light years away."
Comment: Apparently the author had no idea that other "results" were possible:
multiple active galactic
nuclei ejections, plasma torus,
etc. The interactivity between theories and observations is present
in something as simple as observing an electron: are you looking at
a particle with momentum or at a charge comprising an electrical
current? Or at something no one has yet imagined?
>> The Einstein Cross. In the
mid-1980s, astronomers discovered these four quasars, with
redshifts about z = 1.7, buried deep in the heart of a galaxy with a
low redshift of z = 0.04. (The central spot in this image is not the
whole galaxy but only the brightest part of the galaxy's nucleus.)
When first discovered, the high redshift quasar in the nucleus of a
low redshift galaxy caused a panic. To save the redshift/distance
conviction, gravitational lensing had to be invoked despite Fred
Hoyle's calculation that the probability of such a lensing event was
less than two chances in a million! And there is little sign of the
expected reddening of the quasars’ light if it had passed so deeply
through the dusty spiral. A change in brightness of the quasars was
observed over a period of three years. Arp's explanation is that the
galaxy has ejected four quasars, which are growing brighter with age
as they move farther from the nucleus. The lensing explanation is
that the bending of the light varies when individual stars pass in
front of the quasar. If the lensing explanation were
correct, the quasars should brighten briefly and then fade as the
star moves out of alignment.*
Comment: There are no fixed prices by which you can compare the apples and oranges of different
initial beliefs. Probabilities incorporate the very initial beliefs
that scientists should be discovering and questioning. The theory
that is based on familiar assumptions will always calculate out as
more probable than the ones with unfamiliar assumptions. Bayesian
probabilities are little more than digitized familiarities. "Secure
knowledge" is the enemy of scientific discovery.
Comment: The article's conclusion gets nowhere. "In the end," Matthews still misses Popper’s point and stays
stuck in the conformist peer (reviewed) pressure that has all but
stopped progress: "empirical observations…decide if a theory gets
taken seriously." As if people had nothing to do with it. No,
scientists decide—to take seriously, to take for granted, or to
discover new combinations of data, ideas, and initial beliefs.
It seems that modern scientists will not learn from history.
They seem even more opposed to unfamiliar theoretical options than
their predecessors were, a bias that will be apparent only to
scientists of the future. The
Bayesian probabilistic evaluation of theories by those who choose
which theories to test and decide the importance of the data merely
serves to perpetuate this dysfunctional aspect of science. When the
suspect is also judge and jury, the verdict is not real science.
[*emphasis added - DS]
With appreciation to Mel Acheson for his contribution to this article.