Introduction

Twentieth Century physics and cosmology have revealed an astonishing path towards our existence, which appears to be predicated on a delicate interplay between the three fundamental forces that govern the behavior of matter at very small distances and the long-range force of gravity. The former control chemistry and hence life as we know it, whereas the latter is responsible for the overall evolution and structure of the Universe.

  • If the state of the hot dense matter immediately after the Big Bang had been ever so slightly different, then the Universe would either have rapidly recollapsed, or would have expanded far too quickly into a chilling, eternal void. Either way, there would have been no ‘structure’ in the Universe in the form of stars and galaxies.

  • Even given the above fine-tuning, if any one of the three short-range forces had been just a tiny bit different in strength, or if the masses of some elementary particles had been slightly different from what they are, there would have been no recognizable chemistry in either the inorganic or the organic domain. Thus there would have been no Earth, no carbon, et cetera, let alone human brains to study them.

Broadly, five different responses to the impression of fine-tuning have been given:

  1. Design: updating the scholastic Fifth Way of Aquinas (1485/1286), the Universe has been fine-tuned with the emergence of (human) life among its designated purposes.Footnote1

  2. Multiverse: the idea that our Universe is just one among innumerably many, each of which is controlled by different parameters in the (otherwise fixed) laws of nature. This seemingly outrageous idea is actually endorsed by some of the most eminent scientists in the world, such as Martin Rees (1999) and Steven Weinberg (2007). The underlying idea was nicely explained by Rees in a talk in 2003, using the analogy of an ‘off the shelf’ clothes shop: “if the shop has a large stock, we’re not surprised to find one suit that fits. Likewise, if our universe is selected from a multiverse, its seemingly designed or fine-tuned features wouldn’t be surprising.” (Mellor 2002).

  3. Blind Chance: constants of Nature and initial conditions have arbitrary values, and it is just a matter of coincidence that their actual values turn out to enable life.Footnote2

  4. Blind Necessity: the Universe could not have been made in a different way or order, yet producing life is not among its goals since it fails to have any (Spinoza 1677).Footnote3

  5. Misguided: the fine-tuning problem should be resolved by some appropriate therapy.

We will argue that whatever reasons one may have for supporting the first or the second option, fine-tuning should not be among them. Contemporary physics makes it hard to choose between the third and the fourth option (both of which seem to have supporters among physicists and philosophers),Footnote4 but in any case our own sympathy lies with the fifth.

First, however, we have to delineate the issue. The Fine-Tuning Argument, abbreviated as FTA in what follows, claims that the present Universe (including the laws that govern it and the initial conditions from which it has evolved) permits life only because these laws and conditions take a very special form, small changes in which would make life impossible. This claim is actually quite ambiguous, in (at least) two directions.

  1. The FTA being counterfactual (or, in Humanities jargon, a ‘what if’ or ‘alternate’ history), it should be made clear what exactly is variable. Here the range lies between raw Existence itself at one end (Rundle 2004; Holt 2012; Leslie and Kuhn 2013) and fixed laws of nature and a Big Bang with merely a few variable parameters at the other (cf. Rees 1999; Hogan 2000; Aguirre 2001; Tegmark et al. 2006).

    Unless one is satisfied with pure philosophical speculation, specific technical results are only available at the latter end, to which we shall therefore restrict the argument.

  2. It should be made clear what kind of ‘life’ the Universe is (allegedly) fine-tuned for, and also to what extent the emergence of whatever kind of life is deemed merely possible (if only in principle, perhaps with very low probability), or likely, or absolutely certain. For example, should we fine-tune just for the possible existence of self-replicating structures like RNA and DNA,Footnote5 or for “a planet where enough wheat or rice could be cultivated to feed several billion people” (Ward and Brownlee 2000, p. 20), or for one where morally (or indeed immorally) acting rational agents emerge (Swinburne 2004), perhaps even minds the like of Newton and Beethoven?

    It seems uncontroversial that at the lowest end, the Universe should exhibit some kind of order and structure in order to at least enable life, whereas towards the upper end it has (perhaps unsurprisingly) been claimed that essentially a copy of our Sun and our Earth (with even the nearby presence of a big planet like Jupiter to keep out asteroids) is required, including oceans, plate tectonics and other seismic activity, and a magnetic field helping to stabilize the atmosphere (Ward and Brownlee 2000).Footnote6

    For most of the discussion we go for circumstances favoring simple carbon-based life; the transition to complex forms of life will only play a role in discussing the fine-tuning of our solar system (which is crucial to some and just a detail to others).Footnote7

According to modern cosmology based on the (hot) Big Bang scenario,Footnote8 this means that the Universe must be sufficiently old and structured so that at least galaxies and several generations of stars have formed; this already takes billions of years.Footnote9 The subsequent move to viable planets and life then takes roughly a similar amount of time, so that within say half an order of magnitude the current age of the Universe seems necessary to support life. In view of the expansion of the Universe, a similar comment could be made about its size, exaggerated as it might seem for the purpose of explaining life on earth.

Evidence for Fine-Tuning

Thanks to impressive progress in both cosmology and (sub)nuclear physics, over the second half of the 20th Century it began to be realized that the above scenario is predicated on seemingly exquisite fine-tuning of some of the constants of Nature and initial conditions of the Universe. Here we give just some of the best-known and best-understood cases.Footnote10

One of the first examples was the ‘Beryllium bottleneck’ studied by Hoyle in 1951, which is concerned with the mechanism through which stars produce carbon and oxygen.Footnote11 This was not only a major correct scientific prediction based on ‘anthropic reasoning’, in the sense that some previously unknown physical effect (viz. the energy level in question) had to exist in order to explain a crucial condition for life; it also involves dramatic fine-tuning, in that the nucleon-nucleon force must lie within about one part in a thousand of its actual strength in order to obtain the observed abundances of carbon and oxygen, which happen to be the right amounts needed for life (Ekström et al. 2010).

Another well-understood example from nuclear physics is the mass difference between protons and neutrons, or, more precisely, between the down quark and the up quark (Hogan 2000).Footnote12 This mass difference is positive (making the neutron heavier than the proton); if it weren’t, the proton would fall apart and there would be no chemistry as we know it. On the other hand, the difference can’t be too large, for otherwise stars (or hydrogen bombs, for that matter) could not be fueled by nuclear fusion and stars like our Sun would not exist.Footnote13 Both require a fine-tuning of the mass difference by about 10 %.

Moving from fundamental forces to initial conditions, the solar system seems fine-tuned for life in various ways, most notably in the distance between the Sun and the Earth: if this had been greater (or smaller) by even a few percent, it would have been too cold (or too hot) for at least complex life to develop. Furthermore, to the same end the solar system must remain stable for billions of years, and after the first billion years or so the Earth should not be hit by comets or asteroids too often. Both conditions are sensitive to the precise number and configuration of the planets (Ward and Brownlee 2000).

Turning from the solar system to the initial conditions of our Universe, but still staying safely within the realm of well-understood physics and cosmology, Rees (1999) and others have drawn attention to the fine-tuning of a cosmological number called Q, which gives the size of inhomogeneities, or ‘ripples’, in the early Universe and is of the order Q ~ 0.00001, or one part in a hundred thousand.Footnote14 This parameter is fine-tuned by a factor of about ten on both sides (Rees 1999; Tegmark et al. 2006): if it had been less than a tenth of its current value, then no galaxies would have formed (and hence no stars and planets). If, on the other hand, it had been more than ten times its actual value, then matter would have been too lumpy, so that there would not have been any stars (and planets) either, but only black holes. Either way, a key condition for life would be violated.Footnote15

The expansion of the Universe is controlled by a number called Ω, defined as the ratio between the actual matter density in the Universe and the so-called critical density. If Ω ≤ 1, the Universe expands forever, whereas Ω > 1 leads to an eventual recollapse. Thus Ω = 1 is a critical point.Footnote16 It is remarkable enough that currently Ω ≈ 1 (within a few percent); what is astonishing is that this is still the case at such a high age of the Universe. The reason is that, for Ω to retain its (almost) critical value for billions of years, it must have had this value right from the very beginning to a precision of at least 55 decimal places.Footnote17

This leads us straight to Einstein’s cosmological constant Λ, which he introduced into his theory of gravity in 1917 in order to (at least theoretically) stabilize the Universe against contracting or expanding, only to delete it again in 1929 after Hubble’s landmark observation of the expansion of the Universe (famously calling its introduction his “biggest blunder”). Ironically, Λ made a comeback in 1998 as the leading theoretical explanation of the (empirical) discovery that the expansion of the Universe is currently accelerating.Footnote18 For us, the point is that even the currently accepted value of Λ remains very close to zero, whereas according to (quantum field) theory it should be about 55 (some even say 120) orders of magnitude larger (Martin 2012). This is often seen as a fine-tuning problem, because some compensating mechanism must be at work to cancel its very large natural value with a precision of (once again) 55 decimal places.Footnote19

The fine-tuning of all numbers considered so far seems to be dwarfed by a knock-down FTA given by Roger Penrose (1979, 2004), who claims that in order to produce a Universe that even very roughly looks like ours, its initial conditions (among some generic set) must have been fine-tuned with a precision of one in 10^(10^123), arguably the largest number ever conceived: all atoms in the Universe would not suffice to write it out in full.Footnote20 Penrose’s argument is an extreme version of an idea originally due to Boltzmann, who near the end of the 19th Century argued that the direction of time is a consequence of the increase of entropy in the future but not in the past,Footnote21 which requires an extremely unlikely initial state (Price 1997; Uffink 2007; Lebowitz 2008). However, this kind of reasoning is as brilliant as it is controversial (Callender 2004, 2010; Earman 2006; Eckhardt 2006; Wallace 2010; Schiffrin and Wald 2012). More generally, the more extreme the asserted fine-tuning is, the more adventurous the underlying arguments are (or so we think).

To be on the safe side, the fine-tuning of Ω, Λ, and Penrose’s initial condition should perhaps be ignored, leaving us with the other examples, and a few similar ones not discussed here. But these should certainly suffice to make a case for fine-tuning that is serious enough to urge the reader to at least place a bet on one of the five options listed above.

General Arguments

Before turning to a specific discussion of the Design and the Multiverse proposals, we make a few critical (yet impartial) remarks that put the FTA in perspective (see also Sober 2004; Manson 2009). Adherents of the FTA typically use analogies like the following:

  • Someone lays out a deck of 52 cards after it has been shuffled. If the cards emerge in some canonical order (e.g., the Ace of Spades down to 2, then the Ace of Hearts down to 2, etc.), then, on the tacit assumption that each outcome is equally (un)likely, this very particular outcome supposedly cannot have been due to ‘luck’ or chance.

  • Alternatively, if a die is tossed a large number of times and the number 6 comes up every time, one would expect the die to be loaded, or the person who cast it to be a very skillful con man. Once again, each outcome was assumed equally likely.
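
To see how small the probabilities invoked by these analogies are under the equal-likelihood assumption, here is a minimal sketch in Python (the numbers are purely illustrative and not taken from the text):

```python
from math import factorial

# Probability of one specific ordering of a shuffled 52-card deck,
# assuming all 52! orderings are equally likely.
p_one_deck_order = 1 / factorial(52)
print(f"P(one specific deck order) ~ {p_one_deck_order:.2e}")   # ~ 1.2e-68

# Probability that a fair die shows 6 on every one of n tosses.
n = 20   # illustrative number of tosses (our choice, not from the text)
p_all_sixes = (1 / 6) ** n
print(f"P({n} sixes in a row) ~ {p_all_sixes:.2e}")              # ~ 2.7e-16
```

The analogies take such tiny numbers to rule out chance; the remarks below question whether this reasoning transfers to the constants of Nature and the initial conditions of the Universe.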

First, there is an underlying assumption in the FTA to the effect that the ‘constants’ of Nature as well as the initial conditions of the Universe (to both of which the emergence of life is allegedly exquisitely sensitive) are similarly variable. This may or may not be the case; the present state of science is not advanced enough to decide between chance and necessity concerning the laws of nature and the beginning of the Universe.Footnote22

Second, granted that the ‘constants’ etc. are variable in principle (in the sense that values other than the current ones preserve the existence and consistency of the theories in which they occur), it is quite unclear to what extent they can vary and which variations may be regarded as ‘small’; yet the FTA relies on the assumption that even ‘small’ variations would block the emergence of life (Manson 2000). In the absence of such information, it would be natural to allow any (real, positive where appropriate) value, but in that case mathematical probabilistic reasoning (which the FTA needs in order to say that the current values are ‘unlikely’) turns out to be impossible (McGrew et al. 2001; Colyvan et al. 2005; Koperski 2005).Footnote23 But even if only a large but finite number of values (per constant or initial condition) needs to be taken into account, it is hard to assign any kind of probability to the alternative values; even the assumption that each value is equally likely seems totally arbitrary (Everitt 2004; Norton 2010).
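
A minimal way to see the normalizability problem behind the first of these objections (our gloss on the cited literature): a ‘uniform’ distribution over an unbounded range of parameter values, say with constant density c on (0, ∞), would have total probability

∫₀^∞ c dx = ∞ if c > 0 (and 0 if c = 0),

so it can never be normalized to 1; hence there is no well-defined sense in which the actual value of such a constant is ‘unlikely’.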

Nonetheless, these problems may perhaps be overcome, and in any case, for the sake of argument, we will continue to use the metaphors that opened this section.

Critiquing the Inference of Design from Fine-Tuning

The idea that cosmic fine-tuning originates in design by something like an intelligent Creator fits into a long-standing Judeo-Christian tradition, in which both the Cosmos and biology were explained in that way.Footnote24 Now that biology has yielded to the theory of Evolution proposed by Darwin and Wallace in the mid 19th Century,Footnote25 the battleground has apparently moved back to the cosmos. There, too, Design remains a vulnerable idea.Footnote26 For the sake of argument we do not question the coherence of the idea of an intelligent Creator as such, although such a spirit seems chimerical (Everitt 2004; Philipse 2012).

First, in slightly different ways Smith (1993) and Barnes (2012) both made the point that the FTA does not claim, or support the conclusion, that the present Universe is optimal for intelligent life. Indeed, it hardly seems to be: even granted all the fine-tuning in the world, as well as the existence of our Earth with its relatively favorable conditions (Ward and Brownlee 2000), evolution has been walking a tightrope so as to produce so much as jellyfish, not to speak of primates (Dawkins 1996). This fact alone casts doubt on the FTA as an Argument of Design, for surely a benign Creator would prefer a Universe optimal for life over one that narrowly permits it? From a theistic perspective it would seem far more efficient to have a cosmic architecture that is robust for its designated goal.

Second, the inference to Design from the FTA seems to rest on a decisive tacit assumption whose exposure substantially weakens this inference (Bradley 2001). The cards analogy presupposes that there is such a thing as a canonical order; if there weren’t, then any particular outcome would be thought of in the same way and would of course be attributed to chance. Similarly, the dice metaphor presupposes that it is special for 6 to come up every single time; probabilistically speaking, every other outcome would have been just as (un)likely as the given sequence of sixes.Footnote27 And once again, in the case of independently tunable constants of Nature and/or initial conditions, one (perhaps approximate) value of each of these must first be marked with a special label like ‘life-permitting’ in order for the analogy with cards or dice (and hence the appeal of the FTA) to work. The FTA is predicated on such marking, which already presupposes that life is special.

It is irrelevant to this objection whether or not life is indeed special; the point is that the assumption that life is special has to be made in addition to the FTA in order to put the latter on track towards Design. But the inference from the (assumed) speciality of life to Design hardly needs the FTA: even if all values of the constants and initial conditions led to a life-permitting Universe, those who think that life is special would presumably still point to a Creator. In fact, by both the arguments recalled at the beginning of this section and those below, their case would then actually be considerably stronger than one based on the FTA.

In sum, fine-tuning is not by itself sufficient as a source for an Argument of Design; what is needed is its combination with the assumption that life is somehow singled out, preferred, or special. But that assumption is the one that carries the inference to Design, and the moment one makes it, fine-tuning seems counter-productive rather than helpful.

Attempts to give the Design Argument a quantitative turn (Swinburne 2004; Collins 2009) make things even worse (Bradley 2002; Halvorson 2014). Such attempts are typically based on Bayesian Confirmation Theory. This is a mathematical technique for the analysis and computation of the probability P(H|E) that a given hypothesis H is true in the light of certain evidence E (which may speak for or against H, or may be neutral). Almost every argument in Bayesian Confirmation Theory is ultimately based on Bayes’ Theorem

P(H|E) = P(E|H) · P(H) / P(E),

where P(E|H) is the probability that E is true given the truth of the hypothesis H, whilst P(H) and P(E) are the probabilities that H and E are true without knowing E and H, respectively (but typically assuming certain background knowledge common to both H and E, which is very important but has been suppressed from the notation).Footnote28

In the case at hand, theists want to argue that the Universe being fine-tuned for Life makes Design more likely, i.e., that P(D|L) > P(D), or, equivalently, that P(L|D) > P(L) (that is, Design favors life). The problem is that theists do not merely ask for the latter inequality; what they really believe is that P(L|D) ≈ 1, for the existence of God should make the emergence of life almost certain.Footnote29 For simplicity, first assume that P(L|D) = 1. Bayes’ Theorem then gives P(D|L) = P(D)/P(L), whence P(D) ≤ P(L) (since P(D|L) ≤ 1). More generally, assume P(L|D) ≥ 1/2, or, equivalently, P(L|D) ≥ P(¬L|D), where ¬L is the proposition that life does not exist. If (D,L) is the conjunction of D and L, we then have

P(D) = P(D,L) + P(D,¬L) ≤ 2P(D,L) ≤ 2P(L),

since P(D,¬L) ≤ P(D,L) by assumption. Thus a negligible prior probability of life (on which assumption the FTA is based!) implies a hardly less negligible prior probability of Design. This inequality makes the Argument from Design self-defeating as an explanation of fine-tuning, but in any case, both the interpretation and the numerical value of P(D) are so obscure and ill-defined that the whole discussion seems, well, scholastic.
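
The inequality P(D) ≤ 2P(L) can also be checked numerically. The following sketch (ours, purely illustrative and not part of the cited literature) samples random joint distributions over Design and Life, keeps those satisfying P(L|D) ≥ 1/2, and verifies the bound in every case:

```python
import random

def random_joint():
    """Random joint distribution over (D,L), (D,not L), (not D,L), (not D,not L)."""
    w = [random.random() for _ in range(4)]
    s = sum(w)
    return [x / s for x in w]

checked = 0
while checked < 100_000:
    p_DL, p_DnotL, p_notDL, p_notDnotL = random_joint()
    if p_DL < p_DnotL:             # enforce P(L|D) >= P(not L|D), i.e. P(L|D) >= 1/2
        continue
    p_D = p_DL + p_DnotL
    p_L = p_DL + p_notDL
    assert p_D <= 2 * p_L + 1e-12  # the bound P(D) <= 2 P(L) derived in the text
    checked += 1

print(f"P(D) <= 2 P(L) held in all {checked} sampled cases with P(L|D) >= 1/2")
```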

Critiquing the Inference of a Multiverse from Fine-Tuning

The idea of Design may be said to be human-oriented in a spiritual way, whereas the idea of a Multiverse more technically hinges on the existence of observers, as expressed by the so-called (weak) Anthropic Principle (Barrow and Tipler 1986; Bostrom 2002). The claim is that there are innumerable Universes (jointly forming a ‘Multiverse’), each having its own ‘constants’ of Nature and initial conditions, so that, unlikely as the life-inducing values of these constants and conditions in our Universe may be, they simply must occur within this unfathomable plurality. The point, then, is that we have to observe precisely those values because in other Universes there simply are no observers. This principle has been labeled both ‘tautological’ and ‘unscientific’. Some love it and some hate it, but we do not need to take sides in this debate: all we wish to do is find out whether or not the FTA speaks in favour of a Multiverse, looking at both an explanatory and a probabilistic level. Thus the question is whether the (alleged) fact of fine-tuning is (at least to some extent) explained by a Multiverse, or whether, in the context of Bayesian Confirmation Theory, the evidence of fine-tuning increases the probability of the hypothesis that a Multiverse exists.Footnote30 To get the technical discussion going, the following metaphors have been used:

  • Rees’s ‘off the shelf’ clothes shop has already been mentioned in the Introduction: if someone enters a shop that sells suits in one size only (i.e., a single Universe), it would be amazing if it fitted (i.e., enabled life). However, if all sizes are sold (in a Multiverse, that is), the client would not at all be surprised to find a suit that fits.

  • Leslie’s (1989) firing squad analogy imagines someone who is to be executed by a firing squad consisting of many marksmen, but they all miss. This amounts to fine-tuning for life in a single Universe. The thrust of the metaphor arises when the lucky executee is the sole survivor among a large number of other convicts, most or all of whom are killed (analogously to the other branches of the Multiverse, most or all of which are inhospitable to life). The idea is that although each convict had a small a priori probability of not being hit, if there are many of them, these small individual probabilities of survival add up to a large probability that someone survives.

  • Bradley (2009, 2012) considers an urn that is filled according to a random procedure:

    • If a coin flip gives Heads (corresponding to a single Universe), either a small ball (life) or a large one (no life) is put into the urn (depending on a further coin flip).

    • In case of Tails (modeling a ‘Binaverse’ for simplicity), two balls enter the urn, whose sizes depend on two further coin flips (leaving four possibilities).

Using a biased drawing procedure that could only yield either a small ball or nothing, a small ball is obtained (playing the role of a life-enabling Universe). A simple Bayesian computation shows that this outcome confirms Tails for the initial flip.
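
For concreteness, here is a minimal simulation of this urn model (our reading of Bradley’s setup, with the biased drawing procedure modeled as: a small ball is drawn whenever the urn contains at least one, and nothing is drawn otherwise); it confirms that obtaining a small ball raises the probability of Tails from 1/2 to about 3/5:

```python
import random

def trial():
    """One run of the urn model: returns (initial flip, whether a small ball is drawn)."""
    flip = random.choice(["Heads", "Tails"])
    n_balls = 1 if flip == "Heads" else 2        # single Universe vs. 'Binaverse'
    balls = [random.choice(["small", "large"]) for _ in range(n_balls)]
    # Biased drawing procedure (our assumption): yields a small ball
    # if the urn contains at least one, otherwise nothing.
    return flip, "small" in balls

N = 200_000
results = [trial() for _ in range(N)]
small_draws = [flip for flip, drew_small in results if drew_small]
p_tails_given_small = small_draws.count("Tails") / len(small_draws)
print(f"P(Tails | small ball drawn) ~ {p_tails_given_small:.3f}")   # ~ 0.6 > 0.5
```

On the alternative reading discussed below, in which Tails fills two separate urns with one ball each and the draw is made from one fixed urn, the same computation gives 1/2 for either flip, so that nothing is confirmed.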

Each of these stories is insightful and worth contemplating. For example, the first one nicely contrasts the Multiverse with Design, which would correspond to bespoke tailoring and hence, at least from a secular point of view, commits the fallacy of putting the customer (i.e., life) first, instead of the tailor (i.e., the Universe as it is). The Dostoyevskian character of the second highlights the Anthropic Principle, whose associated selection effects (Bostrom 2002) are also quantitatively taken into account by the third.

Nonetheless, on closer inspection each is sufficiently vulnerable to fail to clinch the issue in favour of the Multiverse. One point is that although each author is well aware of (and the second and the third even respond to) the Inverse Gambler’s Fallacy (Hacking 1987),Footnote31 this fallacy is not really avoided (White 2000). In its simplest version, this is the mistake made by a gambler who enters a casino or a pub, notices that a double six is thrown at some table, and asks if this is the first roll of the evening (his underlying false assumption being that this particular outcome is more likely if many rolls preceded it). Despite claims to the contrary (Leslie 1988; Manson and Thrush 2003; Bradley 2009, 2012), Hacking’s analysis, according to which this is precisely the error made by those who favor a Multiverse based on the FTA, still stands in our opinion. For example, in Rees’s analogy of the clothes shop, what needs to be explained is not that some suit in the shop turns out to fit the customer, but that the one he happens to be standing in front of does. Similarly, the probability that a given executee survives is independent of whoever else is going to be shot in the same round. And finally, the relevant urn metaphor is not the one described above, but the one in which Tails leads to the filling of two different urns with one ball each. Proponents of a Multiverse correctly state that its existence would increase the probability of life existing in some Universe,Footnote32 but this is only relevant to the probability of life in this Universe if one identifies any Universe with the same properties as ours with our Universe.Footnote33 Such an identification may be suggested by the (weak) Anthropic Principle, but it is by no means implied by it, and one should realize that the inference of a Multiverse from the FTA implicitly hinges on this additional assumption.Footnote34
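
The Inverse Gambler’s Fallacy can itself be illustrated numerically. In the sketch below (ours, purely illustrative), the probability that the roll the gambler happens to observe is a double six stays at 1/36 no matter how many rolls preceded it; only the probability that some roll of the evening was a double six grows with their number:

```python
import random

def evening(n_rolls):
    """Simulate an evening of n_rolls rolls of a pair of fair dice."""
    return [(random.randint(1, 6), random.randint(1, 6)) for _ in range(n_rolls)]

N = 50_000
for n_rolls in (1, 10, 100):
    observed_is_66 = some_is_66 = 0
    for _ in range(N):
        rolls = evening(n_rolls)
        observed_is_66 += rolls[-1] == (6, 6)            # the roll the gambler walks in on
        some_is_66 += any(r == (6, 6) for r in rolls)    # any roll of the evening
    print(f"{n_rolls:3d} rolls: P(observed roll is 6-6) ~ {observed_is_66 / N:.3f}, "
          f"P(some roll was 6-6) ~ {some_is_66 / N:.3f}")
# The first column stays near 1/36 ~ 0.028; only the second grows with n_rolls.
```

On Hacking’s reading, the step from ‘life in this Universe’ to ‘life in some Universe’ trades on exactly this substitution.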

Moving from a probabilistic to an explanatory context, we follow Mellor (2002) in claiming that if anything, a Multiverse would make fine-tuning even more puzzling. Taking the firing squad analogy, there is no doubt that the survival of a single executee is unexpected, but the question is whether it may be explained (or, at least, whether it becomes less unexpected) by the assumption that simultaneously, many other ‘successful’ executions were taking place. From the probabilistic point of view discussed above, their presence should have no bearing on the case of the lone survivor, whose luck remains as amazing as it was. From another, explanatory point of view, it makes his survival even more puzzling, since we now know from this additional information about the other executions that apparently the marksmen usually do kill their victims.

Conclusion

Already the uncontroversial examples that feed the FTA suffice to produce the fascinating insight that the formal structure of our current theories of (sub)nuclear physics and cosmology (i.e., the Standard Model of particle physics and Einstein’s theory of General Relativity) is insufficient to predict the rich phenomenology these theories give rise to: the precise values of most (if not all) constants and initial conditions play an equally decisive role. This is a recent insight: even a physicist of the stature of Nobel Laureate Glashow (1999, p. 80) got this wrong, even though he set up the analogy well:

“Imagine a television set with lots of knobs: for focus, brightness, tint, contrast, bass, treble, and so on. The show seems much the same whatever the adjustments, within a large range. The standard model is a lot like that.

Who would care if the tau lepton mass were doubled or the Cabibbo angle halved? The standard model has about 19 knobs. They are not really adjustable: they have been adjusted at the factory. Why they have their values are 19 of the most baffling metaquestions associated with particle physics.”

In our view, the insight that the standard model is not like that at all is the real upshot of the FTA.Footnote35 Attempts to draw further conclusions from it in the direction of either Design or a Multiverse are, in our opinion, unwarranted. For one thing, as we argued, at best they fail to have any explanatory or probabilistic thrust (unless they rely on precarious additional assumptions), and at worst fine-tuning actually seems to turn against them.

Most who agree with this verdict would probably feel left with a choice between the options of Blind Chance and Blind Necessity; the present state of science does not allow us to make such a choice now (at least not rationally), and the question even arises whether science will ever be able to make it (in a broader context), except perhaps philosophically (e.g., à la Kant). However, we would like to make a brief case for the fifth position, which states that the fine-tuning problem is misguided and that all we need to do is clear away confusion.

There are analogies and differences between cosmic fine-tuning for life through the laws of Nature and the initial conditions of the Universe, as discussed so far, and Evolution in the sense of Darwin and Wallace. The latter is based on random (genetic) variation, survival of the fittest, and heritability of fitness. All of these are meant to apply locally, i.e., to life on Earth. Some authors, e.g., Wheeler (in Ch. 44 of Misner et al. 1973) and Smolin (1997), have extended these principles to the Universe itself, in the sense that the Cosmos may undergo some kind of ‘biological’ evolution, having descendants born in singularities, perhaps governed by different laws and initial conditions (some of which, then, might be ‘fine-tuned for life’, as in the Multiverse argument). Imaginative as such arguments may be, we personally feel they are too speculative to merit serious discussion. Instead, the true analogy seems to be as follows: as far as the emergence and subsequent evolution of life are concerned, the Universe and our planet Earth should simply be taken as given. Thus the fundamental reason we feel ‘fine-tuning for life’ requires no explanation is thisFootnote36:

Our Universe has not been fine-tuned for life: life has been fine-tuned to our Universe.