Neurofeedback: Can you regulate your own brain to cure depression?

As happens not infrequently, the other day I was shown an article on brain research from The Wall Street Journal: “Brain Training for Anxiety, Depression and Other Mental Conditions”. Initially, I simply refused to read it, but this was met with a verbal description of the article that continued until I said “fine, I’ll read it and write a blog post about why it’s wrong.” This, O Best Beloved, is that blog post.

The WSJ article concerned something called “neurofeedback”, and focused mainly on the study “Real-Time fMRI Neurofeedback Training of Amygdala Activity in Patients with Major Depressive Disorder”. In short, neurofeedback involves some kind of neuroimaging equipment that allows participants to see a measure of their brain function in real time and attempt to alter it (e.g., increase or decrease neural activity in some region). This is done via different cognitive methods, such as focusing on certain memories, and then seeing (in real time) whether the desired changes in brain function occur. Many of the claims made in the WSJ article are consistent with those made in the research literature, including the paper linked to above. That is, many researchers claim that neurofeedback methods can help with a variety of mental health issues.
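
To make the mechanics concrete, here is a minimal sketch (mine, not from the study) of what a real-time neurofeedback loop amounts to: extract a signal from a target region, compare it to a baseline, and show the participant a “thermometer” they try to move. The scanner and display functions below are made-up stand-ins; actual rtfMRI systems (OpenNFT, AFNI’s real-time plugin, and the like) supply their own interfaces.

```python
import numpy as np

# Hypothetical stand-ins for the scanner and stimulus-display interfaces.
def acquire_volume():
    """Pretend to receive one fMRI volume (a 3D array of voxel intensities)."""
    return np.random.rand(64, 64, 30)

def show_thermometer(level):
    """Pretend to update the feedback display the participant sees."""
    print(f"feedback bar: {'#' * max(0, int(level * 20))}")

# Binary mask selecting the target region (e.g., left amygdala),
# assumed here to have been defined beforehand from a localizer scan.
roi_mask = np.zeros((64, 64, 30), dtype=bool)
roi_mask[30:34, 30:34, 12:15] = True

baseline = None
for t in range(20):                        # one short regulation block
    vol = acquire_volume()
    roi_signal = vol[roi_mask].mean()      # mean BOLD signal in the ROI
    if t < 5:                              # first few volumes establish a baseline
        baseline = roi_signal if baseline is None else 0.8 * baseline + 0.2 * roi_signal
        continue
    # Percent signal change relative to baseline is what gets fed back.
    feedback = (roi_signal - baseline) / baseline
    show_thermometer(feedback)
```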

However, according to the article, neurofeedback is superior to other therapies (e.g., medication or psychotherapy) because it doesn’t have the side-effects of pills and it “directly targets the brain dysfunctions and emotional and cognitive processes that are understood to underlie psychiatric disorders.” First, it doesn’t “directly target brain dysfunctions” at all. Nobody is really sure what “brain dysfunctions” underlie any mental disorders (i.e., those in the DSM-5), and some don’t think any exist. Second, to the extent we have identified brain function correlated with particular mental disorders like major depressive disorder (MDD), medication is vastly more direct than neurofeedback. In fact, neurofeedback doesn’t differ that much from talk therapy. It’s been known for years that simple changes in one’s thinking, from therapy to mental imagery, change brain function (see e.g., How psychotherapy changes the brain – the contribution of functional neuroimaging; Mind does really matter: Evidence from neuroimaging studies of emotional self-regulation, psychotherapy, and placebo effect; & “Change the mind and you change the brain”: effects of cognitive-behavioral therapy on the neural correlates of spider phobia).

Finally, it’s highly questionable whether neurofeedback is anything more than a placebo effect. Most studies (like the one the article describes) don’t use adequate controls and are instead more concerned with “proof-of-concept”. This study used better controls than many, but even the authors admit their sample sizes were too small (and the control group, n=7, was half the size of the experimental group). Also, the differences between the two groups weren’t that large. Once again, (null) hypothesis significance testing reared its ugly head, telling us that even though both groups seemed to have improved across the board just by focusing on happy memories, the slightly greater improvement on some measures in the experimental group legitimizes calling those changes significant (and not the improvements experienced by the control group).
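
To see why this sort of result invites skepticism, here is a toy simulation. The group sizes roughly match the study’s design (about 14 vs. 7), but the effect sizes are invented purely for illustration; the point is how fragile “significance” is when both groups improve and the samples are this small.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def one_study(n_exp=14, n_ctrl=7):
    # Both groups improve (positive mean change); the experimental group
    # improves slightly more. These numbers are made up for illustration.
    exp_change = rng.normal(loc=1.0, scale=1.5, size=n_exp)
    ctrl_change = rng.normal(loc=0.5, scale=1.5, size=n_ctrl)
    t, p = stats.ttest_ind(exp_change, ctrl_change)
    return p

p_values = np.array([one_study() for _ in range(5000)])
print(f"Proportion of 'significant' group differences: {(p_values < .05).mean():.2f}")
# With n = 14 vs. 7 and a modest true difference, only a minority of such
# studies cross p < .05 -- whether any single study does is largely luck,
# and "significant" says nothing about the control group's own gains.
```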

Unlike more traditional neurofeedback research, many modern studies (including the one in question) use a new fMRI technique. Most fMRI studies, such as those I’ve performed, don’t provide any immediate feedback signal of the type needed for neurofeedback methods. Instead, you get hundreds of small, black-and-white images that tell you nothing much apart from whether or not the participant is moving around too much, screwing up the imaging. All the nice, neat pictures with colors to indicate “significant” activity are added later using signal processing and statistical methods. More recently, however, “real-time” fMRI (rtfMRI) has come to play a major role in neurofeedback research, as it allows feedback on the effect of cognitive processes on brain activity as it happens. Despite the promise of this new method, there is little evidence to warrant the optimism found in the WSJ article. A great deal of interest was sparked by deCharms and co-authors, in particular by a 2005 study, “Control over brain activation and pain learned by using real-time functional MRI”. However: “In 2005, deCharms et al. published an fMRI-nf study that employed a careful design and reported robust findings, sparking enthusiasm for this seemingly promising technique…This well-controlled study remains the strongest piece of evidence supporting fMRI-nf as an effective tool for self-regulating the brain and improving clinical conditions. However, the impact of this one promising study has become shrouded by decade-long skepticism; question marks have turned into exclamation points after a string of independent replication efforts, including by the original authors, was unable to corroborate the reported findings” (emphasis added; a draft of the actual, peer-reviewed paper may be found here).

A better sense of the promise of neurofeedback for depression, and for mental disorders more generally, can be found in the conclusion of a 2016 literature review: “While neurofeedback appears to help some participants gain the capacity for brain modulation, the relative contribution of specific feedback compared to ulterior factors remains unclear. At the moment, sparse behavioral measures, little follow-up sessions, and many methodological caveats preclude formal endorsement of neurofeedback as a clinical treatment vehicle. Although the jury is still out, additional judicious experiments and more compelling findings will have to further demonstrate the seductive, albeit yet unconfirmed, clinical promise of neurofeedback.” (The self-regulating brain and neurofeedback: Experimental science and clinical promise).


Quantum Cognition: Physics Envy in Neuroscience and Psychology

The idea that quantum physics is somehow relevant to consciousness or the “mind” is pretty widely known. After all, in addition to a plethora of popular books by authors with questionable expertise and/or knowledge, eminent physicists such as Sir Roger Penrose and Henry Stapp have supported this idea. This post isn’t about quantum theories of mind (which I don’t find persuasive). It’s about the large number of papers that begin by explicitly setting such theories aside, e.g., “We note that this article is not about the application of quantum physics to brain physiology.”; “In our approach “quantumness of mind” has no direct relation to the fact that the brain (as any physical body) is composed of quantum particles”; etc.

Simply put, the idea is that cognitive psychologists, neuroscientists, etc., should use the mathematical framework, notation, and terminology found in quantum physics (in particular, quantum mechanics) to model things like decisions, opinions, attitudes, and other cognitive processes or mental states. The reason? Well, it doesn’t take much thought to realize that, e.g., the mood “happy” isn’t binary. That is, a person isn’t simply either “happy” or “not happy”; rather, there are degrees of happiness and sadness. The same is true of things like political orientation, value judgments, and in general all the kinds of things that psychologists and neuroscientists studying the mind are interested in measuring. Also, not only are most of these mental states and cognitive processes not binary or discrete, they also don’t really lie along a single continuum. For example, a person can be happy-excited, or happy-euphoric, or happy-content. So a person’s mental “state” with respect to some attribute, mood, etc., is really more like a composite of states that can’t be cleanly separated (none of which can be simply encapsulated by a true/false or yes/no measure).

Many people who have never opened a physics text have nonetheless heard of quantum superposition. The sensationalist, simplistic version of this phenomenon is that a quantum system can exist in multiple distinct states at the same time (for example, it can be described as being in more than one place at once, as moving in incompatible ways, etc.). As in classical physics (and biology, chemistry, and so forth), systems are modelled using mathematics, which means that in quantum mechanics the mathematics must be able to represent systems in superposition states and must “work” according to the logic of quantum mechanics (which, unlike classical logic, can allow a statement to be true and false at once, among other apparent paradoxes).
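
If you want to see how little machinery is actually involved, here is a toy sketch of my own (ordinary linear algebra, nothing quantum-specific) of the representation these papers have in mind: a “mental state” as a unit vector over possible answers, with outcome probabilities given by squared amplitudes (the Born rule).

```python
import numpy as np

# Basis "outcomes" for a toy mood question.
outcomes = ["happy-excited", "happy-content", "unhappy"]

# A "state" is just a unit vector of amplitudes over those outcomes.
state = np.array([0.6, 0.7, 0.2])
state = state / np.linalg.norm(state)          # normalize to length 1

# Born rule: the probability of each outcome is its squared amplitude.
probs = state ** 2
for outcome, p in zip(outcomes, probs):
    print(f"P({outcome}) = {p:.2f}")

# "Measuring" (answering the question) collapses the state onto one outcome.
observed = np.random.choice(len(outcomes), p=probs)
collapsed = np.zeros_like(state)
collapsed[observed] = 1.0
```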

Thus dozens and dozens of papers in many journals written by many different authors have argued that the mathematical formalisms of quantum mechanics should be used to represent things like attitudes in cognitive neuroimaging studies, behavioral studies, and other research on cognition. For example:

“Superposition, entanglement, incompatibility, and interference are all related aspects of QP theory, which endow it with a unique character. Consider a cognitive system, which concerns the cognitive representation of some information about the world… Questions posed to such systems (“Is Linda feminist?”) can have different outcomes (e.g., “Yes, Linda is feminist”). Superposition has to do with the nature of uncertainty about question outcomes. The classical notion of uncertainty concerns our lack of knowledge about the state of the system that determines question outcomes. In QP theory, there is a deeper notion of uncertainty that arises when a cognitive system is in a superposition among different possible outcomes. Such a state is not consistent with any single possible outcome”

Pothos, E. M., & Busemeyer, J. R. (2013). Can quantum probability provide a new direction for cognitive modeling?. Behavioral and Brain Sciences, 36(03), 255-274.

Even better:

“There is one obvious similarity between cognitive science and quantum physics: both deal with observations that are fundamentally probabilistic. This similarity makes the use of QT in cognitive science plausible, as QT is specifically designed to deal with random variables. Here, we analyze the applicability of QT in opinion-polling, and compare it to psychophysical judgments.”

Khrennikov, A., Basieva, I., Dzhafarov, E. N., & Busemeyer, J. R. (2014). Quantum Models for Psychological Measurements: An Unsolved Problem. PLoS One, 9(10).
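
The formal machinery behind such claims is, again, just linear algebra. Here is a toy sketch (my own invented angles and numbers, not a model from either paper) of the kind of effect these models are built around: when two yes/no questions are represented as projections onto different axes, asking them in different orders yields different probabilities, which is how quantum models accommodate question-order effects in opinion polling.

```python
import numpy as np

def projector(theta):
    """Projection onto the 1-D subspace spanned by (cos theta, sin theta)."""
    v = np.array([np.cos(theta), np.sin(theta)])
    return np.outer(v, v)

# Two "questions", each represented by a projection onto its "yes" subspace
# (angles chosen arbitrarily for illustration).
A = projector(0.0)
B = projector(np.pi / 5)

# Initial belief state: a unit vector.
psi = np.array([np.cos(1.0), np.sin(1.0)])

# Probability of "yes" to A then "yes" to B, and vice versa:
p_ab = np.linalg.norm(B @ A @ psi) ** 2
p_ba = np.linalg.norm(A @ B @ psi) ** 2
print(f"P(yes to A, then yes to B) = {p_ab:.3f}")
print(f"P(yes to B, then yes to A) = {p_ba:.3f}")
# Because A and B do not commute (A @ B != B @ A), the two orders give
# different probabilities -- the "quantum" signature these papers invoke.
```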

It may sound as if describing attitudes or reasoning in terms of quantum theory for the reasons given is natural, even necessary. After all, whatever the biological mechanisms underlying cognitive processes, experiments on cognition can’t get very far if they are limited to what can be explained by neurobiology. So we have to measure beliefs, attitudes, and similar mental states and processes by asking participants questions, and the outcome is fundamentally probabilistic. Also, there is something like the uncertainty principle at play, in that measurements will always involve uncertainty (including uncertainty about exactly what is being measured).

There’s one little problem: there’s nothing special about the mathematics used in quantum mechanics (in fact, the mathematical formalism of quantum mechanics is incompatible with special relativity, so field theories like quantum electrodynamics rely on a rather fundamentally different mathematical framework). It’s true that quantum mechanics utilizes some unique notation, called Dirac notation, that one can now find strewn across papers on mathematical psychology, cognitive science, social psychology, etc. It’s also true that this notation was developed specifically for use in quantum physics. But were there alternative mathematical formalisms prior to Dirac’s creation? Sure. Is Dirac notation superior? That depends: “Mathematicians tend to despise Dirac notation, because it can prevent them from making important distinctions, but physicists love it, because they are always forgetting that such distinctions exist and the notation liberates them from having to remember.” (N. David Mermin). I hated Dirac notation because I was already quite familiar with the more powerful and more widely used notation for complex vector spaces, matrices, Hilbert space, and the other concepts that Dirac notation is used to represent (systems in quantum mechanics are represented by vectors in Hilbert space; if you don’t know what that means, don’t worry about it). The new notation encouraged things which I had been repeatedly warned in textbooks and by professors never to do (e.g., writing a vector as a row), because such notational violations can make mathematical operations produce incorrect results, confuse coefficients with their variables, and in general result in a mess. However, because quantum mechanics doesn’t require the full power of abstract algebras and functional spaces, physicists can get by without the kind of rigor mathematicians demand. The problem is that this notation was developed specifically to fit the kinds of measurement outcomes and experiments of quantum mechanics, not uncertainty, superposition, or probabilistic outcomes in general.
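
To make the “nothing special” point concrete, here is the same generic two-dimensional state and inner product written both ways (my own example, not taken from any of the papers above):

```latex
% The same state and inner product, once in Dirac notation and once in
% ordinary linear-algebra notation for a complex vector space:
\[
  |\psi\rangle = \alpha\,|0\rangle + \beta\,|1\rangle
  \quad\Longleftrightarrow\quad
  \psi = \alpha\,e_1 + \beta\,e_2,
  \qquad \alpha,\beta\in\mathbb{C},\quad |\alpha|^2+|\beta|^2=1,
\]
\[
  \langle\phi|\psi\rangle
  \quad\Longleftrightarrow\quad
  \langle \phi,\psi\rangle = \overline{\phi_1}\,\psi_1 + \overline{\phi_2}\,\psi_2 .
\]
```

Nothing on the right-hand sides requires quantum mechanics; it is the ordinary inner product on a two-dimensional complex vector space, covered in any linear algebra course.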

In fact, the statement that both “cognitive science and quantum physics…deal with observations that are fundamentally probabilistic” is idiotic. Statistical mechanics is “fundamentally probabilistic”, and so is (surprise!) probability theory. The only thing special about probability in quantum mechanics is that probabilities can’t be assigned directly but are derived from wave amplitudes (things can get more complicated, but probability is still probability). There isn’t some quantum normal distribution; quantum mechanics still makes use of the familiar ol’ bell-shaped curves used for well over a hundred years. Hilbert space, so fundamental to quantum mechanics (quantum states “exist” in this space), was developed by a mathematician for mathematics and continues to be used by mathematicians.

So what is behind the papers cited below as examples of this trend? Psychologists trying to act as though they’re doing physics, and the trendiness of “quantum weirdness”, i.e., a desire to be just as hardcore and cool as theoretical physicists. Here are some of the pointless results from this physics envy (you have to love the prima facie ridiculousness of the paper arguing that the “mental lexicon” displays quantum-like “spooky-action-at-a-distance”):

Aerts, D. (2009). Quantum structure in cognition. Journal of Mathematical Psychology, 53(5), 314-348.

Aerts, D., Broekaert, J., & Gabora, L. (2011). A case for applying an abstracted quantum formalism to cognition. New Ideas in Psychology, 29(2), 136-146.

Ashtiani, M., & Azgomi, M. A. (2015). A survey of quantum-like approaches to decision making and cognition. Mathematical Social Sciences, 75, 49-80.

De Barros, J. A. (2012). Quantum-like model of behavioral response computation using neural oscillators. Biosystems, 110(3), 171-182.

Bruza, P., Kitto, K., Nelson, D., & McEvoy, C. (2009). Is there something quantum-like about the human mental lexicon?. Journal of Mathematical Psychology, 53(5), 362-377.

Busemeyer, J. R., & Bruza, P. D. (2012). Quantum Models of Cognition and Decision. Cambridge University Press.

Khrennikov, A., Basieva, I., Dzhafarov, E. N., & Busemeyer, J. R. (2014). Quantum Models for Psychological Measurements: An Unsolved Problem. PLoS One, 9(10).

Pothos, E. M., & Busemeyer, J. R. (2013). Can quantum probability provide a new direction for cognitive modeling?. Behavioral and Brain Sciences, 36(03), 255-274.

Pothos, E. M., Busemeyer, J. R., & Trueblood, J. S. (2013). A quantum geometric model of similarity. Psychological Review, 120(3), 679.

Wang, Z., Busemeyer, J. R., Atmanspacher, H., & Pothos, E. M. (2013). The potential of using quantum theory to build models of cognition. Topics in Cognitive Science, 5(4), 672-688.

Wang, Z., Solloway, T., Shiffrin, R. M., & Busemeyer, J. R. (2014). Context effects produced by question orders reveal quantum nature of human judgments. Proceedings of the National Academy of Sciences, 111(26), 9431-9436.


Photons aren’t real (but virtual photons are!)

My concern here is mainly with the paper

Kastner, R. E. (2014). On Real and Virtual Photons in the Davies Theory of Time-Symmetric Quantum Electrodynamics. Electronic Journal of Theoretical Physics, 11(30).

I should say at the outset that the sensationalist title of this post is meant to compensate for a corresponding lack of sensationalism in the post itself, and should not be interpreted as a disparaging (or even negative) view of Kastner’s article. I may not be a proponent of transactional interpretations of quantum physics, but neither am I a detractor (and I certainly find them more plausible than multiverse-type interpretations). Before discussing photons and virtual photons, I need to briefly explain the “Possibilist Transactional Interpretation” (PTI). This is Kastner’s own version of the “Transactional Interpretation” (TI) put forward by Dr. John G. Cramer in the 80s. Luckily, this means that there already exist sufficiently simple and concise summaries of the interpretation that PTI is based upon. See e.g.,
Summary of the Transactional Interpretation
Differences between the Transactional and the Copenhagen Interpretations (both of these are sections from the same paper by Cramer)
&
Cramer, J. G. (1986). The transactional interpretation of quantum mechanics. Reviews of Modern Physics, 58(3), 647.
Even better, a central (perhaps the central) aspect of PTI can be understood reasonably well, at least for our purposes, by understanding the possibilist part without the TI foundation. In Kastner’s words:

I wish to view as physically real the possible quantum events that might be, or might have been, experienced. So, in this approach, those possible events are real, but not actual; they exist, but not in spacetime. The actual event is the one that is experienced and that can be said to exist as a component of spacetime. I thus dissent from the usual identification of “physical” with “actual”: an entity can be physical without being actual.

Kastner, R. E. (2012). The Transactional Interpretation of Quantum Mechanics: The Reality of Possibility. Cambridge University Press.

To many this probably sounds strange or even nonsensical (to those for whom it doesn’t: either you are a physicist, or you should consult a mental health professional). How on earth can a possibility be physically real, especially if it isn’t actual? Well, it is strange and seemingly nonsensical, but that’s not Kastner’s fault; it’s the universe’s fault (I know, the classic “blame it on the cosmos” or “dog ate my homework” excuse we hear so often, but it’s true!). This is where virtual particles, specifically virtual photons, come in.

In quantum electrodynamics (and quantum field theory and particle physics more generally), “real” particles (or processes) are said to be governed by “virtual” ones. These irritating entities are rather poorly behaved. They will often refuse (without giving any good reason) to adhere to conservation laws, and worse may even decide to snub causality just for kicks!

But if they’re not real, then what are they doing in theories of physics? Can’t we just stick to things that exist and worry about what photons are doing without wondering what virtual ones “do”? (actually, technically we sometimes can, in that certain extensions of quantum mechanics may eliminate them, as with photons in quantum field theory, but in QFT/particle physics even if virtual photons are eliminated by weak gauge conditions, we’ll still have virtual gluons or virtual neutrinos, etc.)

A (very simplistic) way to think of virtual particles is that they are “virtual” because they can’t be directly observed, but are used to explain real, physical effects on what can be observed. For photons more specifically, we turn to the main paper of interest:

the situation has a much more natural explanation in the transactional picture. In that picture, the response of the absorber is what gives rise to the ‘free field’ that in the quantum domain is considered a ‘real photon’. So the ‘realness’ of the photon is defined in the transactional picture not by an infinite lifetime – which, in reality, is practically never obtained – but rather by the presence of an absorber response. This response is what can give rise (through an actualized transaction) to a real photon with the ability to convey empirically detectable energy from one place to another, which is the function of the free field. That is, the idea that a ‘real’ photon must always have exactly zero rest mass is an idealization.

My goal now is to show that this makes the idea of a physically real possibility far less bizarre than it sounds, in comparison with mainstream, basically universally accepted concepts from modern physics. In fact, it could be argued (and is, actually, by Kastner) that describing possibilities as real is better than describing rather fundamental interactions between the basic constituents of reality as imaginary, virtual, or otherwise not “real.” After all, as Richard Feynman says in his Theory of Fundamental Processes (p. 95), “In a sense every real photon is actually virtual if one looks over sufficiently long time scales.” Is it really better to draw an arbitrary line between what is and what isn’t real in physics than to treat entities in the mathematical framework that are said to mediate physical interactions, or to be otherwise causally efficacious, as “real”? Kastner, in a certain sense, is just flipping the reasoning behind the real/virtual distinction on its head. All photons are real, virtual or not, and all began as virtual photons. The difference is that “real” photons are “actualized” or “realized” possibilities, and virtual photons are not:

The virtual photon, in the transactional picture, is just this nascent possibility of a transaction between two currents—but one that was not realized. A transaction is only attainable for virtual photons that satisfy the energy and momentum conservation constraints for the initial and final states of the system. (italics in original)
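
For reference, the textbook version of the real/virtual distinction that Kastner is reworking goes roughly like this (standard QED bookkeeping, nothing specific to Davies or to PTI): a “real” photon is one whose four-momentum lies on the mass shell, while an internal (virtual) photon appears only inside a propagator and need not.

```latex
% On-shell condition for a real (massless) photon with four-momentum q
% (units with c = 1):
\[
  q^2 \;\equiv\; E^2 - \lvert\mathbf{q}\rvert^2 \;=\; 0 ,
  \qquad\text{i.e.}\quad E = \lvert\mathbf{q}\rvert ,
\]
% whereas an internal photon line contributes a propagator (Feynman gauge)
\[
  D_{\mu\nu}(q) \;=\; \frac{-i\,g_{\mu\nu}}{q^2 + i\epsilon} ,
\]
% which is integrated over all q, including off-shell values with q^2 \neq 0.
```

The propagator is summed over all momenta, on-shell or not, which is one way of seeing why the quoted passage treats the “exactly zero rest mass” requirement as an idealization.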

More generally, PTI addresses a serious ontological problem in many interpretations of quantum physics, not to mention in the language used in physics literature and discourse. Quantum theory presented (and still presents) serious challenges to our notions of what is physical, real, material, etc. In mainstream quantum physics, the relationship between mathematical representations and the physical systems and processes they represent is frequently anything but clear. Yet it has been so long since quantum mechanics first begat an epistemological crisis that there exists a certain amount of numbness to it. Most physicists today were introduced to at least some quantum mechanics as undergraduates and simply became accustomed to the divide between the formalisms and the physics, a divide that posed such problems (and not infrequently contradictory “solutions”) for Planck, Whitehead, Born, Einstein, Bohr, Popper, Heisenberg, and other foundational figures in physics and 20th-century philosophy. Otherwise confusing descriptions of virtual particles, inserted into mathematical equations to make physical models work correctly (thereby rendering them causally efficacious), became old hat. Being accustomed to such issues doesn’t solve them, though. Kastner’s paper is an example of one solution to such a problem.

I haven’t done justice to Kastner’s paper, and have done much injustice to PTI, mostly because I wanted to introduce the ontological issues in modern physics, and one fascinating proposed resolution, through a specific example, and this isn’t easy to do without either a much longer post filled with explanations or an assumption of a degree of familiarity with QED that I can’t make. The title of this post is misleading, but I thought that fitting given the misleading nature of physics language. And that is really the point: it takes a fairly technical level of familiarity with quantum physics, beyond the level of quantum mechanics, just to be familiar with how misleading terms like “particle”, “state”, “observable”, etc., are, and still more to become familiar with what they actually mean. It is easier, then, to point out examples like this one than to provide the detail necessary for a reader without the requisite background to render an informed opinion. For that, a good start would be Kastner’s book (cited above), which I highly recommend. She has another book that I believe is also on PTI, but I haven’t read it…yet.


Review of “Anthropic Bias: Observation Selection Effects in Science and Philosophy”

The Book: What it isn’t

The work reviewed here is the book Anthropic Bias: Observation Selection Effects in Science and Philosophy by Dr. Bostrom. Although it covers topics like multiverse cosmology and the anthropic principle, it differs in several ways from most books that deal with these subjects. First, it is not filled with equations and formulae as is Barrow & Tipler’s (in)famous The Anthropic Cosmological Principle, nor is it sensationalist and overly simplistic like Krauss’ A Universe From Nothing or Schroeder’s The Hidden Face of God. It’s a work of scholarship, published by Routledge (an academic publishing company) as part of the series Studies in Philosophy. It’s not the kind of book you’ll find in bookstores, nor would most, I think, find it light reading. Second, it deals with multiverse theory and the anthropic principle secondarily. The book is really a treatise on the best way to approach a particular kind of problem that we face in the sciences but are actually more likely to encounter in popular discourse. To illustrate the kind of problem this book concerns (observation selection effect bias), I’ll give two examples, one simple and the other simplistically summarized.

Anthropic Bias: Examples of bias from observation selection effects

Example 1: Extraterrestrial intelligent life must exist

This is one of several examples that Bostrom gives in his introduction, but I choose it because I have addressed this specific question here and I have found myself trying to explain the problems involved to an almost inevitably skeptical audience. Many people believe that even if it is incredibly unlikely for life to develop on any given planet, there must be a huge number of planets with life, including intelligent life, because there exists an astronomically (bad pun intended) large number of planets in the universe. Moreover, there are many planets, even relatively nearby, that are “Earth-like” and found in what astrobiologists, among others, call “habitable zones” (HZs). And after all, what are the chances that Earth is the only planet in the entire universe on which complex life (including microscopic, multicellular organisms) or intelligent life arose?

Well, this final question approaches the right way to think about the issue. To estimate how many planets have life, all we need to do is take the number of favorable outcomes (planets with complex life) and divide by the total number of planets. Simple. Of course, if we knew of any other planet with complex life, we wouldn’t be asking this question. But I don’t want to present my approach to this problem (although it is similar, in form and conclusion, to Bostrom’s). I want to give his:

“Let’s look at an example where an observation selection effect is involved: We find that intelligent life evolved on Earth. Naively, one might think that this piece of evidence suggests that life is likely to evolve on most Earth-like planets. But that would be to overlook an observation selection effect. For no matter how small the proportion of all Earth-like planets that evolve intelligent life, we will find ourselves on a planet that did (or we will trace our origin to a planet where intelligent life evolved, in case we are born in a space colony). Our data point—that intelligent life arose on our planet—is predicted equally well by the hypothesis that intelligent life is very improbable even on Earth-like planets as by the hypothesis that intelligent life is highly probable on Earth-like planets…
The impermissibility of inferring from the fact that intelligent life evolved on Earth to the fact that intelligent life probably evolved on a large fraction of all Earth-like planets does not hinge on the evidence in this example consisting of only a single data point. Suppose we had telepathic abilities and could communicate directly with all other intelligent beings in the cosmos. Imagine we ask all the aliens, did intelligent life evolve on their planets too? Obviously, they would all say: Yes, it did. But equally obvious, this multitude of data would still not give us any reason to think that intelligent life develops easily. We only asked about the planets where life did in fact evolve (since those planets would be the only ones which would be “theirs” to some alien), and we get no information whatsoever by hearing the aliens confirming that life evolved on those planets (assuming we don’t know the number of aliens who replied to our survey or, alternatively, that we don’t know the total number of planets). An observation selection effect frustrates any attempt to extract useful information by this procedure.”
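
Bostrom’s point can also be put in bare Bayesian terms, and even checked with a toy simulation (the numbers below are invented purely for illustration): because observers can only ever find themselves on planets where life arose, the datum “life arose on our planet” is predicted with probability 1 by both the rare-life and common-life hypotheses, so it cannot favor either.

```python
import numpy as np

rng = np.random.default_rng(1)
N_PLANETS = 1_000_000

def what_observers_see(p_life):
    """Simulate one universe; return what a randomly chosen *observer* finds
    on their own planet. p_life is the chance life arises on any one planet."""
    life = rng.random(N_PLANETS) < p_life
    inhabited = np.flatnonzero(life)
    if inhabited.size == 0:
        return None                     # no observers exist to ask
    home = rng.choice(inhabited)        # observers are only found on these...
    return life[home]                   # ...so they always see life at home

for p_life in (1e-5, 0.5):              # "life is rare" vs. "life is common"
    print(p_life, what_observers_see(p_life))
# Both hypotheses predict the same observation (True), so finding life on
# our own planet cannot, by itself, tell us which hypothesis is right.
```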

Example 2: The Anthropic Principle

The anthropic principle is usually divided into classes (especially strong and weak) and is highly nuanced, so I will just keep things simple by approaching it as sort of an inverse of example 1. In the first example, we looked at the flawed reasoning that leads to the idea that complex life is likely abundant in the universe. The most frequent arguments involve a flawed inference from the fact that life arose here to how likely it is to arise elsewhere, because knowing only that it arose here is consistent both with the hypothesis that life arose only on Earth and the hypothesis that complex life is abundant in the universe.

Not long ago, back when Neil deGrasse Tyson was Carl Sagan, scientists in general had pretty high hopes for the Search for Extra-Terrestrial Intelligence (SETI). To some extent that hasn’t changed, but a combination of the complete failure of SETI and an increased understanding of the sheer number of variables that have to be just right for life to arise and evolve has prompted many scientists working in astrobiology to conclude that complex life is probably rare, that if intelligent life exists elsewhere we’re never going to know, or even that we are alone. The arguments for and against these and other beliefs about life in the universe are for another time. Here, I simply want to introduce a very simple definition of the anthropic principle as it is relevant here:

“The anthropic principle is the name given to the observation that the physical constants in the cosmos are remarkably finely tuned, making it a perfect place to host intelligent life. Physicists offer a “many-worlds” explanation of how and why this might be the case.
My feeling is that a misanthropic principle could also be applicable. I use this term to express the idea that the possible environments and biological opportunities in this apposite cosmos are so vast, varied and uncooperative (or hostile), either always or at some time during the roughly 3-to-4 billion years intelligent life requires to emerge, that it is unlikely for intelligence to form, thrive and survive easily.” (Alone in the Universe)

The point is that so many fundamental “parameters” (e.g., the cosmological constant, the strengths of the four fundamental forces, etc.) don’t just appear to allow for life, but instead appear “remarkably finely tuned” for it. Again, I don’t want to introduce too much of my own take here, so I’ll quote from Bostrom:

“Another example of reasoning that invokes observation selection effects is the attempt to provide a possible (not necessarily the only) explanation of why the universe appears fine-tuned for intelligent life in the sense that if any of various physical constants or initial conditions had been even very slightly different from what they are then life as we know it would not have existed. The idea behind this possible anthropic explanation is that the totality of spacetime might be very huge and may contain regions in which the values of fundamental constants and other parameters differ in many ways, perhaps according to some broad random distribution. If this is the case, then we should not be amazed to find that in our own region physical conditions appear “fine-tuned”. Owing to an obvious observation selection effect, only such fine-tuned regions are observed. Observing a fine-tuned region is precisely what we should expect if this theory is true, and so it can potentially account for available data in a neat and simple way, without having to assume that conditions just happened to turn out “right” through some immensely lucky—and arguably a priori extremely improbable—cosmic coincidence.”
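
The same toy-simulation trick shows why an ensemble of regions with varying constants would make observed fine-tuning unsurprising (again, the distribution and the life-permitting window below are invented for illustration): observers can only sample from the regions that permit them, so every observer measures a “finely tuned” value no matter how rare such regions are.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy multiverse: each region gets some "constant" drawn broadly at random,
# and observers can only arise where it falls in a narrow window.
constants = rng.uniform(0.0, 1.0, size=1_000_000)
life_permitting = (constants > 0.42) & (constants < 0.43)

print(f"Fraction of regions that permit observers: {life_permitting.mean():.4f}")

# The constant as seen from a random region vs. from a random observer:
any_region = rng.choice(constants)
observer_region = rng.choice(constants[life_permitting])
print(f"A random region's constant:   {any_region:.3f}")
print(f"A random observer's constant: {observer_region:.3f}")
# Every observer, however rare life-permitting regions are, measures a value
# inside the narrow window -- which is all "we observe fine-tuning" amounts
# to if something like this ensemble picture is right.
```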

Popular Physics: What this book isn’t

Paul Davies is a physicist and the author of a number of popular science books, including The Goldilocks Enigma and The Eerie Silence. The first book is on fine-tuning and the anthropic principle, while the second is on life in the universe. In the second, Davies concludes with his own views: one as a scientist, one from a philosophical perspective, and one as a person. Wearing his “scientist hat”, he concludes, “my answer is that we are probably the only intelligent beings in the observable universe, and I would not be very surprised if the solar system contains the only life in the observable universe. I arrive at this dismal conclusion because I see so many contingent features involved in the origin and evolution of life, and because I have yet to see a convincing theoretical argument for a universal principle of increasing organized complexity…”

Both of Davies’ books are quite like many others, some that agree and many that don’t, in that they offer glimpses into the nature of scientific research related to the origins of life and the finely-tuned parameters (or why they actually aren’t finely tuned, although this is a minority position), but don’t require any real background knowledge. Then there are books that are at least semi-popular, such as Penrose’s The Road to Reality or the aforementioned The Anthropic Cosmological Principle, but are largely inaccessible to most readers (my father received his undergraduate degree in physics from an Ivy League college, is an extremely intelligent individual, and didn’t get much past chapter 1 of Barrow & Tipler’s book).

That’s one thing I find particularly delightful about Bostrom’s book. It is technical in that it tackles reasoning and logic in a highly nuanced way. Although examples are given frequently to illustrate logical implications or flaws in particular inferences, the questions and issues tackled are fleshed out completely, without skimping on any issue related to the rationale, logic, validity, or justification of any argument.

Philosophical Texts on Reasoning and Rationality: What this book is better than

Better yet, this book deals with subjects like whether the cosmos is finely tuned for intelligent life and, if so, what this means. The book is fundamentally concerned with advancing a coherent, logical, and justifiable framework for addressing the kinds of questions raised in the examples above. I have many books with similar goals: Heuristics and Biases: The Psychology of Intuitive Judgment, Probability Theory: The Logic of Science, Acceptable Premises: An Epistemic Approach to an Informal Logic Problem, Abductive Reasoning: Logical Investigations into Discovery and Explanation, Against Coherence: Truth, Probability, and Justification, Bayesian Epistemology, The Algebra of Probable Inference, Abductive Cognition: The Epistemological and Eco-Cognitive Dimensions of Hypothetical Reasoning, Model-Based Reasoning in Science and Technology: Theoretical and Cognitive Issues, and many dozens more. I enjoyed many of them and found all to be useful, but would recommend few if any to the general reader. That’s because they aren’t just technical, but only technical. They are “dry”, not just because they demand the reader deal with sophisticated nuances, but because they introduce their subject matter as their subject matter.

Now, there’s nothing wrong with this. In fact, it’s very hard to write a book about something, especially an academic monograph, without talking almost exclusively about that something. Of course, most books on methods in the sciences, certain kinds of reasoning or logics, epistemology, etc., give plenty of examples. But they are of the kind we find, e.g., in Bostrom’s 5th chapter, “The self-sampling assumption in science”, which has sections on the SSA in thermodynamics and in evolutionary biology. Few books recognize how far two related subjects (in this case, fine-tuning and the anthropic principle), each the main topic of countless popular books, can go toward introducing and covering in no small detail something like a specific kind of abstract reasoning. Bostrom not only found such a perfect way to thoroughly introduce the reader to so abstract a topic, he proceeds to cover it in more detail with a variety of interesting examples, and then uses a popular and fascinating probability paradox (the doomsday argument, one of the paradoxes in Eckhardt’s Paradoxes in Probability Theory, which cites Bostrom here) as yet another way to flesh out still finer points of his approach.

Thank you, and please help yourself to our complimentary gift on your way out

To embarrass myself by quoting the children’s television show Reading Rainbow, “but you don’t have to take my word for it.” If you think that fine-tuning, multiverse theory, the anthropic principle, and scientific reasoning might be interesting topics, but you don’t want to spend the money, another great thing about this book is that it is available in FULL for free and LEGALLY (well, I think legally, as it is available from the book’s website). So please, help yourself:

Anthropic Bias – complete text


Home of the new Research Reviews blog

The blog at legiononomamoi.wordpress.com, formerly Research Reviews, will be changing into a more general blog. This one will now be dedicated to reviewing research, and I will eventually transfer all the posts that qualify to this site.
