When the physics of Galileo and Newton displaced the physics of Aristotle, scientists tried to explain the world by discovering its deterministic natural laws. When the quantum physics of Bohr and Heisenberg in turn displaced the physics of Galileo and Newton, scientists realized they needed to supplement their deterministic natural laws by taking into account chance processes in their explanations of our universe. Chance and necessity, to use a phrase made famous by Jacques Monod, thus set the boundaries of scientific explanation.
Today, however, chance and necessity have proven insufficient to account for all scientific phenomena. Without invoking the rightly discarded teleologies, entelechies, and vitalisms of the past, one can still see that a third mode of explanation is required, namely, intelligent design. Chance, necessity, and design–these three modes of explanation–are needed to explain the full range of scientific phenomena. Not all scientists, however, see that excluding intelligent design artificially restricts science. Richard Dawkins, an arch-Darwinist, begins his book The Blind Watchmaker by stating, „Biology is the study of complicated things that give the appearance of having been designed for a purpose.“ Statements like this echo throughout the biological literature. In What Mad Pursuit, Francis Crick, Nobel laureate and co-discoverer of the structure of DNA, writes, „Biologists must constantly keep in mind that what they see was not designed, but rather evolved.“
The biological community thinks it has accounted for the apparent design in nature through the Darwinian mechanism of random mutation and natural selection. The point to appreciate, however, is that in accounting for the apparent design in nature, biologists regard themselves as having made a successful scientific argument against actual design. This is important, because for a claim to be scientifically falsifiable, it must have the possibility of being true. Scientific refutation is a double-edged sword. Claims that are refuted scientifically may be wrong, but they are not necessarily wrong–they cannot simply be dismissed out of hand. To see this, consider what would happen if microscopic examination revealed that every cell was inscribed with the phrase „Made by Yahweh.“ Of course cells don’t have „Made by Yahweh“ inscribed on them, but that’s not the point. The point is that we wouldn’t know this unless we actually looked at cells under the microscope. And if they were so inscribed, one would have to entertain the thought, as a scientist, that they actually were made by Yahweh. So even those who do not believe in it tacitly admit that design always remains a live option in biology. A priori prohibitions against design are philosophically unsophisticated and easily countered. Nonetheless, once we admit that design cannot be excluded from science without argument, a weightier question remains: Why should we want to admit design into science?
To answer this question, let us turn it around and ask instead, Why shouldn’t we want to admit design into science? What’s wrong with explaining something as designed by an intelligent agent? Certainly there are many everyday occurrences that we explain by appealing to design. Moreover, in our workaday lives it is absolutely crucial to distinguish accident from design. We demand answers to such questions as, Did she fall or was she pushed? Did someone die accidentally or commit suicide? Was this song conceived independently or was it plagiarized? Did someone just get lucky on the stock market or was there insider trading?
Not only do we demand answers to such questions, but entire industries are devoted to drawing the distinction between accident and design. Here we can include forensic science, intellectual property law, insurance claims investigation, cryptography, and random number generation–to name but a few. Science itself needs to draw this distinction to keep itself honest. Just last January there was a report in Science that a Medline web search uncovered a „paper published in Zentralblatt für Gynäkologie in 1991 [containing] text that is almost identical to text from a paper published in 1979 in the Journal of Maxillofacial Surgery.“ Plagiarism and data falsification are far more common in science than we would like to admit. What keeps these abuses in check is our ability to detect them.
If design is so readily detectable outside science, and if its detectability is one of the key factors keeping scientists honest, why should design be barred from the content of science? Why do Dawkins and Crick feel compelled to constantly remind us that biology studies things that only appear to be designed, but that in fact are not designed? Why couldn’t biology study things that are designed? The biological community’s response to these questions has been to resist design absolutely. The worry is that for natural objects (unlike human artifacts) the distinction between design and non-design cannot be reliably drawn. Consider, for instance, the following remark by Darwin in the concluding chapter of his Origin of Species: „Several eminent naturalists have of late published their belief that a multitude of reputed species in each genus are not real species; but that other species are real, that is, have been independently created. . . . Nevertheless they do not pretend that they can define, or even conjecture, which are the created forms of life, and which are those produced by secondary laws. They admit variation as a vera causa in one case, they arbitrarily reject it in another, without assigning any distinction in the two cases.“ Biologists worry about attributing something to design (here identified with creation) only to have it overturned later; this widespread and legitimate concern has prevented them from using intelligent design as a valid scientific explanation.
Though perhaps justified in the past, this worry is no longer tenable. There now exists a rigorous criterion–complexity-specification–for distinguishing intelligently caused objects from unintelligently caused ones. Many special sciences already use this criterion, though in a pre-theoretic form (e.g., forensic science, artificial intelligence, cryptography, archeology, and the Search for Extra-Terrestrial Intelligence). The great breakthrough in philosophy of science and probability theory of recent years has been to isolate and make precise this criterion. Michael Behe’s criterion of irreducible complexity for establishing the design of biochemical systems is a special case of the complexity-specification criterion for detecting design (cf. Behe’s book Darwin’s Black Box).
What does this criterion look like? Although a detailed explanation and justification is fairly technical (for a full account see my book The Design Inference, published by Cambridge University Press), the basic idea is straightforward and easily illustrated. Consider how the radio astronomers in the movie Contact detected an extraterrestrial intelligence. This movie, which came out last year and was based on a novel by Carl Sagan, was an enjoyable piece of propaganda for the SETI research program–the Search for Extra-Terrestrial Intelligence. In the movie, the SETI researchers found extraterrestrial intelligence. (The nonfictional researchers have not been so successful.)
How, then, did the SETI researchers in Contact find an extraterrestrial intelligence? SETI researchers monitor millions of radio signals from outer space. Many natural objects in space (e.g., pulsars) produce radio waves. Looking for signs of design among all these naturally produced radio signals is like looking for a needle in a haystack. To sift through the haystack, SETI researchers run the signals they monitor through computers programmed with pattern matchers. As long as a signal doesn’t match one of the preset patterns, it will pass through the pattern-matching sieve (even if it has an intelligent source). If, on the other hand, it does match one of these patterns, then, depending on the pattern matched, the SETI researchers may have cause for celebration.
The SETI researchers in Contact found the following signal:
11011101111101111111011111111111011111111111110111111111111111
11011111111111111111110111111111111111111111110111111111111111
11111111111111011111111111111111111111111111110111111111111111
11111111111111111111110111111111111111111111111111111111111111
11011111111111111111111111111111111111111111110111111111111111
11111111111111111111111111111111011111111111111111111111111111
11111111111111111111111101111111111111111111111111111111111111
11111111111111111111111101111111111111111111111111111111111111
11111111111111111111111111111101111111111111111111111111111111
11111111111111111111111111111111111111110111111111111111111111
11111111111111111111111111111111111111111111111111110111111111
11111111111111111111111111111111111111111111111111111111111111
11111111011111111111111111111111111111111111111111111111111111
11111111111111111111111111111101111111111111111111111111111111
11111111111111111111111111111111111111111111111111111111110111
11111111111111111111111111111111111111111111111111111111111111
11111111111111111111111111111111011111111111111111111111111111
11111111111111111111111111111111111111111111111111111111111111
1111111111
In this sequence of 1126 bits, 1’s correspond to beats and 0’s to pauses. This sequence represents the prime numbers from 2 to 101, where a given prime number is represented by the corresponding number of beats (i.e., 1’s), and the individual prime numbers are separated by pauses (i.e., 0’s). The SETI researchers in Contact took this signal as decisive confirmation of an extraterrestrial intelligence. What is it about this signal that decisively indicates design? Whenever we infer design, we must establish two things–complexity and specification. Complexity ensures that the object in question is not so simple that it can readily be explained by chance. Specification ensures that this object exhibits the type of pattern that is the trademark of intelligence.
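To make the encoding concrete, here is a minimal sketch (my own illustration, not anything from the film or from actual SETI software) of the beats-and-pauses scheme just described: each prime p is written as a run of p 1’s, and successive primes are separated by a single 0.

```python
# A sketch of the beats-and-pauses encoding described above (illustration only):
# each prime p becomes a run of p 1's (beats); primes are separated by a 0 (pause).

def primes_up_to(n):
    """Return the primes <= n by simple trial division (fine for small n)."""
    primes = []
    for k in range(2, n + 1):
        if all(k % p != 0 for p in primes if p * p <= k):
            primes.append(k)
    return primes

def encode_primes_as_beats(n=101):
    """Encode the primes up to n as runs of 1's separated by single 0's."""
    return "0".join("1" * p for p in primes_up_to(n))

signal = encode_primes_as_beats(101)
print(signal[:12])  # prints 110111011111 -- the primes 2, 3, and 5 (discussed below)
```

Checking a monitored bit stream against a preset pattern like this one is the kind of pattern-matching sieve described above.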
To see why complexity is crucial for inferring design, consider the following sequence of bits:
110111011111
These are the first twelve bits in the previous sequence, representing the prime numbers 2, 3, and 5 respectively. Now it is a sure bet that no SETI researcher, if confronted with this twelve-bit sequence, is going to contact the science editor at the New York Times, hold a press conference, and announce that an extraterrestrial intelligence has been discovered. No headline is going to read, „Aliens Master First Three Prime Numbers!“
The problem is that this sequence is much too short (i.e., has too little complexity) to establish that an extraterrestrial intelligence with knowledge of prime numbers produced it. A randomly beating radio source might by chance just happen to put out the sequence „110111011111.“ A sequence of 1126 bits representing the prime numbers from 2 to 101, however, is a different story. Here the sequence is sufficiently long (i.e., has enough complexity) to confirm that an extraterrestrial intelligence could have produced it. Even so, complexity by itself isn’t enough to eliminate chance and indicate design. If I flip a coin 1,000 times, I’ll participate in a highly complex (or what amounts to the same thing, highly improbable) event. Indeed, the sequence I end up flipping will be one in a trillion trillion trillion . . . , where the ellipsis needs twenty-two more „trillions.“ This sequence of coin tosses won’t, however, trigger a design inference. Though complex, this sequence won’t exhibit a suitable pattern. Contrast this with the sequence representing the prime numbers from 2 to 101. Not only is this sequence complex, it also embodies a suitable pattern. The SETI researcher who in the movie Contact discovered this sequence put it this way: „This isn’t noise, this has structure.“
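As a rough check on the arithmetic behind that string of „trillions“ (my own gloss, not the author’s calculation): the probability of any particular sequence of 1,000 fair coin flips is

```latex
\left(\tfrac{1}{2}\right)^{1000} = \frac{1}{2^{1000}} \approx \frac{1}{1.07 \times 10^{301}},
\qquad\text{while}\qquad
\left(10^{12}\right)^{25} = 10^{300},
```

so the odds are indeed on the order of one in a trillion multiplied by itself twenty-five times–the three „trillions“ written out plus the twenty-two hidden in the ellipsis.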
What is a suitable pattern for inferring design? Not just any pattern will do. Some patterns can legitimately be employed to infer design whereas others cannot. It is easy to see the basic intuition here. Suppose an archer stands fifty meters from a large wall with bow and arrow in hand. The wall, let’s say, is sufficiently large that the archer can’t help but hit it. Now suppose each time the archer shoots an arrow at the wall, the archer paints a target around the arrow so that the arrow sits squarely in the bull’s-eye. What can be concluded from this scenario? Absolutely nothing about the archer’s ability as an archer. Yes, a pattern is being matched; but it is a pattern fixed only after the arrow has been shot. The pattern is thus purely ad hoc.
But suppose instead the archer paints a fixed target on the wall and then shoots at it. Suppose the archer shoots a hundred arrows, and each time hits a perfect bull’s-eye. What can be concluded from this second scenario? Confronted with this second scenario we are obligated to infer that here is a world-class archer, one whose shots cannot legitimately be explained by luck, but rather must be explained by the archer’s skill and mastery. Skill and mastery are of course instances of design.
Like the archer who fixes the target first and then shoots at it, statisticians set what is known as a rejection region prior to an experiment. If the outcome of an experiment falls within a rejection region, the statistician rejects the hypothesis that the outcome is due to chance. The pattern, however, doesn’t need to be given prior to an event to imply design. Consider the following cipher text:
nfuijolt ju jt mjlf b xfbtfm
Initially this looks like a random sequence of letters and spaces–initially you lack any pattern for rejecting chance and inferring design.
But suppose next that someone comes along and tells you to treat this sequence as a Caesar cipher, moving each letter one notch down the alphabet. Behold, the sequence now reads,
methinks it is like a weasel
Even though the pattern is now given after the fact, it still is the right sort of pattern for eliminating chance and inferring design. In contrast to statistics, which always tries to identify its patterns before an experiment is performed, cryptanalysis must discover its patterns after the fact. In both instances, however, the patterns are suitable for inferring design.
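For concreteness, here is a minimal sketch (my own illustration; the article itself gives only the verbal recipe) of the one-notch decoding just described: each letter is moved one place back in the alphabet, and spaces are left alone.

```python
# A sketch of decoding the Caesar cipher discussed above: shift each lowercase
# letter one notch down (back) in the alphabet; leave non-letters untouched.

def shift_down(text, shift=1):
    """Decode a lowercase Caesar cipher by shifting letters back by 'shift'."""
    decoded = []
    for ch in text:
        if ch.isalpha():
            decoded.append(chr((ord(ch) - ord("a") - shift) % 26 + ord("a")))
        else:
            decoded.append(ch)
    return "".join(decoded)

print(shift_down("nfuijolt ju jt mjlf b xfbtfm"))  # -> methinks it is like a weasel
```

The pattern (coherent English under a one-notch shift) is identified only after seeing the sequence, yet it is given independently by the English language and the cipher, which is what makes it a legitimate pattern for eliminating chance.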
Patterns divide into two types, those that in the presence of complexity warrant a design inference and those that despite the presence of complexity do not warrant a design inference. The first type of pattern is called a specification, the second a fabrication. Specifications are the non-ad hoc patterns that can legitimately be used to eliminate chance and warrant a design inference. In contrast, fabrications are the ad hoc patterns that cannot legitimately be used to warrant a design inference. This distinction between specifications and fabrications can be made with full statistical rigor (cf. The Design Inference).
Why does the complexity-specification criterion reliably detect design? To answer this, we need to understand what it is about intelligent agents that makes them detectable in the first place. The principal characteristic of intelligent agency is choice. Whenever an intelligent agent acts, it chooses from a range of competing possibilities.
This is true not just of humans and extraterrestrial intelligences, but of animals as well. A rat navigating a maze must choose whether to go right or left at various points in the maze. When SETI researchers attempt to discover intelligence in the radio transmissions they are monitoring, they assume an extraterrestrial intelligence could have chosen to transmit any number of possible patterns, and then attempt to match the transmissions they observe with the patterns they seek. Whenever a human being utters meaningful speech, he chooses from a range of utterable sound combinations. Intelligent agency always entails discrimination–choosing certain things, ruling out others.
Given this characterization of intelligent agency, how do we recognize that an intelligent agent has made a choice? A bottle of ink spills accidentally onto a sheet of paper; someone takes a fountain pen and writes a message on a sheet of paper. In both instances ink is applied to paper. In both instances one among an almost infinite set of possibilities is realized. In both instances one contingency is actualized and others are ruled out. Yet in one instance we ascribe agency, in the other chance.
What is the relevant difference? Not only do we need to observe that a contingency was actualized, but we ourselves need also to be able to specify that contingency. The contingency must conform to an independently given pattern, and we must be able independently to formulate that pattern. A random ink blot is unspecifiable; a message written with ink on paper is specifiable. Wittgenstein in Culture and Value made the same point: „We tend to take the speech of a Chinese for inarticulate gurgling. Someone who understands Chinese will recognize language in what he hears.“
In hearing a Chinese utterance, someone who understands Chinese not only recognizes that one from a range of all possible utterances was actualized, but he is also able to identify the utterance as coherent Chinese speech. Contrast this with someone who does not understand Chinese. He will also recognize that one from a range of possible utterances was actualized, but this time, because he lacks the ability to understand Chinese, he is unable to tell whether the utterance was coherent speech.
To someone who does not understand Chinese, the utterance will appear to be gibberish. Gibberish–the utterance of nonsense syllables uninterpretable within any natural language–always actualizes one utterance from the range of possible utterances. Nevertheless, gibberish, by corresponding to nothing we can understand in any language, also cannot be specified. As a result, gibberish is never taken for intelligent communication, but always for what Wittgenstein calls „inarticulate gurgling.“
Experimental psychologists who study animal learning and behavior employ a similar method. To learn a task an animal must acquire the ability to actualize behaviors suitable for the task as well as the ability to rule out behaviors unsuitable for the task. Moreover, for a psychologist to recognize that an animal has learned a task, it is necessary not only to observe the animal making the appropriate discrimination, but also to specify this discrimination.
Thus to recognize whether a rat has successfully learned how to traverse a maze, a psychologist must first specify which sequence of right and left turns conducts the rat out of the maze. No doubt, a rat randomly wandering a maze also discriminates a sequence of right and left turns. But by randomly wandering the maze, the rat gives no indication that it can discriminate the appropriate sequence of right and left turns for exiting the maze. Consequently, the psychologist studying the rat will have no reason to think the rat has learned how to traverse the maze. Only if the rat executes the sequence of right and left turns specified by the psychologist will the psychologist recognize that the rat has learned how to traverse the maze.
Note that complexity is implicit here as well. To see this, consider again a rat traversing a maze, but now take a very simple maze in which two right turns conduct the rat out of the maze. How will a psychologist studying the rat determine whether it has learned to exit the maze? Just putting the rat in the maze will not be enough. Because the maze is so simple, the rat could by chance just happen to take two right turns, and thereby exit the maze. The psychologist will therefore be uncertain whether the rat actually learned to exit this maze, or whether the rat just got lucky.
But contrast this now with a complicated maze in which a rat must take just the right sequence of left and right turns to exit the maze. Suppose the rat must take one hundred appropriate right and left turns, and that any mistake will prevent the rat from exiting the maze. A psychologist who sees the rat take no erroneous turns and in short order exit the maze will be convinced that the rat has indeed learned how to exit the maze, and that this was not dumb luck.
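To put rough numbers on the contrast (my own back-of-the-envelope figures, treating each junction as a fair binary choice), the chance of a blindly wandering rat getting lucky is

```latex
P(\text{lucky exit, 2 turns}) = \left(\tfrac{1}{2}\right)^{2} = \tfrac{1}{4},
\qquad
P(\text{lucky exit, 100 turns}) = \left(\tfrac{1}{2}\right)^{100} \approx 7.9 \times 10^{-31},
```

which is why the simple maze leaves the psychologist uncertain while the hundred-turn maze does not.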
This general scheme for recognizing intelligent agency is but a thinly disguised form of the complexity-specification criterion. In general, to recognize intelligent agency we must observe a choice among competing possibilities, note which possibilities were not chosen, and then be able to specify the possibility that was chosen. What’s more, the competing possibilities that were ruled out must be live possibilities, and sufficiently numerous (hence complex) so that specifying the possibility that was chosen cannot be attributed to chance.
All the elements in this general scheme for recognizing intelligent agency (i.e., choosing, ruling out, and specifying) find their counterpart in the complexity-specification criterion. It follows that this criterion formalizes what we have been doing right along when we recognize intelligent agency. The complexity-specification criterion pinpoints what we need to be looking for when we detect design.
Perhaps the most compelling evidence for design in biology comes from biochemistry. In a recent issue of Cell (February 8, 1998), Bruce Alberts, president of the National Academy of Sciences, remarked, „The entire cell can be viewed as a factory that contains an elaborate network of interlocking assembly lines, each of which is composed of large protein machines. . . . Why do we call the large protein assemblies that underlie cell function machines? Precisely because, like the machines invented by humans to deal efficiently with the macroscopic world, these protein assemblies contain highly coordinated moving parts.“
Even so, Alberts sides with the majority of biologists in regarding the cell’s marvelous complexity as only apparently designed. The Lehigh University biochemist Michael Behe disagrees. In Darwin’s Black Box (1996), Behe presents a powerful argument for actual design in the cell.
Central to his argument is his notion of irreducible complexity. A system is irreducibly complex if it consists of several interrelated parts so that removing even one part completely destroys the system’s function. As an example of irreducible complexity Behe offers the standard mousetrap. A mousetrap consists of a platform, a hammer, a spring, a catch, and a holding bar. Remove any one of these five components, and it is impossible to construct a functional mousetrap.
Irreducible complexity needs to be contrasted with cumulative complexity. A system is cumulatively complex if the components of the system can be arranged sequentially so that the successive removal of components never leads to the complete loss of function. An example of a cumulatively complex system is a city. It is possible successively to remove people and services from a city until one is down to a tiny village–all without losing the sense of community, the city’s „function.“
From this characterization of cumulative complexity, it is clear that the Darwinian mechanism of natural selection and random mutation can readily account for cumulative complexity. Darwin’s account of how organisms gradually become more complex as favorable adaptations accumulate is the flip side of the city in our example from which people and services are removed. In both cases, the simpler and more complex versions both work, only less or more effectively.
But can the Darwinian mechanism account for irreducible complexity? Certainly, if selection acts with reference to a goal, it can produce irreducible complexity. Take Behe’s mousetrap. Given the goal of constructing a mousetrap, one can specify a goal-directed selection process that in turn selects a platform, a hammer, a spring, a catch, and a holding bar, and at the end puts all these components together to form a functional mousetrap. Given a pre-specified goal, selection has no difficulty producing irreducibly complex systems.
But the selection operating in biology is Darwinian natural selection. And by definition this form of selection operates without goals, has neither plan nor purpose, and is wholly undirected. The great appeal of Darwin’s selection mechanism was, after all, that it would eliminate teleology from biology. Yet by making selection an undirected process, Darwin drastically reduced the type of complexity biological systems could manifest. Henceforth biological systems could manifest only cumulative complexity, not irreducible complexity. As Behe explains in Darwin’s Black Box: „An irreducibly complex system cannot be produced . . . by slight, successive modifications of a precursor system, because any precursor to an irreducibly complex system that is missing a part is by definition nonfunctional. . . . Since natural selection can only choose systems that are already working, then if a biological system cannot be produced gradually it would have to arise as an integrated unit, in one fell swoop, for natural selection to have anything to act on.“ For an irreducibly complex system, function is attained only when all components of the system are in place simultaneously. It follows that natural selection, if it is going to produce an irreducibly complex system, has to produce it all at once or not at all. This would not be a problem if the systems in question were simple. But they’re not. The irreducibly complex biochemical systems Behe considers are protein machines consisting of numerous distinct proteins, each indispensable for function; together they are beyond what natural selection can muster in a single generation.
One such irreducibly complex biochemical system that Behe considers is the bacterial flagellum. The flagellum is a whip-like rotary motor that enables a bacterium to navigate through its environment. The flagellum includes an acid-powered rotary engine, a stator, O-rings, bushings, and a drive shaft. The intricate machinery of this molecular motor requires approximately fifty proteins. Yet the absence of any one of these proteins results in the complete loss of motor function.
The irreducible complexity of such biochemical systems cannot be explained by the Darwinian mechanism, nor indeed by any naturalistic evolutionary mechanism proposed to date. Moreover, because irreducible complexity occurs at the biochemical level, there is no more fundamental level of biological analysis to which the irreducible complexity of biochemical systems can be referred, and at which a Darwinian analysis in terms of selection and mutation can still hope for success. Undergirding biochemistry is ordinary chemistry and physics, neither of which can account for biological information. Also, whether a biochemical system is irreducibly complex is a fully empirical question: Individually knock out each protein constituting a biochemical system to determine whether function is lost. If so, we are dealing with an irreducibly complex system. Experiments of this sort are routine in biology.
The connection between Behe’s notion of irreducible complexity and my complexity-specification criterion is now straightforward. The irreducibly complex systems Behe considers require numerous components specifically adapted to each other and each necessary for function. That means they are complex in the sense required by the complexity-specification criterion.
Specification in biology always makes reference in some way to an organism’s function. An organism is a functional system comprising many functional subsystems. The functionality of organisms can be specified in any number of ways. Arno Wouters does so in terms of the viability of whole organisms, Michael Behe in terms of the minimal function of biochemical systems. Even Richard Dawkins will admit that life is specified functionally, for him in terms of the reproduction of genes. Thus in The Blind Watchmaker Dawkins writes, „Complicated things have some quality, specifiable in advance, that is highly unlikely to have been acquired by random chance alone. In the case of living things, the quality that is specified in advance is . . . the ability to propagate genes in reproduction.“
So there exists a reliable criterion for detecting design strictly from observational features of the world. This criterion belongs to probability and complexity theory, not to metaphysics and theology. And although it cannot achieve logical demonstration, it does achieve a statistical justification so compelling as to demand assent. This criterion is relevant to biology. When applied to the complex, information-rich structures of biology, it detects design. In particular, we can say with the weight of science behind us that the complexity-specification criterion shows Michael Behe’s irreducibly complex biochemical systems to be designed.
What are we to make of these developments? Many scientists remain unconvinced. Even if we have a reliable criterion for detecting design, and even if that criterion tells us that biological systems are designed, it seems that determining a biological system to be designed is akin to shrugging our shoulders and saying God did it. The fear is that admitting design as an explanation will stifle scientific inquiry, that scientists will stop investigating difficult problems because they have a sufficient explanation already.
But design is not a science stopper. Indeed, design can foster inquiry where traditional evolutionary approaches obstruct it. Consider the term „junk DNA.“ Implicit in this term is the view that because the genome of an organism has been cobbled together through a long, undirected evolutionary process, the genome is a patchwork of which only limited portions are essential to the organism. Thus on an evolutionary view we expect a lot of useless DNA. If, on the other hand, organisms are designed, we expect DNA, as much as possible, to exhibit function. And indeed, the most recent findings suggest that designating DNA as „junk“ merely cloaks our current lack of knowledge about function. For instance, in a recent issue of the Journal of Theoretical Biology, John Bodnar describes how „non-coding DNA in eukaryotic genomes encodes a language which programs organismal growth and development.“ Design encourages scientists to look for function where evolution discourages it.
Or consider vestigial organs that later are found to have a function after all. Evolutionary biology texts often cite the human coccyx as a „vestigial structure“ that hearkens back to vertebrate ancestors with tails. Yet if one looks at a recent edition of Gray’s Anatomy, one finds that the coccyx is a crucial point of contact with muscles that attach to the pelvic floor. The phrase „vestigial structure“ often merely cloaks our current lack of knowledge about function. The human appendix, formerly thought to be vestigial, is now known to be a functioning component of the immune system.
Admitting design into science can only enrich the scientific enterprise. All the tried and true tools of science will remain intact. But design adds a new tool to the scientist’s explanatory tool chest. Moreover, design raises a whole new set of research questions. Once we know that something is designed, we will want to know how it was produced, to what extent the design is optimal, and what is its purpose. Note that we can detect design without knowing what something was designed for. There is a room at the Smithsonian filled with objects that are obviously designed but whose specific purpose anthropologists do not understand.
Design also implies constraints. An object that is designed functions within certain constraints. Transgress those constraints and the object functions poorly or breaks. Moreover, we can discover those constraints empirically by seeing what does and doesn’t work. This simple insight has tremendous implications not just for science but also for ethics. If humans are in fact designed, then we can expect psychosocial constraints to be hardwired into us. Transgress those constraints, and we as well as our society will suffer. There is plenty of empirical evidence to suggest that many of the attitudes and behaviors our society promotes undermine human flourishing. Design promises to reinvigorate that ethical stream running from Aristotle through Aquinas known as natural law.
By admitting design into science, we do much more than simply critique scientific reductionism. Scientific reductionism holds that everything is reducible to scientific categories. Scientific reductionism is self-refuting and easily seen to be self-refuting. The existence of the world, the laws by which the world operates, the intelligibility of the world, and the unreasonable effectiveness of mathematics for comprehending the world are just a few of the questions that science raises, but that science is incapable of answering. Simply critiquing scientific reductionism, however, is not enough. Critiquing reductionism does nothing to change science. And it is science that must change. By eschewing design, science has for too long operated with an inadequate set of conceptual categories. This has led to a constricted vision of reality, skewing how science understands not just the world, but also human beings.
Martin Heidegger remarked in Being and Time that „a science’s level of development is determined by the extent to which it is capable of a crisis in its basic concepts.“ The basic concepts with which science has operated these last several hundred years are no longer adequate, certainly not in an information age, certainly not in an age where design is empirically detectable. Science faces a crisis of basic concepts. The way out of this crisis is to expand science to include design. To admit design into science is to liberate science, freeing it from restrictions that can no longer be justified.
Reprinted with the author’s permission.
See also www.designinference.com and www.designinference.com/documents/1998.10.science_and_design.htm
———————————————————————
Prof. Dr. Dr. William A. Dembski is Associate Research Professor for the Conceptual Foundations of Science at Baylor University’s Institute for Faith and Learning, Senior Fellow at the Discovery Institute’s Center for Science and Culture, and Executive Director of the International Society for Complexity, Information, and Design (www.iscid.org). He holds the following academic degrees:
B.A. in Psychology (University of Illinois at Chicago)
M.S. in Statistics (University of Illinois at Chicago)
S.M. in Mathematics (University of Chicago)
Ph.D. in Mathematics (University of Chicago)
M.A. in Philosophy (University of Illinois at Chicago)
Ph.D. in Philosophy (University of Illinois at Chicago)
M.Div. in Theology (Princeton Theological Seminary).
Fellowships/Awards:
Nancy Hirshberg Memorial Prize for best undergraduate research paper in psychology at the University of Illinois at Chicago, 1981.
National Science Foundation Graduate Fellowship for psychology and mathematics, 1982-1985
McCormick Fellowship (University of Chicago) for mathematics, 1984-1988
National Science Foundation Postdoctoral Fellowship for mathematics, 1988-1991
Northwestern University Postdoctoral Fellowship (Department of Philosophy) for history and philosophy of science, 1992-1993
Pascal Centre Research Fellowship for studies in science and religion, 1992-1995
Notre Dame Postdoctoral Fellowship (Department of Philosophy) for philosophy of religion, 1996-1997
Discovery Institute Fellowship for research in intelligent design, 1996-1999
Templeton Foundation Book Prize ($100,000) for writing book on information theory, 2000-2001
Academic positions:
Lecturer, University of Chicago, Department of Mathematics (teaching undergraduate mathematics), 1987-1988
Postdoctoral Visiting Fellow, MIT, Department of Mathematics (research in probability theory), 1988
Postdoctoral Visiting Fellow, University of Chicago, James Franck Institute (research in chaos & probability), 1989
Research Associate, Princeton University, Department of Computer Science (research in cryptography & complexity theory), 1990
Postdoctoral Fellow, Northwestern University, Department of Philosophy (teaching philosophy of science + research), 1992-1993
Independent Scholar, Center for Interdisciplinary Studies, Princeton (research in complexity, information, and design), 1993-1996
Postdoctoral Fellow, University of Notre Dame, Department of Philosophy (teaching philosophy of religion + research), 1996-1997
Adjunct Assistant Professor, University of Dallas, Department of Philosophy (teaching introduction to philosophy), 1997-1999
Fellow, Discovery Institute, Center for the Renewal of Science and Culture (research in complexity, information, and design), 1996-present
Associate Research Professor, Institute for Faith and Learning, Baylor University (research in intelligent design), 1999-present
Memberships:
Discovery Institute-senior fellow
Wilberforce Forum-senior fellow
Foundation for Thought and Ethics-academic editor
Origins & Design-associate editor
Princeton Theological Review-editorial board
Torrey Honors Program, Biola University-advisory board
American Scientific Affiliation
Evangelical Philosophical Society
Access Research Network
International Society for Complexity, Information, and Design-executive director
Other academic activities:
Endowed Lectures:
„Truth in an Age of Uncertainty and Relativism.“ Dom. Luke Child’s Lecture, Portsmouth Abbey School, 30 September 1988.
„Science, Theology, and Intelligent Design.“ Staley Lectures, Central College, Iowa, 4-5 March 1998.
„Intelligent Design: Bridging Science and Faith.“ Staley Lectures, Union University, Tennessee, 28 February – 1 March 2000.
„Intelligent Design.“ Staley Lectures, Anderson College, Anderson, South Carolina, 15 & 16 January 2002.
„The Design Revolution.“ Norton Lectures, Southern Baptist Theological Seminary, Louisville, Kentucky, 11 & 12 February 2003.
Participant, International Institute of Human Rights in Strasbourg France, 28 June to 27 July 1990.
Summer research in design, Cambridge University, sponsored by Pascal Centre (Ancaster, Ontario, Canada), 1 July to 4 August 1992.
Participant, The Status of Darwinian Theory and Origin of Life Studies, Pajaro Dunes, California, 22-24 June 1993.
Faculty in theology and science at the C. S. Lewis Summer Institute, Cosmos and Creation. Cambridge University, Queen’s College, 10-23 July 1994.
Canadian lecture tour on intelligent design (Simon Fraser University, University of Calgary, and University of Saskatchewan), sponsored by the New Scholars Society, 4-6 February 1998.
Faculty in theology and science at the C. S. Lewis International Centennial Celebration, Loose in the Fire. Oxford and Cambridge Universities, 19 July to 1 August 1998.
The Nature of Nature, conference at Baylor University, 12-15 April 2002, organized by William A. Dembski and Bruce Gordon.
Seminar Organizer, „Design, Self-Organization, and the Integrity of Creation,“ Calvin College Seminar in Christian Scholarship, 19 June – 28 July 2000. Follow-up conference 24-26 May 2001 (speakers included Alvin Plantinga, John Haught, and Del Ratzsch).
Contributor, „Prospects for Post-Darwinian Science,“ symposium, New College, Oxford, August 2000. Other contributors included Michael Denton, Peter Saunders, Mae-Wan Ho, David Berlinski, Jonathan Wells, Stephen Meyer, and Simon Conway Morris.
Participant, Symposium on Design Reasoning, Calvin College, 22-23 May 2001. Other participants were Stephen Meyer, Paul Nelson, Rob Koons, Del Ratzsch, Robin Collins, Tim & Lydia McGrew. Tim McGrew will edit the proceedings for an academic press.
Presenter, on topic of detecting design, 23-27 July 2001 at Wycliffe Hall, Oxford University in the John Templeton Oxford Seminars on Science and Christianity.
Debate with Massimo Pigliucci, „Is Intelligent Design Smart Enough?“ New York Academy of Sciences, 1 November 2001.
Debate with Michael Shermer, „Does Science Prove God?“ Clemson University, 7 November 2001.
Discussion with Stuart Kauffman, „Order for Free vs. No Free Lunch,“ Center for Advanced Studies, University of New Mexico, 13 November 2001.
Program titled „Darwin under the Microscope,“ PBS television interview for Uncommon Knowledge with Peter Robinson facing Eugenie Scott and Robert Russell, 7 December 2001
Canadian lecture tour on intelligent design (University of Guelph, University of Toronto, and McMaster University), sponsored by the Canadian Scientific and Christian Affiliation, 6-8 March 2002.
Debate titled „God or Luck: Creationism vs. Evolution,“ with Steven Darwin, professor of botany, Tulane University, New Orleans, 7 October 2002.
Publications:
Books:
The Design Inference: Eliminating Chance through Small Probabilities. Cambridge: Cambridge University Press, 1998.
Intelligent Design: The Bridge between Science and Theology. Downer’s Grove, Ill.: InterVarsity Press, 1999. [Award: Christianity Today’s Book of the Year in the category „Christianity and Culture.“]
No Free Lunch: Why Specified Complexity Cannot Be Purchased without Intelligence. Lanham, Md.: Rowman & Littlefield, 2002.
Edited Collections:
Mere Creation: Science, Faith, and Intelligent Design (proceedings of a conference on design and origins at Biola University, 14 – 17 November 1996). Downer’s Grove, Ill.: InterVarsity Press, 1998.
Science and Evidence for Design in the Universe, Proceedings of the Wethersfield Institute, vol. 9 (co-edited with Michael J. Behe and Stephen C. Meyer). San Francisco: Ignatius Press, 2000.
Unapologetic Apologetics: Meeting the Challenges of Theological Studies (co-edited with Jay Wesley Richards; selected papers from the Apologetics Seminar at Princeton Theological Seminary, 1995-1997). Downer’s Grove, Ill.: InterVarsity Press, 2001.
Signs of Intelligence: Understanding Intelligent Design (co-edited with James Kushiner). Grand Rapids, Mich.: Brazos Press, 2001.
Articles:
„Uniform Probability.“ Journal of Theoretical Probability 3(4), 1990: 611-626.
„Scientopoly: The Game of Scientism.“ Epiphany Journal 10(1&2), 1990: 110-120.
„Converting Matter into Mind: Alchemy and the Philosopher’s Stone in Cognitive Science.“ Perspectives on Science and Christian Faith 42(4), 1990: 202-226. Abridged version in Epiphany Journal 11(4), 1991: 50-76. My response to subsequent critical comment: „Conflating Matter and Mind“ in Perspectives on Science and Christian Faith 43(2), 1991: 107-111.
„Inconvenient Facts: Miracles and the Skeptical Inquirer.“ Philosophia Christi (formerly Bulletin of the Evangelical Philosophical Society) 13, 1990: 18-45.
„Randomness by Design.“ Nous 25(1), 1991: 75-106.
„Reviving the Argument from Design: Detecting Design through Small Probabilities.“ Proceedings of the 8th Biannual Conference of the Association of Christians in the Mathematical Sciences (at Wheaton College), 29 May – 1 June 1991: 101-145.
„The Incompleteness of Scientific Naturalism.“ In Darwinism: Science or Philosophy? edited by Jon Buell and Virginia Hearn (Proceedings of the Darwinism Symposium held at Southern Methodist University, 26-28 March 1992), pp. 79-94. Dallas: Foundation for Thought and Ethics, 1994.
„On the Very Possibility of Intelligent Design.“ In The Creation Hypothesis, edited by J. P. Moreland, pp. 113-138. Downers Grove: InterVarsity Press, 1994.
„What Every Theologian Should Know about Creation, Evolution, and Design.“ Princeton Theological Review 2(3), 1995: 15-21.
„Transcendent Causes and Computational Miracles.“ In Interpreting God’s Action in the World (Facets of Faith and Science, volume 4), edited by J. M. van der Meer. Lanham: The Pascal Centre for Advanced Studies in Faith and Science/ University Press of America, 1996.
„The Problem of Error in Scripture.“ Princeton Theological Review 3(1)(double issue), 1996: 22-28.
„Teaching Intelligent Design as Religion or Science?“ Princeton Theological Review 3(2), 1996: 14-18.
„Schleiermacher’s Metaphysical Critique of Miracles.“ Scottish Journal of Theology 49(4), 1996: 443-465.
„Christology and Human Development.“ FOUNDATIONS 5(1), 1997: 11-18.
„Intelligent Design as a Theory of Information“ (revision of 1997 NTSE conference paper). Perspectives on Science and Christian Faith 49(3), 1997: 180-190.
„Fruitful Interchange or Polite Chitchat? The Dialogue between Theology and Science“ (co-authored with Stephen C. Meyer). Zygon 33(3), 1998: 415-430.
„Mere Creation.“ In Mere Creation: Science, Faith, and Intelligent Design.
„Redesigning Science.“ In Mere Creation: Science, Faith, and Intelligent Design.
„Science and Design.“ First Things no. 86, October 1998: 21-27.
„Reinstating Design within Science.“ Rhetoric and Public Affairs 1(4), 1998: 503-518.
„Signs of Intelligence: A Primer on the Discernment of Intelligent Design.“ Touchstone 12(4), 1999: 76-84.
„Are We Spiritual Machines?“ First Things no. 96, October 1999: 25-31.
„Not Even False? Reassessing the Demise of British Natural Theology.“ Philosophia Christi 2nd series, 1(1), 1999: 17-43.
„Naturalism and Design.“ In Naturalism: A Critical Analysis, edited by William Lane Craig and J. P. Moreland (London: Routledge, 2000).
„Conservatives, Darwin & Design: An Exchange“ (co-authored with Larry Arnhart and Michael J. Behe). First Things no. 107 (November 2000): 23-31.
„The Third Mode of Explanation.“ In Science and Evidence for Design in the Universe, edited by Michael J. Behe, William A. Dembski, and Stephen C. Meyer (San Francisco: Ignatius, 2000).
„The Mathematics of Detecting Divine Action.“ Mathematics in a Postmodern Age: A Christian Perspective, edited by James Bradley and Russell Howell (Grand Rapids, Mich.: Eerdmans, 2001).
„The Pragmatic Nature of Mathematical Inquiry.“ Mathematics in a Postmodern Age: A Christian Perspective, edited by James Bradley and Russell Howell (Grand Rapids, Mich.: Eerdmans, 2001).
„Detecting Design by Eliminating Chance: A Response to Robin Collins.“ In Christian Scholar’s Review 30(3), Spring 2001: 343-357.
„The Inflation of Probabilistic Resources.“ In God and Design: The Teleological Argument and Modern Science, edited by Neil Manson. (London: Routledge, to appear 2002).
„Can Evolutionary Algorithms Generate Specified Complexity?“ In From Complexity to Life, edited by Niels H. Gregersen, foreword by Paul Davies (Oxford: Oxford University Press, 2002).
„Design and Information.“ To appear in Detecting Design in Creation, edited by Stephen C. Meyer, Paul A. Nelson, and John Mark Reynolds.
„Why Natural Selection Can’t Design Anything,“ Progress in Complexity, Information, and Design 1(1), 2002: iscid.org/papers/Dembski_WhyNatural_112901.pdf
„Random Predicate Logic I: A Probabilistic Approach to Vagueness,“ Progress in Complexity, Information, and Design 1(2-3), 2002: www.iscid.org/papers/Dembski_RandomPredicate_072402.pdf
„Another Way to Detect Design?“ Progress in Complexity, Information, and Design 1(4), 2002: iscid.org/papers/Dembski_DisciplinedScience_102802.pdf
„Evolution’s Logic of Credulity: An Unfettered Response to Allen Orr,“ Progress in Complexity, Information, and Design 1(4), 2002: www.iscid.org/papers/Dembski_ResponseToOrr_010703.pdf
„The Chance of the Gaps,“ in God and Design: The Teleological Argument and Modern Science, edited by Neil Manson, Routledge, forthcoming 2003.
Short Contributions:
„Reverse Diffusion-Limited Aggregation.“ Journal of Statistical Computation and Simulation 37(3&4), 1990: 231-234.
„The Fallacy of Contextualism.“ Themelios 20(3), 1995: 8-11.
„The God of the Gaps.“ Princeton Theological Review 2(2), 1995: 13-16.
„The Paradox of Politicizing the Kingdom.“ Princeton Theological Review 3(1)(double issue), 1996: 35-37.
„Alchemy, NK Boolean Style“ (review of Stuart Kauffman’s At Home in the Universe). Origins & Design 17(2), 1996: 30-32.
„Intelligent Design: The New Kid on the Block.“ The Banner 133(6), 16 March 1998: 14-16.
„The Intelligent Design Movement.“ Cosmic Pursuit 1(2), 1998: 22-26.
„The Bible by Numbers“ (review of Jeffrey Satinover’s Cracking the Bible Code). First Things, August/September 1998 (no. 85): 61-64.
„Randomness.“ In Routledge Encyclopedia of Philosophy, edited by Edward Craig. London: Routledge, 1998.
„The Last Magic“ (review of Mark Steiner’s The Applicability of Mathematics as a Philosophical Problem). Books & Culture, July/August 1999. [Award: Evangelical Press Association, First Place for 1999 in the category „Critical Reviews.“]
„Thinkable and Unthinkable“ (review of Paul Davies’s The Fifth Miracle). Books & Culture, September/October 1999: 33-35.
„The Arrow and the Archer: Reintroducing Design into Science.“ Science & Spirit 10(4), 1999(Nov/Dec): 32-34, 42.
„What Can We Reasonably Hope For? – A Millennium Symposium.“ First Things no. 99, January 2000: 19-20.
„Because It Works, That’s Why!“ (review of Y. M. Guttmann’s The Concept of Probability in Statistical Physics). Books & Culture, March/April 2000: 42-43.
„The Design Argument.“ In The History of Science and Religion in the Western Tradition: An Encyclopedia, edited by Gary B. Ferngren (New York: Garland, 2000), 65-67.
„The Limits of Natural Teleology“ (review of Robert Wright’s Nonzero: The Logic of Human Destiny). First Things no. 105 (August/September 2000): 46-51.
„Conservatives, Darwin & Design: An Exchange“ (co-authored with Larry Arnhart and Michael J. Behe). First Things no. 107 (November 2000): 23-31.
„Shamelessly Doubting Darwin,“ American Outlook (November/December 2000): 22-24.
„Intelligent Design Theory.“ In Religion in Geschichte und Gegenwart, 4th edition, edited by Hans Dieter Betz, Don S. Browning, Bernd Janowski, Eberhard Jüngel. Tübingen: Mohr Siebeck.
„What Have Butterflies Got to Do with Darwin?“ Review of Bernard d’Abrera’s Concise Atlas of Butterflies. Progress in Complexity, Information, and Design 1(1), 2002: www.iscid.org/papers/Dembski_BR_Butterflies_122101.pdf
„Detecting Design in the Natural Sciences,“ Natural History 111(3), April 2002: 76.
„The Design Argument,“ in Science and Religion: A Historical Introduction, edited by Gary B. Ferngren (Baltimore: Johns Hopkins Press, 2002), 335-344.
„How the Monkey Got His Tail,“ Books & Culture, November/December 2002: 42 (book review of S. Orzack and E. Sober, Adaptationism and Optimality).
„Detecting Design in the Natural Sciences,“ to appear in Russian translation in Poisk. Expanded version of Natural History article.
Work in Progress:
Debating Design: From Darwin to DNA, co-edited with Michael Ruse; an edited collection representing Darwinian, self-organizational, theistic evolutionist, and design-theoretic perspectives; book under contract with Cambridge University Press.
The Design Revolution: Making a New Science and Worldview, cultural and public policy implications of intelligent design; book under contract with InterVarsity Press.
Freeing Inquiry from Ideology: A Michael Polanyi Reader, co-edited with Bruce Gordon; an anthology of Michael Polanyi’s writings; book under contract with InterVarsity Press.
Uncommon Dissent: Intellectuals Who Find Darwinism Unconvincing, edited collection of essays by intellectuals who doubt Darwinism on scientific and rational grounds; book under contract with Intercollegiate Studies Institute.
The End of Christianity, coauthored with James Parker III, book under contract with Broadman & Holman.
Of Pandas and People: The Intelligent Design of Biological Systems, academic editor for third updated edition, coauthored with Michael Behe, Percival Davis, Dean Kenyon, and Jonathan Wells.
Being as Communion: The Metaphysics of Information, Templeton Book Prize project, proposal submitted to Ashgate publishers for series in science and religion.
The Patristic Understanding of Creation, co-edited with Brian Frederick; anthology of writings from the Church Fathers on creation and design.