SIMPLIFIED MODELS. Even though a complete model of a real-world experimental system would have to include everything in the universe, a more useful model is obtained by constructing a simplified representation that includes only the relevant entities and interactions, omitting everything whose effect on the outcome is considered negligible. .....
One simplifying strategy is to construct a family of models (Giere, 1988) that are variations on a basic theme. For example, we could begin with Newtonian Physics, and simplify its application by constructing a stripped-down model of a system. When applying Newton's Theory to a falling ball, a stripped-down model might ignore the effects of air resistance, and the change in gravitational force as the ball changes altitude. For some purposes this simplified model is sufficient to make a calculation (a prediction) with satisfactory accuracy. And if scientists want a more complete model, they can include one or more “correction factors” that previously were ignored. The inclusion of different correction factors produces a family of related models with varying degrees of completeness, each useful for a different situation and objective.
For example, if a bowling ball is dropped from a height of 2 meters, ignoring air resistance will allow calculation-predictions that are satisfactory for almost all purposes. But when a tennis ball falls 50 meters, if air resistance is ignored the predictions are significantly inaccurate. And a rocket will not make it to the moon based on models (used for making calculation-predictions) that do not include air resistance and the variation of gravity with altitude.
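The contrast between these scenarios can be made concrete with a small numerical sketch comparing two members of the model family: free fall in a vacuum versus free fall with air resistance. This is an illustrative model only; the masses, cross-sections, drag coefficient, air density, and the quadratic drag law itself are assumed typical values, not figures from the text.

```python
def fall_time(height_m, mass_kg, area_m2, with_drag, dt=1e-4):
    """Time to fall a given height, by simple Euler integration.
    The 'complete' model adds quadratic drag F = 0.5*rho*Cd*A*v^2
    (illustrative values for air density rho and drag coefficient Cd)."""
    g, rho, cd = 9.81, 1.2, 0.5
    v, y, t = 0.0, 0.0, 0.0
    while y < height_m:
        drag = 0.5 * rho * cd * area_m2 * v * v if with_drag else 0.0
        v += (g - drag / mass_kg) * dt
        y += v * dt
        t += dt
    return t

# Assumed properties: bowling ball ~7 kg, ~0.037 m^2; tennis ball ~57 g, ~0.0033 m^2
for label, h, m, A in [("bowling ball, 2 m", 2, 7.0, 0.037),
                       ("tennis ball, 50 m", 50, 0.057, 0.0033)]:
    t0 = fall_time(h, m, A, with_drag=False)
    t1 = fall_time(h, m, A, with_drag=True)
    print(f"{label}: vacuum {t0:.2f} s, with drag {t1:.2f} s")
```

With these assumed values, adding the drag "correction factor" changes the bowling-ball prediction by well under 1%, but changes the tennis-ball prediction by roughly 15%, which is why the simpler model is satisfactory in one situation but not the other.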
In comparing these situations there are two major variables: the weighting of factors (which depends on goals), and degrees of predictive contrast. Weighting of factors: for the moon rocket a demand for empirical accuracy is more important than the advantages of conceptual simplicity, but for most bowling ball scenarios the opposite is true. Predictive contrast: for the moon rocket there is a high degree of predictive contrast between alternative theories (one theory with air resistance and gravity variations, the other without) and the complex theory makes predictions that are more accurate; but for the bowling ball there is a low degree of predictive contrast between these theories, so empirical evaluation does not significantly favor either model.
COPING WITH COMPLEXITY. A common strategy for developing a simple theory about a complex system is to tolerate a reduction in empirical adequacy. For example, Galileo was able to develop a mathematical treatment of physics because he was willing to relax the constraints imposed by demands for empirical accuracy; he did not try to obtain an exact agreement with observations. His approach to theorizing — by focusing on the analysis of imaginary idealized systems — was controversial because it challenged the traditional criterion that exact empirical agreement is a necessary condition for an adequate theory. In this area, Galileo and his critics disagreed about a fundamental goal of science.
[note: this sub-section was not part of the original "Section 2"]
When we ask “which theory-based model is best?” a rational response depends on how we will use the model. For example, do we want to describe what happens when we drop a bowling ball 2 meters? drop a tennis ball 50 meters? fly a rocket to the moon? predict the average distance traveled by cosmic muons? describe the cosmology of our universe? For these 5 questions, respectively, most scientists will prefer 5 different theory/models: an idealized Newtonian model, ignoring air resistance (bowling ball); idealized Newtonian model that includes air resistance (tennis ball); exact Newtonian theory, with air resistance + variation of gravity with location (moon rocket); Einstein's Theory of Special Relativity (cosmic muons);* Einstein's Theory of General Relativity (Big Bang Cosmology).
A Wide Range of Domains: Each theory/model is useful for a different domain of experimental systems. Any of the five can be used for the bowling ball, but most of us will use a stripped-down simplified Newtonian Model because this is easier. Newton's Theory, in simplified or exact forms, is sufficient for low speeds. Special Relativity, which is a simplified version of General Relativity, is valid at any speed but only for uniformly moving (non-accelerated) frames of reference. General Relativity is valid for all reference frames, but it "breaks down" for very small masses where Quantum Physics is necessary. And both General Relativity & Quantum Physics fail at extremely small distances, where a Theory of Quantum Gravity is needed but is not yet available.
Constructing Models by Simplification: Scientists can begin with General Relativity, and use simplifying assumptions to mathematically convert General Relativity into Special Relativity (for muons) and then into Newtonian Theories that are convenient for the rocket, tennis ball, or bowling ball.
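This chain of simplifications can be illustrated numerically with the Lorentz factor, which measures how badly the Newtonian approximation fails at a given speed. The specific numbers below (a rocket speed near escape velocity, a cosmic muon at 0.998c with a 2.2-microsecond rest lifetime) are assumed illustrative values, not data from the text.

```python
c = 299_792_458.0                 # speed of light, m/s

def gamma(v):
    """Lorentz factor: gamma ~ 1 means Newtonian physics is a good approximation."""
    return 1.0 / (1.0 - (v / c) ** 2) ** 0.5

v_rocket = 11_000.0               # roughly Earth escape velocity, m/s
v_muon = 0.998 * c                # typical speed of a cosmic-ray muon
tau = 2.2e-6                      # muon mean lifetime at rest, seconds

print(gamma(v_rocket))            # ~ 1.0000000007 : a Newtonian model suffices
print(gamma(v_muon))              # ~ 15.8 : relativistic effects dominate

# Mean distance a muon travels before decaying:
newtonian = v_muon * tau               # ~ 660 m  (no time dilation)
relativistic = gamma(v_muon) * v_muon * tau   # ~ 10 km (with time dilation)
```

The relativistic prediction (muons surviving to reach the ground from ~10 km up) is what the cosmic-muon observations support; for the rocket, by contrast, the correction is far below measurement precision, so simplifying to Newtonian theory costs essentially nothing.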
* In a college physics course, I told another student that I became a “true believer” in Special Relativity after thinking about the logical consequences of a Mental Experiment in which light reflects off a mirror on a moving train (in 16.2); if some things are constant in Einstein's Theory of Invariance (his focus was invariance, not relativity), then other things must be relative. My friend responded by saying that he believed when he thought about data from Physical Experiments with cosmic muons. We both agreed that the other experiment also was persuasive, but the contrast between our initial responses — I was impressed by the logic of the Mental Experiment, he was more impressed by the Reality Check of the Physical Experiment — was interesting.
TENSIONS BETWEEN CONFLICTING CRITERIA. These conflicts are common. For example, in a famous statement of simplicity known as Occam's Razor — "entities should not be multiplied beyond necessity" — a preference for ontological economy ("entities should not be multiplied") can be overridden by necessity. But evaluating necessity, such as judging whether a theory revision is an improvement or ad hoc tinkering, is often difficult, and may require a deep understanding of a theory and its domain of application, plus sophisticated analysis.
A common reason for non-simplicity is a desire for empirical adequacy, since including additional components in a theory may help it predict observations more accurately and consistently. Another reason is to construct a more complete model for the composition-and-operation of systems.
Sometimes, however, there is a decision to decrease completeness in order to achieve certain types of goals. In this situation, although scientists know their model is being made less complete, whatever loss occurs due to simplification (and it may not be much) is compared with the benefits gained, in an attempt to seek a balance, to construct a theory that is optimally accurate-and-useful. Potential benefits of simplification may include an increase in cognitive utility [discussed below] by making a model easier to learn and use, or by focusing attention on the essential aspects of a model.
If it is constructed skillfully, with wise decisions about including and excluding components, a theory that is more complete is usually more empirically adequate. But not always. A model can be over-simplified by omitting relevant factors that should be included, or it can be over-complicated by including factors that could, or should, be omitted. .....
FALSE BUT USEFUL. Wimsatt (1987) discusses some ways that a false model can be scientifically useful. Even if a model is wrong in some ways, it may inspire the design of interesting experiments. It may stimulate new ways of thinking that lead to the critical examination and revision (or rejection) of another theory. It may stimulate a search for empirical patterns in data. Or it may serve as a starting point for further development; by continually refining and revising a false model, perhaps a better model can be developed.
Many of Wimsatt's descriptions of utility involve a model that is false (i.e. not totally true) due to an incomplete composition-and-operation description of components for entities, actions, or interactions. When the erroneous predictions of an incomplete model are analyzed, this can provide information about the effects of components that have been omitted or oversimplified. For example, to study how a damping force affects pendulum motion, scientists can design a series of experimental systems, and for each system they compare their observations with the predictions of several models, each with a different characterization of the damping force; then they can analyze the results, in order to evaluate the advantages and disadvantages of each characterization. Or consider the Castle-Hardy-Weinberg Model for population genetics, which intentionally assumes an idealized system that never occurs in nature; deviations from the model's predictions indicate possibilities for evolutionary change in the gene pool of a population.
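The Castle-Hardy-Weinberg example can be sketched in a few lines: the idealized model predicts genotype frequencies of p², 2pq, and q² from the allele frequencies, and deviations from those predictions are the scientifically interesting signal. The genotype counts below are hypothetical, invented for illustration.

```python
def hardy_weinberg_expected(counts):
    """Expected genotype counts (AA, Aa, aa) under the idealized
    Castle-Hardy-Weinberg model: random mating, no selection,
    no mutation, no migration, very large population."""
    n_AA, n_Aa, n_aa = counts
    n = n_AA + n_Aa + n_aa
    p = (2 * n_AA + n_Aa) / (2 * n)   # frequency of allele A
    q = 1.0 - p                        # frequency of allele a
    return (p * p * n, 2 * p * q * n, q * q * n)

# Two hypothetical populations of 1000; allele frequency p = 0.6 in both.
at_equilibrium = (360, 480, 160)   # matches the model's prediction
het_deficit = (400, 400, 200)      # fewer heterozygotes than predicted

print(hardy_weinberg_expected(at_equilibrium))   # ~ (360, 480, 160)
print(hardy_weinberg_expected(het_deficit))      # also ~ (360, 480, 160)
# The second population deviates from its own expectation; the deviation
# points to a violated assumption, e.g. inbreeding or selection.
```

As the text says, the model "intentionally assumes an idealized system that never occurs in nature"; its value lies in making the second population's heterozygote deficit visible as a possibility for evolutionary change.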
Theory evaluation can focus on plausibility or utility by asking “Is the theory an accurate representation of nature?” or “Is it useful?” This section will discuss the second question by describing scientific utility in terms of cognitive utility (for inspiring and facilitating productive thinking about a theory and its applications) and [but not in the excerpts selected for this page] research utility (for stimulating and guiding theoretical or experimental research). Theory evaluation based on utility is personalized; it will depend on point of view and context, because goals vary among scientists, and can change from one context to another.
THEORY STRUCTURE and COGNITIVE UTILITY. Differences in theory structure can produce differences in cognitive structuring and problem-solving utility, and will affect the harmony between a theory and the thinking styles — due to heredity, personal experience, and cultural influence — of a scientist or a scientific community. If competing theories differ in logical structure, evaluation will be influenced by scientists' affinity for the structure that more closely matches their preferred styles of thinking.
ALTERNATIVE REPRESENTATIONS. Even for the same theory, representations can differ.
For example, a physics theory can represent a phenomenon symbolically by words (such as saying “the earth orbits the sun in an approximately elliptical orbit”), by a visual representation (a diagram or animation depicting the sun and the orbiting earth), or by an equation (using mathematical symbolism for objects, interactions, and actions).
More generally, Newtonian theory can be described with simple algebra (as in most introductory courses), or by also using calculus, or with a variety of advanced mathematical techniques such as Hamiltonians or tensor analysis; and each mathematical formulation can be supplemented by a variety of visual and verbal explanations, and illustrative examples.
Similarly, a theory of quantum mechanics can be formulated in two very different ways: as matrix mechanics, describing particles by using matrix algebra, or as wave mechanics by using wave equations.
Although two formulations of a theory may be logically equivalent, predicting the same results, differing representations will affect how the theory is perceived and used. There will be differences in the ease of translation into mental models (i.e. in ease of learning), in the types of mental models formed, and in approaches to problem solving.
Often, cognitive utility depends on problem-solving context. For example, an algebraic version of Newtonian physics may be the easiest way to solve a simple problem, while a Hamiltonian formulation will be more useful for solving a complex astronomy problem involving the mutually influenced motions of three celestial bodies. Or consider how an alternate representation — made by defining the mathematical terms “force × distance” and “mv²/2” as the verbal terms “work” and “energy” — allows the cognitive flexibility of being able to think in terms of an equation or a work-energy conversion, or both.
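The equivalence of the two representations (equation view and work-energy view) can be verified with simple arithmetic. The numbers below (a 10 N force pushing a 2 kg mass through 4 m, from rest, frictionless) are invented for illustration.

```python
# Assumed illustrative scenario: constant 10 N force, 2 kg mass,
# initially at rest, pushed 4 m with no friction.
m, F, d = 2.0, 10.0, 4.0

# (1) Equation view: kinematics, a = F/m and v^2 = 2*a*d
a = F / m
v = (2 * a * d) ** 0.5

# (2) Work-energy view: work F*d converts into kinetic energy m*v^2/2
work = F * d
kinetic_energy = 0.5 * m * v ** 2

print(work, kinetic_energy)   # both equal 40 J (up to rounding)
```

The same physical fact can thus be held in mind as an equation to solve or as a conversion of work into energy; which mental model is more useful depends on the problem.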
SIMPLIFICATION and COGNITION. If a theory is formulated at different levels of simplification, these representations will differ in both logical content and cognitive utility. A more complete representation will (if the mind can cope with it) produce mental models that are more complete; and in some contexts these models will be more useful for solving problems. But in other contexts a simpler formulation may be more useful. For example, a simpler model may help to focus attention on those features of a system that are considered especially important.
In designing models that will be used by humans with limited cognitive capacities, there is a tension between the conflicting requirements of completeness and simplicity. It is easier for our minds to cope with a model that is simpler than the complex reality. But for models in which prediction or data processing is done by computers, the capacities for memory storage and computing speed change, so the level and nature of optimally useful complexity will also change. High-speed computers can allow the use of models — for numerical analysis of data, or for doing thought-experiment simulations (of weather, ecology, business,...) — that would be too complex and difficult if computations had to be done by a person.
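As a toy instance of such a computer-run thought experiment, the sketch below iterates a logistic-map population model; the growth rates and the "ecology" interpretation are illustrative assumptions, chosen only to show how a computer makes it cheap to explore a model that would be tedious by hand.

```python
def simulate_population(r, p0=0.5, years=50):
    """Minimal logistic-map model: p is population as a fraction of
    carrying capacity, r is an assumed growth rate."""
    p = p0
    history = [p]
    for _ in range(years):
        p = r * p * (1.0 - p)
        history.append(p)
    return history

# Sweeping the growth rate is trivial for a computer:
for r in (2.5, 3.2, 3.9):
    final = simulate_population(r)[-1]
    print(r, round(final, 3))
```

Even this tiny model shows qualitatively different behaviors (a steady state, oscillation, apparent chaos) as r varies, the kind of exploration that only becomes practical when a machine does the arithmetic.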
A SYNTHESIS? Philosophy of science and cognitive psychology overlap in areas such as the structuring of scientific theories (studied by philosophers) and the structuring and construction of mental models (studied by psychologists). Research in this exciting area of synthesis is currently producing many insights that are helping us understand the process of thinking in science, and that will be useful for improving education.
COGNITIVE UTILITY and RESEARCH UTILITY — These two aspects of SCIENTIFIC UTILITY are related, because Cognitive Utility is important for promoting Research Utility. ..... [four small sub-sections about Research Utility – Acceptance & Pursuit, Relaxed Conceptual Standards, Utility in Generating Experiments, Testability – are not included in this page, but are in A Detailed Overview of Scientific Method ]
"In the late nineteenth century, natural selection and isolation were viewed as rival explanations for the origin of new species; the evolutionary synthesis showed that the two processes were compatible and could be combined to explain the splitting of one gene pool into two." (Darden, 1991, p. 269)
Of course, a declaration that “both factors contribute to speciation” is not the end of inquiry. Scientists can still analyze an evolutionary episode to determine the roles played by each factor. They also can debate the importance of each factor in long-term evolutionary scenarios involving many species. And there can be an effort to develop theories that more effectively combine these factors and their interactions.
A different type of coexistence occurs with Valence Bond theory and Molecular Orbital theory, which use different types of simplifying approximations in order to apply the core principles of quantum mechanics for describing the characteristics of molecules. Each approach has advantages, and the choice of a preferred theory depends on the situation: on the molecule being studied, and the objectives; the abilities, experience, and thinking styles of scientists; or the computing power available for numerical analyses. Or perhaps both theories can be used. In many ways they are complementary descriptions, as in "The Blind Men and the Elephant," with each theory providing a useful perspective.
This type of coexistence (where two theories provide two perspectives) contrasts with the coexistence of causal factors in speciation (where two theories propose two potential co-agents in causation) and [in an example not selected for this page] with the non-coexistence in oxidative phosphorylation (where one theory has vanquished its former competitors).
..... The structure of the Periodic Table, originally derived in the late 1800s by inductive analysis of empirical data for chemical reactivities, with no credible theoretical mechanism to explain it, was later derived from a few fundamental principles of quantum mechanics. Explaining the Periodic Table was not the original motivation for developing quantum theory; instead, it was a pleasant surprise that provided support for the newly developed theory. And because quantum mechanics also explained many other phenomena, over a wide range of domains, it has served as a powerful unifying theory.
CONSILIENCE WITH SIMPLICITY. The previous paragraph describes how we can derive the Periodic Table's structure in two very different ways, by chemical properties and by quantum mechanics. This is an example of consilience. Another perspective on consilience is viewing it as a way to define the size-and-variety of a theory's domain, in terms of the different “classes of facts” (not just the number of facts) explained by this theory. Making a useful estimate of consilience often requires sophisticated knowledge of a domain, because it requires categorizing raw data into classes, and judging the relative importance of these classes, and their differences.
Usually scientists want to increase the consilience of a theory, but this is less impressive when it is done by sacrificing simplicity. An extreme example of ad hoc revision was described earlier [but is not included in this page]; Theory T1 achieves consilience over a large domain by having an independent theory component for every data point in the domain. But defining a collection of unrelated components as “a theory” is not a way to construct a simple consilient theory, and scientists are not impressed by this type of pseudo-unification. There is too much room for wiggling and waffling, so each extra component is viewed as a new “fudge factor” tacked onto a weak theory.
By contrast, consider Newton's postulate that the same gravitational force, governed by the same principles, operates in such widely divergent systems as a falling apple and an orbiting moon. Newton's bold step, which achieved a huge increase in consilience without any decrease in simplicity, was viewed as an impressive unification.
Although “consilience with simplicity” can be a useful guideline, it should be used wisely. Simplicity is not the only virtue (and sometimes it is not a virtue at all), so the unique characteristics of each situation should be carefully considered when judging the value of an attempted unification.
A NARROWING OF DOMAINS. Sometimes, instead of seeking a wider scope, the best strategy is to decrease the size of the domain claimed for a theory.
For example, in 1900 when Mendel's theory of genetics was rediscovered, it was assumed that a theory of Mendelian Dominance applied to all traits for all organisms. But further experimentation showed that for some traits the predictions made by this theory were incorrect. Scientists resolved these anomalies, not by revising their theory, but by redefining its scope in order to place the troublesome observations outside the domain of Dominance. Their initial theory was thus modified into a sub-theory with a narrower scope, and other sub-theories were invented for parts of the original domain not adequately described by Dominance. Eventually, these sub-theories were combined to construct an overall mega-theory of genetics that, compared with the initial theory of dominance, had the same wide scope, with greater empirical adequacy but less simplicity.
Two types of coexistence were described earlier: when each competing theory (proposing natural selection or isolation) describes a causal factor, or when each theory-model (Valence Bond or Molecular Orbital) provides a useful perspective. A third type of coexistence, described in the paragraph above, is when sub-theories that are in competition (because they describe the same type of phenomena) “split up” the domain claimed by a mega-theory that contains both sub-theories as components; each sub-theory has its own sub-domain (consisting of those systems in which the sub-theory is valid) within the larger domain of the mega-theory.
Newtonian Physics is another theory whose initially wide domain (every system in the universe!) has been narrowed. This change occurred in two phases. In 1905 the theory of Special Relativity [which Einstein wanted to call his theory of Invariance] declared that Newton's theory is not valid for objects moving at high speed. And in 1925, quantum mechanics declared that it is not valid for objects with small mass, such as electrons. Each of these new theories could derive Newtonian Physics as a special case; within the domain where Newtonian Physics was approximately valid, its predictions were duplicated by special relativity (for slow objects) and by quantum mechanics (for high-mass objects). But the reverse was not true; special relativity and quantum mechanics could not be derived from Newton's theories, which made incorrect predictions for fast objects and low-mass objects.
Even though quantum mechanics is currently considered valid for all systems, it is self-limited in an interesting way. For some questions the theory's answer is “I refuse to answer the question” or “the answer cannot be known.” But a response of “no comment” is better than answers that are confidently clear yet wrong, such as those offered by the earlier Bohr Model. Some of the non-answers offered by quantum mechanics imply that there are limits to human knowledge. This may be frustrating to some people, but if that is the way nature is, then it is better for scientists to admit this (in their theories) and to say “sorry, we don't know that and we probably never will.”