1. Theories are "any effort to understand, to describe-and-explain (for yourself & others) what you have observed." Theories have practical value in many areas of life, not just in science, because "a wide range of useful-and-accurate theories, plus the ability to use your theories skillfully, will help you make accurate predictions that, along with your good values and priorities, will help you make wise decisions."
2. Theories are generated using Creative-and-Critical Thinking, by selecting (a known old theory) or inventing (a new theory) with Free Generation (when flexibility of thinking is encouraged by reducing restrictions on thinking) and Guided Generation (when creative thinking is stimulated-and-guided by critical thinking, in evaluations that can include data analysis).
• Theories are designed by combining creative generation (above) with critical evaluation (below).
3. Theories are evaluated by comparing theory-based Predictions with reality-based Observations, in evaluative Reality Checks that "provide empirical feedback about how closely the way things are in your thinking (when you make Predictions using your Theories) matches the way things are in reality (when you make Observations). ... Based on this feedback, you can revise your theories or applications-of-theories, and improve the accuracy of your predictions and the wisdom of your decisions." We use Reality Checks (which show the empirical Degree of Agreement between predictions & observations) plus other evaluation factors (empirical Degree of Predictive Contrast, plus Conceptual Factors and Cultural-Personal Factors) to estimate a theory status that can range from very low to very high.
4. Predictions are made by "using if-then logic in one or more ways, with theory/model-based deduction or simulation ... or experience-based inductive extrapolation." { more about Making Predictions is in the section below }
5. Predictions are used in many ways, including the generation of theory-options (in #2 above) guided by the evaluation of theory-options (in #3), which uses Predictions in the Reality Checks that are the focus of question-answering in Science. A similar process of Guided Generation also occurs to generate options for products, strategies, or activities, using Quality Checks that are the focus of problem-solving in General Design. In both Science and General Design an important activity is Designing Experiments by imagining different situations and making quick-and-rough Predictions by asking “what might happen?” These exploratory Mental Experiments let you consider a variety of Experimental Systems and evaluate them by asking “what could we learn, and could it be useful?” so you can decide whether to carefully design and then run a corresponding Physical Experiment, or "invest more effort in Mental Experiments... so you can make improved Predictions that are more thorough, accurate, and precise."
Above, the introductory overview-summary describes how we design theories, and use theories to understand and to make predictions, in science & general design. Now we'll carefully examine #4, applying theory-based models to make predictions in two stages, when we MAKE Models and USE Models:
Theories and Models are similar, but usually Models are less general and more simplified: Models are less general than Theories, with a smaller domain of application, because we construct a Model by applying general Theories to a specific Experimental System, or a certain type of system.* During this process of constructing theory-based Models,* usually our Models become more simplified, compared with the Theory and with reality.
Composition-and-Operation: In science, typically we try to describe-and-explain “what is in a System, what is happening, and why” by constructing a Model that is a simplified representation of the System's composition (what its parts are, and how they are related) and operation (what the parts do, individually & together, plus a mechanism to describe and/or explain "why"), based on our Theories about the System.
Representations, External & Internal: Our external representations of a Model — the External Models (Visible Models, Audible Models, Mathematical Models,...) that we use for communicating ideas, and to support internal thinking — can take many forms (verbal, visual, mathematical, and/or physical), abstract or concrete, expressed in words (audible or visual), sketches, photos, diagrams, graphs, tables, equations, prototypes, or in other ways. Each of us personally constructs our own Mental Models that are internal representations of a Model.
* For any system, or type of system, many different models can be constructed by making different decisions about which aspects of a “total (i.e. nonsimplified) model” to include & exclude, what approximations to make, what representations to use, and so on. Different models can lead to differing predictions, in the next stage of application:
* A model can be applied for "a specific Experimental System [so you can make predictions for an experiment using this System] or a specific type of system" because the Model is more useful, compared with the Theory itself, for achieving specific objectives when it's applied to a specific type of system. For example, different models of chemical bonding (Valence Bond, VSEPR, Molecular Orbital,...) have been developed based on Quantum Mechanics (a general theory that is difficult to apply for most systems) with the objective of helping us construct useful representations (external & internal) for chemical bonding, and make useful predictions for a wide range of chemical systems.
We'll look at the three ways to make predictions that are listed in #4: experience-based inductive generalization, and theory/model-based deduction or simulation.
• Inductive Generalizations
We use experience-based inductive generalization by assuming that what happened before, in similar situations, will happen again.
When generalizing, we can interpolate within a known domain (in which observations are known, or a theory is assumed to be valid), or extrapolate outside this domain. But even "within a domain" there will be some differences in some characteristics of the known and new situations, and outside a domain there will be some similarities. Therefore it can be useful to think about ranges-of-extrapolation for various characteristics.
In a time-generalization we simply extrapolate from past to future.
In an interpolative domain-generalization, we generalize from one situation to another very-similar situation.
An extrapolative domain-generalization is more complex and more risky, because in addition to assuming "past to future" similarities, we also extrapolate from a limited domain into a larger domain that includes a wider range of system-situations. We assume that known phenomena, already observed to occur in one domain, also will occur in another domain that is similar but not identical. For example, we might assume that educational results observed in a population of 7-year-olds also will occur with 12-year-olds, or that results in one school also will occur in another school; or that physiological results from rats also will occur in humans; or that sociological results in Madison also will occur in Milwaukee, Austin, or Ann Arbor. In each example we know that the domains are similar in some ways but different in other ways, so instead of simply predicting that “all results will be the same” we can predict how the results will be similar, and different, when moving from the already-observed domain to the new domain.
Interpolating and Extrapolating: For all if-then reasoning (for induction plus deduction & simulation) we can interpolate by making predictions inside the range of domains for which a theory (and its theory-based models) is assumed to be valid,* or extrapolate by predicting outside this range of domains. / * Or we can define interpolation as predictions for systems within a domain-of-systems for which we have experience, because observations are available.
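To make this distinction concrete, here is a minimal sketch; the data values and the straight-line model are hypothetical, chosen only for illustration.

```python
# A minimal illustration of interpolation vs. extrapolation with a fitted model.
# The data values and the straight-line model are hypothetical, for illustration only.
import numpy as np

# "Known domain": observations were made only for x between 0 and 10.
x_obs = np.array([0.0, 2.0, 4.0, 6.0, 8.0, 10.0])
y_obs = np.array([1.1, 2.9, 5.2, 6.8, 9.1, 11.0])

# Fit a simple model (a straight line) to the observations.
slope, intercept = np.polyfit(x_obs, y_obs, 1)

def predict(x):
    return slope * x + intercept

# Interpolation: predicting inside the observed domain (relatively safe).
print("prediction at x = 5  (interpolation):", round(predict(5.0), 2))

# Extrapolation: predicting far outside the observed domain.
# This assumes the linear pattern continues where we have no observations,
# so the prediction is much riskier.
print("prediction at x = 50 (extrapolation):", round(predict(50.0), 2))
```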
• Deductions or Simulations
Simple Deductions: Sometimes we can predict by using simple deductive logic, or its mathematical equivalent. If we drop a ball near the surface of earth, and our theory-based model for the experimental system (ball, earth, air,...) includes Newton's Laws of Motion in simplified form, predictions are easy; we just select one or more appropriate equations, and use the equation(s) to make calculation-predictions about the ball's motion.
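For a concrete illustration, here is a minimal sketch of this kind of calculation-prediction, using the simplified model of constant gravity with air resistance ignored; the drop height is a made-up value.

```python
# A minimal calculation-prediction for dropping a ball, using a simplified
# theory-based model (constant gravity, air resistance ignored).
# The drop height is a made-up example value.
import math

g = 9.8          # gravitational acceleration near earth's surface, m/s^2
height = 1.5     # hypothetical drop height, in meters

# From the simplified model:  height = (1/2) * g * t^2
fall_time = math.sqrt(2 * height / g)   # predicted time to reach the ground
impact_speed = g * fall_time            # predicted speed at impact

print(f"predicted fall time:    {fall_time:.2f} s")
print(f"predicted impact speed: {impact_speed:.2f} m/s")
```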
Complex Deductions become Simulations: We can use the same simple theory, Newton's Laws of Motion, if we want to send a rocket to the moon. But now our predictions must be extremely accurate, which requires a complex theory-based model (of the complex experimental system, with several properties that continually change during the journey) and an immense number of calculations (done for many points in the journey, to predict the cumulative effects of the accelerations caused by forces). For practical purposes these calculations must be made using a computer that is programmed with information about the system's composition (earth, moon, air, rocket, fuel,..., plus mechanisms for adjustments of motion during the journey) and operation (described by equations based on physical principles, plus data from previous observations). This computational method of predicting what will happen — by constructing a theory-based model for the composition-and-operation of the experimental system, and then “running this model” — is called a simulation.
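As a toy illustration of “running a model” (not the actual rocket program, which is far more complex), here is a sketch that predicts a fall by stepping the model through time numerically; the drag coefficient, starting height, and time step are hypothetical values chosen only to show the method.

```python
# A toy version of "running a model": instead of solving one equation directly,
# we step the model forward through time in small increments.
# The drag coefficient, starting height, and time step are hypothetical values.
g = 9.8        # m/s^2
drag = 0.4     # hypothetical air-drag coefficient (per second)
dt = 0.01      # time step, in seconds

position = 10.0   # start 10 m above the ground
velocity = 0.0
time = 0.0

# Run the model until the object reaches the ground.
while position > 0.0:
    acceleration = -g - drag * velocity   # simplified force model: gravity plus linear drag
    velocity += acceleration * dt
    position += velocity * dt
    time += dt

print(f"simulated time to reach the ground: {time:.2f} s")
```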
Deduction and/or Simulation: These two examples show that deduction & simulation are related, and vary along a continuum. At one end of the range a simple deduction is sufficient, at the other end a complex simulation is required, and in-between a choice is possible. Which model will be most useful? It depends on the system and your objectives.
Above I say that potential simulations vary along a "continuum" but they actually vary in many ways, with wide-ranging possibilities:
Quantitative Computational Simulations
These simulations are Mental Experiments that can be designed, for a wide variety of situations, to make predictions based on numerical models. Since the 1950s/1960s and the rise of faster computers with larger memories, the power of numerical modeling has increased dramatically. Scientists, engineers, and other designers now routinely take advantage of modern information technologies, using computational programs (generic or customized), databases, and spreadsheets.
Here are some principles to consider:
When calculations are done using a computer, the "mental" part of these Mental Experiments is writing the computer program that tells the computer what to do.
A major difference in situations is illustrated by comparing a moon rocket (a stable complex system with deterministic behaviors that are mathematically predictable) and weather forecasting (an unstable complex system with deterministic behaviors whose long-term predictability is limited, so its models use "chaos" theories!).
For most systems, a simulation-model must use some probabilistic mathematics. A system can involve behaviors – such as a “fair” dice roll, roulette spin, or card deal in a casino, or the gene shuffling described by Mendelian Genetics – that are not chaotic (like weather), yet for our predictions we must use probabilities; the semi-predictability of human choice is a key factor in simulation-predictions used to develop strategies for a corporation's business plans or a government's economic policies; we can predict probabilities for the occurrence of rare extreme natural events (extremely strong winds, powerful earthquakes,...) and for their probable effects on various types of tall buildings with differing construction techniques and materials; and we can run simulations to make probabilistic predictions for the outcomes of sports events, political elections, and many other situations. { a minimal example of this kind of probabilistic simulation is sketched below, after these principles }
Many simulation-models are hybrids, using a combination of deterministic math and probabilistic math. For each math-input into a model, and for the model's probabilistic predictions, we can have differing degrees of justifiable confidence. Interpreting the meaning of a probabilistic prediction is a philosophical question that has practical importance. / For a probabilistic math-input or for a probabilistic conclusion, our statistically estimated evaluations (for reliability & validity, precision & accuracy) can range from very high to very low.
When you make predictions "the quality of your predicting depends on the quality of your theory(s) and the quality when you apply the theory(s) to make predictions. Quality Control is especially important when a theory is used to construct a theory-based model for a simulation-Prediction, because this type of Theory Application can be especially difficult."
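Here is the probabilistic-simulation example promised above: a minimal Monte Carlo sketch of Mendelian gene shuffling for one trait, with two heterozygous (Aa x Aa) parents; the family sizes are arbitrary, chosen only to show how observed percentages scatter around the theory-based prediction of 25%.

```python
# A minimal Monte Carlo simulation of Mendelian gene shuffling: two heterozygous
# (Aa x Aa) parents, one gene, and a recessive trait that appears only in aa
# offspring. The theory predicts about 25% aa; "running the model" shows how much
# the observed percentage can scatter around that prediction.
import random

def simulate_offspring(n):
    """Return how many of n simulated offspring show the recessive trait."""
    recessive_count = 0
    for _ in range(n):
        allele_from_parent_1 = random.choice("Aa")   # each parent passes one allele at random
        allele_from_parent_2 = random.choice("Aa")
        if allele_from_parent_1 == "a" and allele_from_parent_2 == "a":
            recessive_count += 1
    return recessive_count

for n in (4, 10, 1000):   # arbitrary "family" sizes, small and large
    observed = simulate_offspring(n)
    print(f"{observed} of {n} offspring show the recessive trait "
          f"({100 * observed / n:.1f}%; the theory predicts 25%)")
```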
Although the term simulation typically means quantitative simulation, most mental experiments are done by “running the system” in a mental simulation, to imagine what will happen, to make a prediction that can be quantitative and/or qualitative.
Qualitative Conceptual Simulations
Mental Experiments can be used, as with the Quantitative Simulations above, to extrapolate known observations into the future or into new domains. Sometimes you are limited to “imagining” because an experimental system is too difficult (perhaps impossible) to run physically, or is too expensive. Mental Experiments (aka Thought Experiments) can be used:
in all areas of science, to think about principles; for example, historical examples in physics include Einstein's Photon Ride (for invariances that produce relativities), Maxwell's Demon (for the Second Law of Thermodynamics), and Schrödinger's Cat (for quantum physics, which does not support foolish mystical physics);
in other areas of life, to stimulate thinking about law (by posing “hypotheticals”), ethics & values, public policy, philosophy,...; in the classroom, as with Socratic Teaching, and outside.
Mental-and-Physical Simulations
During a Mental Experiment you can support your imagination with sensory input from a Physical Model of an Experimental System or of a Solution-Option (with a prototype, as in a scale model).
Using Prototypes to make Observations or Predictions: You can use a physical model in two ways, to make Observations or to make Predictions, or both. For example, you can “discover” a theory/model to explain astronomical phenomena by running a prototype,* a physical model that uses balls to represent the system of sun/earth/moon. By using a light for the sun, and moving the balls to different relative positions (as they would be at different times of the day, month, and year), you can make observations (the positions of sun & moon in the sky, the phases of the moon,...) in a Physical Experiment, although "observing" from different locations on the earth, as it rotates, will require some imagining in a Mental Experiment. Even without a light, you can predict in a Physical-and-Mental Simulation by imagining what you would observe (the positions, phases,...) from a particular location on earth.
* In a Prototype-Model, some things (but not all) are similar to the full-size real System. An important part of understanding is knowing which aspects of a model are and aren't intended to accurately represent an Experimental System, such as the real Solar System compared with balls & lights in a physical model, or a full-size Ferrari versus a scale model.
Of course, you also can use scale models in physical experiments (this is common practice for engineering), as in testing the aerodynamics of new shape-options for a large semi truck by using a much smaller scale-model of it (with a new shape) in a wind tunnel.
Here are some details: You can darken a room, turn on a lamp to be the sun, use one ball for the moon, and for the earth another ball with a marker (optional) to show your location; or let your head be the earth, with your eyes as your location. While you mimic the earth's day (24 hours) and moon's orbit (29.5 days from full moon to full moon) by rotating the earth and orbiting the moon around the earth, observe the brightly lit part of the moon-ball. Based on your observing-and-thinking, you should be able to explain (by using retroductive logic while trying different earth-rotations) why the moon & sun appear to rise in the east and set in the west, why you see the changing phases of the moon, which side of a crescent moon (east or west, left or right) is illuminated before or after a full moon, and why there is a correlative-and-causal pattern relating moon phases and the associated times (relative to sunrises & sunsets) of moonrises & moonsets.
By moving the earth around the sun while its axis keeps pointing in the same direction (like a gyroscope), so that its tilt relative to the sun changes (what is the tilt at each solstice, and at each equinox?), you can represent other phenomena. This model will help you explain why summer occurs at different times of the year in the northern & southern hemispheres, and why things look different (in what ways, and why?) when viewed from America and Australia.
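If you want a quantitative companion to this ball-and-lamp model, here is a minimal sketch based on simplified geometry (ignoring orbital tilt and eccentricity): the illuminated fraction of the moon's visible face is approximately (1 - cos θ)/2, where θ is the sun-earth-moon angle that grows through the 29.5-day cycle.

```python
# A simplified geometric model of moon phases: the illuminated fraction of the
# moon's visible face is approximately (1 - cos(elongation)) / 2, where the
# elongation (sun-earth-moon angle) grows steadily through the 29.5-day cycle.
# This sketch ignores orbital tilt and eccentricity.
import math

SYNODIC_MONTH = 29.5   # days in one cycle of phases (full moon to full moon)

def illuminated_fraction(days_since_new_moon):
    elongation = 2 * math.pi * days_since_new_moon / SYNODIC_MONTH
    return (1 - math.cos(elongation)) / 2

for day in range(0, 30, 3):
    fraction = illuminated_fraction(day)
    print(f"day {day:2d}: about {100 * fraction:3.0f}% of the moon's face is lit")
```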
When we ask "which theory is best?" or "which theory-based model is best?", the two main types of goal-criteria are accuracy and utility. We want a theory (or model) to be accurate, to correctly describe what is happening (or has happened, or will happen) in the world. And we want a theory (or model) to be useful.
There is always more than one way to make a Model of a System, and often various models are useful in different ways. Therefore, deciding “which model is best, when all things are considered” will depend on the situation, plus your priorities (in a weighting of goals) when you want a model to achieve multiple goals.*
You'll find informative examples — asking "which model is best?" if we want to understand the motion of a bowling ball, tennis ball, moon rocket, cosmic muon, or expanding universe — in 5 Models Constructed from a Theory. It shows: why most scientists will prefer a different model for each situation; and why they might claim that a Theory (or Theory-based Model) is approximately correct (and define what they mean by "approximately"), and perhaps limit the domain of theory-application (re: type of Experimental Systems) for which this claim is made.
Quality Checks involving Reality Checks: When we define goal-criteria for Predictive Accuracy — tested in Reality Checks that will be used for Quality Checks to evaluate the Quality of a Theory — the desired “closeness of matching” can span a wide range, from low accuracy (with a very-rough match) to high accuracy (with very-exact matching). Maybe an approximate Predictive Accuracy will be “good enough” for our purposes, as in a low-stakes drop of a tennis ball where quick-and-rough “back of an envelope” mentally calculated estimates will be sufficient. Or maybe “good enough” requires much higher accuracy, as in a high-stakes project for a moon rocket that costs billions of dollars and risks the safety of passengers.
This "5 Models..." sub-section is in a page about Theories and Models that "shows some advantages & disadvantages [for evaluation criteria that include a model's cognitive utility and predictive accuracy, and research utility for stimulating-and-guiding theoretical or experimental research] of various ways to convert Theories into Models, and to define the domains of alternative Theories or theory-based Models." The cognitive utility of a theory (or model) is important because you want a theory/model to let you make internal representations and external representations that are useful for productive thinking.
* Although deciding "which model is most useful" may vary with the situation, scientists can develop a rationally justifiable confidence in the benefits of various models, regarding their plausibility (are they likely to be approximately true) and their utilities. Due to this justifiable confidence, we shouldn't get carried away with the foolish claims of radical postmodern relativism.
I.O.U. — This section (and "5 Models...") will be revised in the near future, maybe in June, as part of a major revision in the way I'm describing Science Process.
( Rational Responses to a Failed Reality Check )
I.O.U. — This section will be written sometime soon, maybe in late June. Here are some ideas that will be in it:
revising theories is an essential activity in science, and in other areas of life – with transfer into those areas whenever we ask “what is the evidence-and-logic supporting your claim?” / one application of this question is the analysis of fallacies - this could be a fascinating activity that's an extension of scientific logic
or perhaps we should think about sci-logic as a special case of more widely applicable principles? either way, there is transfer and many options for good educational activities
if observations ≠ predictions, should Theory be rejected or revised? maybe.
we should check each factor going into Predictions, and ask "is it OK or should it be revised?"
some of these factors are: the theory + supplementary theories (e.g. your theories about the Experimental System, as in revising a theory about planets) and actualization strategies for how to use "Theory + System Theory" to make a Theory-based Model, and apply it to make Predictions
maybe you can adjust claims for the domain (make the domain narrower so the failed Theory/Model applies to fewer systems in a domain, but not the one(s) for which it failed the Reality Check)
note: these factors are explained in part of my Sci-Method pages (summaries from my PhD work) so I'll condense them here, and make links to other relevant sections & subsections, beginning with a brief summary.
we also can ask analogous critical-thinking questions about Observations -- maybe they're wrong, and Predictions are OK? this is a possibility to consider
what should you do when someone says, responding to an observation that isn't consistent with a claim they have made, "that's the exception that proves the rule"?
I'll criticize this ludicrous statement briefly here, then will examine it more carefully in another page / it's a personal pet peeve, and I would support a campaign to get this disgusting phrase (an irrational response to a failed Reality Check) out of our language.
I'll write a story-scenario with a claim & failed reality check --> claimer's defense that this is "the exception that proves the rule."
a critical thinking response -- if you want to be confrontational, instead of just silently thinking less of the speaker, ask them to "please explain" and try to discover exactly what they mean — are they actually serious, and think they have said something logical or clever? are they trying to "change the subject" and avoid a conclusion that they're wrong? is it an attempt at humor? of all the possible reasons, precisely why did they say this foolish thing?
then show why they're wrong (in almost 100% of actual cases)* -- if it's seriously intended to be an actual logical defense, it's the logical equivalent (for a strong argument) of saying "your mother wears combat boots"
an exception may not prove falsity, but it's certainly evidence against a claim, not (as this phrase implies) support for it. a failed reality check should lead to revising the theory or admitting its weakness (because it only applies for part of the domain, or there are other factors that cause the exception, etc., as in the first paragraph of this section)
* links -- the two main explanations for what the phrase originally meant (if we assume that it had any rationality at all) are briefly summarized and described in more detail, and both show why using this phrase as a defense against a failed Reality Check is illogical and (if listeners are thinking logically) futile.
Precise Definitions: To improve the clarity of communication in science & education, all terms should be precisely defined, with one clear meaning. As one strategy for improving communication and reducing confusion, I recommend Writing a Glossary for NGSS because this "will help to avoid problems that could occur if the new science standards are interpreted and used in ways that are too loose or too rigid."
a claim and confession: For important terms (theory, model, hypothesis, prediction) there are many definitions, and many of the relationships between terms are complex and sophisticated. I try to be consistent in using terms, but occasionally I fail, and so do other authors.
Unfortunately, when terms have multiple definitions, this ambiguity causes confusion. The problem is described in a web-page that was a section of my dissertation, with a title that explains its objective: Coping with Confusion in Terminology. A useful principle is to use...
Gracious Interpretations: Sometimes an author will use important terms inconsistently. And different authors may use terms in different ways. A useful strategy for coping with confusion is to be gracious and practical. If you see a term being used in a way you don't expect, instead of just thinking "this is wrong," look at the context to determine how an author is using a term, and “think with this meaning” to interpret what the author is writing, to understand the intended meaning.
In my descriptions of science & design, beginning in my PhD work, I often use definitions recommended by Ronald Giere. He defines a hypothesis as a claim that a System-Model and the actual System are similar (in some ways, to some degree of accuracy) so the System-Model accurately describes the System and is a foundation for explaining its behavior; thus, hypothesis and model are closely related but are not identical. My use of hypothesis (it's a proposed explanation) is consistent with the most common definitions used by scholars (including educators), as in the Framework for Science Education (which says "it is a plausible explanation for an observed phenomenon"; they also describe models) for the Next Generation Science Standards. Some authors treat hypothesis and prediction as synonyms, which is unusual and causes confusion, but... in some ways a hypothesis can be similar to a prediction, as in a statement (that I think is OK) from chemistry.about.com that "you might hypothesize that cleaning effectiveness is not affected by which detergent you use."
You can see my current efforts at being clear and precise when defining-and-explaining terms, in Theory, Model, Hypothesis, Explanation or in a shorter summary. For example, I describe how we make predictions by constructing a Theory-based Model (of an Experimental System) which becomes a Hypothesis that we use for making Predictions to use in an evaluative Reality Check with Hypothetico-Deductive Logic.
Important terms are commonly used in various ways by different authors, or even by the same author. Here are some of the ways:
Theory
Scientists (and others) often define a theory as an explanation that has strong support so it's highly plausible, and has wide scope. Should we use these two restrictions? Maybe.
Support: Generally a "theory" has strong support. But we discuss Ptolemaic Theory (aka Ptolemaic Model, Ptolemaic System) despite its low scientific support. And some non-scientists declare that “it's just a theory” to argue that a particular theory is “not a fact” (here, they correctly distinguish between theory and fact) and (by using theory in a non-scientific way) that it therefore can have low support and low plausibility.
Scope: It can be useful to define an explanation as a theory without demanding that it must have a wide scope.
My initial model of Science Process defines a theory as "a humanly constructed representation intended to describe and/or explain the observed phenomena in a specified domain of nature," with no restrictions on support or scope. Sometimes this non-restrictive general definition also is used in Design Process, especially when defining a theory as one kind of design objective (along with a product, activity, or strategy). In all areas of life, including Science, we try to generate (by selection or invention) a theory whenever we "want to know ‘what is happening and why?’, to describe-and-explain" in any situation. {educational benefits of a non-restrictive definition}
Model
As described above, when explaining "how we Make Predictions by...", I'm trying to use Precise Definitions. But in other places, I use the same term "in several ways," and so do other authors. For example,
I.O.U. — The rest of this section needs developing-and-revising. Maybe I'll do this in January.
In the new standards for science education (NGSS), [[ here I'll describe the MANY ways "model" is used, with a wide variety of meanings ]]
Even in this website, the term "model" has several meanings: as is common in science, it's defined the way I use it (a System-Model is constructed by applying a theory to a specific System), but it also can be used in other ways.
I.O.U. — In late June, this section and creative Analytical Thinking will be developed more thoroughly, to describe two of the Scientific & Engineering Practices in NGSS and how we can help students improve these skills.
In science (and when science is used during general design), predictions are compared with observations to critically evaluate theory-based explanatory models, which provides feedback for creatively generating theories by guided generation. Creative analyses of experimental data (predictions or observations) can help a scientist/designer do both of these related design-functions, evaluation and generation, for a model (in science) or (in general design) for a product, activity, or strategy.
Representations of a theory-based model can be "verbal, visual, mathematical, and/or physical, expressed in words, sketches, photos, diagrams, graphs, tables, equations, prototypes, or in other ways." When data is analyzed in some of these ways — for example, by organizing it in a table, or graphing it (on paper or with a computer program) — this can help you think in other ways about the data, and how you can use it to generate & evaluate options for theory-based models. One way to help students develop various types of ideas-and-skills (mathematical, visual-spatial,...) is a simple instructional activity in Data Analysis by Finding Patterns and by Graphing.
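As a small illustration of finding patterns, here is a sketch that organizes hypothetical time-and-distance data and looks at its differences; the same pattern could also be found by graphing.

```python
# A small example of "finding patterns" by organizing data and looking at its
# differences. The time-and-distance measurements are hypothetical; the same
# pattern could also be seen by graphing the data.
times = [0, 1, 2, 3, 4, 5]                    # seconds
distances = [0.0, 0.5, 2.0, 4.5, 8.0, 12.5]   # meters (hypothetical data)

# Organize the data as a simple table.
for t, d in zip(times, distances):
    print(f"t = {t} s   distance = {d} m")

# Transform the data: how much distance is gained in each second?
gains = [distances[i + 1] - distances[i] for i in range(len(distances) - 1)]
changes_in_gain = [gains[i + 1] - gains[i] for i in range(len(gains) - 1)]

print("distance gained each second:", gains)            # keeps increasing
print("change in that gain:        ", changes_in_gain)  # roughly constant, suggesting constant acceleration
```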
Predictions with Probabilities — Statistical Analysis of Observations
The logical/mathematical techniques of probability (using a theory-based model to make probabilistic predictions) and statistics (analyzing observations to detect patterns and infer a theory/model) are closely related. They use deductive logic and retroductive logic, respectively, with reversed questions: probability asks, in future tense, “This is the theory, so what will be the observations?”, while statistics asks, in past tense, “These were the observations, so what could be the theory?”
With probabilistic predictions or statistical observations, similar analytical methods and thinking skills are useful, so I'll use the term “statistical analysis” for both.
When a theory makes probabilistic predictions, theory evaluation is challenging (logically, mathematically, scientifically) and doing it well requires special skills. For example, with a system where Mendelian Genetics predicts that a recessive trait will appear in 25% of offspring, what is the Degree of Agreement (between predictions & observations) if recessive traits are observed in 1 of 4, or 2 of 10, or 200 of 1000, or 245 of 1000? And in each case, how strong is the support or non-support for the theory-based model?
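One possible way to quantify this is sketched below: measure how far each observed count is from the predicted 25%, in standard deviations of the binomial distribution (a normal-approximation z-score). This is only one reasonable analysis among several.

```python
# One way to quantify the Degree of Agreement for the Mendelian example: measure
# how far each observed count is from the predicted 25%, in standard deviations
# of the binomial distribution (a normal-approximation z-score). This is only one
# reasonable analysis among several.
import math

p_predicted = 0.25   # theory-based prediction: 25% of offspring show the recessive trait

for observed, n in [(1, 4), (2, 10), (200, 1000), (245, 1000)]:
    expected = n * p_predicted
    std_dev = math.sqrt(n * p_predicted * (1 - p_predicted))
    z = (observed - expected) / std_dev
    print(f"{observed:>3} of {n:>4}: observed {100 * observed / n:5.1f}%, "
          f"expected {100 * p_predicted:.0f}%, z = {z:+.2f}")
```

With this measure, 1 of 4 matches the prediction exactly, and 2 of 10 and 245 of 1000 are small deviations; but 200 of 1000, although it is the same 20% as "2 of 10", is strong evidence against the simple 25% model, because the large sample makes the prediction much more precise.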
Statistical analysis is useful for evaluating many types of Experimental Design, such as the design of simulations to make probabilistic predictions, and (for Mental Experiments or Physical Experiments) when you ask “is the experimental group large enough, and does it accurately represent the whole population?” or “what are the sources of random errors & systematic errors, and their effects?”
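For instance, here is a small simulation sketch of the “is the group large enough?” question; the population proportion of 0.25 is hypothetical. It shows how the random scatter of an estimate shrinks as the sample grows, while a systematic error would not shrink this way.

```python
# A small simulation of the "is the experimental group large enough?" question:
# estimate a population proportion (hypothetically 0.25) from samples of different
# sizes, and watch the random scatter shrink as n grows. A systematic error (from
# biased sampling or measurement) would shift every estimate and would not shrink.
import random
import statistics

TRUE_PROPORTION = 0.25   # hypothetical value for the whole population
TRIALS = 1000            # repeat each sample many times to see the scatter

for n in (10, 100, 1000):
    estimates = []
    for _ in range(TRIALS):
        hits = sum(1 for _ in range(n) if random.random() < TRUE_PROPORTION)
        estimates.append(hits / n)
    print(f"n = {n:4d}: typical estimate {statistics.mean(estimates):.3f}, "
          f"random scatter (std dev) {statistics.stdev(estimates):.3f}")
```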
Doing statistical analysis well requires skills (conceptual & procedural, verbal, visual, mathematical,* both intuitive & technical) that are useful in all types of design, for science and general design. Statistical competence is important, in everyday life and in all professions, for “risk-and-reward analysis” and in many other ways, so educators should place a high priority on helping students develop ideas-and-skills (as in this overview of statistical concepts) for “thinking with statistics” (intuitively and technically) with or without a computer. The learning of statistics should be fascinating for students, due to the wide variety of instructional possibilities (for questions, illustrative examples, interpretations, and activities) that can be generated by a teacher who is diligent (to find) and/or creative (to invent), and who uses appropriate sequencing & pacing.
* To communicate statistical concepts and claims, different modes-of-representation (verbal, visual, and mathematical, in graphs, diagrams,...) are used in various combinations, in a wide variety of ways. Students can observe how these modes are used, learn how to understand each mode (individually and in combinations) and how to translate ideas from one mode into another, and explore possibilities for thinking more productively.
Thinking is examined most deeply in Creative-and-Critical Productive Thinking. It begins with a summary: "When you effectively combine creative thinking and critical thinking, plus knowledge-of-ideas, the result is productive thinking." Strategies for generating ideas (by Revision & Innovation, using Guided Generation & Free Generation, individually and together) are examined throughout the page.
Empathetic Thinking: One of the most important aspects of productive thinking is the ability (and willingness) to view things from the perspectives of important stakeholders in a design project — everyone who will be affected, who has a stake in the outcome, especially the end-users but also others — by Thinking with Empathy. During design, empathetic thinking (which is sort of an external metacognition about the thinking-and-actions of others) supplements the egocentric thinking, from your own viewpoint, that is natural and easy to do.
Organizing and Remembering: When you have logically organized knowledge in your memory it's easier to think more effectively. For high-quality thinking, memory is not sufficient, but it is necessary. Ideas must be mentally available for quick retrieval-and-application, so you can hold ideas in your working memory and actively process the ideas in creative-and-critical thinking.
Cognitive Utility: Because your theories about “the way things are” are used when you think, a theory's cognitive utility (its usefulness for helping you think productively) is important, and is examined below.
Actualizing a Theory: Before you can "think with a theory" you must personally actualize it by forming a clear mental model, so you can use the theory to understand and to make predictions. If a theory lets you do these two things easily and well, it's more useful for your cognition. Most people also want a theory that helps them think correctly (in a way that corresponds to “the way things really are” in reality) so they will understand accurately and make accurate predictions.
Earlier parts of this page (especially beginning with #4 and continuing into this section) describe how you typically use a theory for thinking. You try to understand a system by constructing a theory-based model of the system's composition (what it is) and operation (what it does). Your external representations of the model, which help you think and communicate, "can be verbal, visual, mathematical, and/or physical, expressed in words, diagrams, graphs, tables, equations." You decide how to combine these representations (using some or all, emphasizing each in the way you want, to form a personally customized blend) when you “educate yourself” by building the representations into your internal mental models, to improve the quality of your understanding-and-predicting.
A theory can be used as the basis for developing different models that each are useful in different ways, as explained briefly earlier in this page and more thoroughly in another page with examples that "show some advantages & disadvantages" for cognitive utility.