 

Conceptual Evaluation of Instruction Quality

This page is motivated by my concerns about the effects of exams that give teachers reasons to avoid "thinking skills" instructional activities.   And two humble disclaimers are necessary:  I'm not an expert in assessments of teachers (or students), so I'm eagerly awaiting feedback from others;  and this kind of evaluation is not new, it's very “traditional” and is commonly used by most teachers when they evaluate the quality of teaching by themselves and their colleagues.
 

A Quick-and-Rough Summary

To reduce the educationally unproductive effects that NCLB-type exams have on students when these exams are used to evaluate teachers, we can instead (as part of an alternative policy for evaluation) observe the plans and performances of teachers:

Are their plans aligned with NGSS or other standards?  Are they coordinated with the plans of other teachers?

What is the quality of performance when these plans are actualized in the classroom? (re: Quality Control)

 

This approach – using Conceptual Evaluation of Instruction to estimate its Quality – is based on assuming that:

• "people [including students] learn much of what they have a reasonable opportunity and motivation to learn," as proposed by David Perkins.

• most teachers are competent and conscientious, so they want to teach well.*  {By contrast, some policy makers seem to think teachers are incompetent or lazy, so they must be controlled with exam-based “accountability” standards.}

* Notice that I say "most teachers" instead of “all teachers,” because a worthy practical goal is “greatest good for the greatest number,” not a 100% success that is impossible to achieve.  Should we be concerned about teachers who are less competent and conscientious?  Yes, but...   Conceptual Evaluation might produce extremely valuable benefits — by decreasing the current distortion of curriculum that occurs when teachers & administrators ask “how can we get more points on the standardized exams?” rather than “how can we design education that will be most beneficial for students?” — and these benefits could be more than enough to compensate for letting a few inadequate teachers “slide by without numerical accountability” when we're trying to design education that will produce a "greatest good" for students.  And as part of this design project we can develop strategies to support the improving of inadequate teachers, and inadequate school environments.

 

Here are some reasons to use Conceptual Evaluations.

A Strategy:  Prominent educators in the U.S. are defining goals with Next Generation Science Standards (plus Common Core) for ideas-and-skills that will help students achieve their personal goals for life, and will help our country achieve societal goals for living well in a 21st century global context.  After we define goals for ideas-and-skills, we will design curriculum & instruction with learning activities to provide opportunities for experience with these ideas-and-skills, plus teaching strategies to help students learn more from their experiences.

This strategy is useful in principle, but there is...

 

A Problem:  We want to help students improve their ideas-and-skills, but often there is a conflict of ideas-versus-skills.  These conflicts are increased by the exams associated with No Child Left Behind (and similar policies), which encourage unproductive goals-for-education, with not enough emphasis on skills, because teachers (and schools) respond rationally by “teaching to the exams” that are used to evaluate the quality of teachers (and schools).

A Solution?  Conceptual Evaluation of Instruction* might be useful as part of a policy for evaluating teachers (and schools) in ways that will promote a more effective teaching of ideas-and-skills.

 

* Although I independently developed this approach for my PhD work — by using Integrated Scientific Method (my model of Science Process, for the ideas-and-skills used in a process of science) as the basis for a Conceptual Evaluation, for an Integrative Analysis of Instruction in an Inquiry Classroom — I realize that this type of analysis previously had been used by other educators.  This general use of the approach — with others developing similar ideas, which are continuing to be used now — should make it easier for the approach to be more effectively developed and widely adopted.

These ideas about teaching ideas-and-skills are described more thoroughly in the related page-summaries for K-12 Science Education Standards, Curriculum Design & Adoption, An Ideas-and-Skills Curriculum, and A Coordinated Wide-Spiral Curriculum.

 

Here is a brief outline of the Conceptual Evaluation in which we observe the plans and performances of teachers.

Planning:  Do teachers, individually and in community, have a coherent plan for teaching ideas-and-skills to achieve the goals in NGSS and Common Core, with instruction (plus formative & summative assessments) that is coordinated in a wide spiral curriculum, across subjects and over time?  Is this strategy-plan consistent with accepted principles of teaching-and-learning?

Performance:  Are these plans being converted into effective teaching?  This question — about the Quality of Actualization when a teaching strategy is converted into action — is the basis for Quality Control with the goal of observing-and-improving performance.  Quality Control can be done using classroom observations (scheduled + unannounced?) and in other ways.

 

Below is a detailed description of these ideas.

 


 

Conceptual Evaluation of Instruction

The purpose of instructional evaluation is to estimate the extent to which a particular program of instruction achieves an educational objective, such as helping students improve their thinking skills.  Evaluations should provide feedback that is useful for developing better approaches to instruction, and for making policy decisions about instruction.

Of course, instructional development and policy decisions should be based on reliable knowledge, including data about instructional activities (what students are asked to do), student actions (what students actually do), and learning outcomes (what students learn).  Based on this data, an evaluation of instructional effectiveness can be mainly empirical or conceptual.

An empirical evaluation occurs by gathering and interpreting outcome-data in an effort to determine the effectiveness of a program.  Empirical evaluation can be very useful, but doing it well is usually difficult and time consuming.

A conceptual evaluation is based on activities-data about student activities-and-actions, that is, about what students do during instruction.  By knowing what students do, we can predict what they are likely to learn, and how well.

Basically, described in terms of Design Process, these two evaluations occur in two kinds of Quality Checks:  Empirical Evaluation uses physical experiments that produce observations, while Conceptual Evaluation uses mental experiments that produce theory-based predictions.  Both kinds of evaluation can be used during curriculum design, to pursue our goals of effective education.

 

As an example of Conceptual Evaluation, consider an extreme case where the dual objectives of instruction are to help students learn about the nature of science and improve their thinking skills, yet the activities-data shows that there is no discussion of either science or thinking, and students have few opportunities to use creative-and-critical thinking.  Even with no outcome-data it's easy to predict that this program, due to the mismatch between objectives and activities, will not achieve its objectives.

Real-life situations are more complex, so a Conceptual Evaluation is often more difficult, its meaning is open to a wider range of interpretations, and its conclusions are justifiably viewed with caution.  And a conclusion may be indefinite.  For example, we reach a conclusion of “beneficial but not sufficient” when we claim to know that a particular instruction-condition is beneficial in helping a teacher achieve a better match between objectives and activities, so it probably will contribute to success, but we also think it is not sufficient because, even when this condition is present, there is no guarantee of success;  other conditions also influence the outcome, and are needed for effective instruction.

But a good Conceptual Evaluation can be very useful for designing goal-directed curriculum & instruction and predicting its effectiveness in helping students achieve worthy educational goals, and for evaluating the effectiveness of teachers.

 

During educational design, Conceptual Evaluation should be based on a deep, accurate understanding of instruction.  This essential knowledge base can be improved by using a coherent analytical framework, such as an activity-and-experience table.  This could include the principles of Design Process, and thus the "Practices" of NGSS.

I think Design Process is useful for accurately describing the integrated structure of design-thinking process.  Therefore, it also might be useful for describing the integrated structure of instruction in which students use a process of design.   /   In fact, in the second half of my PhD dissertation I used a model of Science Process (which is part of Design Process) as the analytical framework for studying the structure of instruction in a creative science-inquiry classroom.   {The first half was developing and describing the model for Science Process.}

 

MORE:  Assessment, Achievement, and Accountability (by Karen Ostlund);  and ideas of David Perkins – 1 2