
Direct instruction has crystallized in two forms: an uppercased (UC) DI and a lowercased (lc) di. The uppercased version consists of published programs developed by Siegfried Engelmann and his coauthors. Lowercased di consists of procedures for providing instruction directly. Different designers have different rules for (lc) di, but their orientations have common features, and all designs share serious problems that stem from their assumptions.

*Lowercased di Preceded Uppercased DI*

Barak Rosenshine, a spokesperson for (lc) di, describes the emergence of (lc) di and (UC) DI in “*Five Meanings of Direct Instruction*” (2008). He creates a timeline and scenario that are at odds with the facts. He points out that the term direct instruction was probably used first by Joseph Meyer Rice (1893). Rice didn’t use it to name a specific type of instruction; he simply meant instruction, as opposed to “busy work.”

The earliest date Rosenshine provides for (lc) direct instruction as a specific type of instruction is 1968: “Beginning around 1968, researchers used direct instruction as a term for the instructional procedures used to teach higher level cognitive tasks” (2008, p. 3). He indicates that (UC) Direct Instruction appeared nine years later: “Around 1977 the DISTAR developers began to use the term Direct Instruction to identify their program.”

Both statements are inaccurate and misleading. The first mention of lowercased direct instruction occurred in the 1966 publication *Teaching Disadvantaged Children in the Preschool*, in which Bereiter and Engelmann wrote:

The direct-instruction approach ensures that every objective can at least be attended to, and it gives the teacher better day-to-day control over pupil progress so that she will know what objectives need additional attention. (p. 51)

The book was published in 1966, two years before Rosenshine claimed researchers used the term direct instruction to describe procedures for teaching higher level cognitive tasks. That the preschool taught higher level cognitive tasks is strongly affirmed by the video *Kindergartners Showing Off Their Math Skills*, which is on Zigsite.com and was filmed in 1966 (Engelmann, 1966). The six-year-old children in the film began a direct instruction preschool program as four-year-olds in 1964, which means that practices and programs based on direct-instruction principles were in place by 1964.

The results of the preschool were highly positive—IQ gains averaging 24 points and accelerated performance in reading, language, and math (Engelmann & Bereiter, 1967). The video on Zigsite.com shows the top group of at-risk preschoolers after they finished kindergarten. They work some math problems that most fourth-grade students wouldn’t be able to work, including word problems that generate number problems like 4M = 20, which the children specify and solve by multiplying both sides by 1/4 and figuring out that the fraction 20/4 is 5.

They also work addition problems that require carrying: 38 + 14; area problems; and problems that children solve through logical progressions, for instance: R – 11 = 3. Children note that the problem subtracts. The difference is 3. Therefore, R is 3 more than 11, which is 14.

The children indicate whether fractions are more or less than one whole. For fractions like 18/9, children identify how many wholes the fractions equal. The children also work simple problems involving negative numbers: (7 – 9). They also factor expressions like 6A + 9B + 3C by first identifying the common factor, 3, writing it as a multiplier, and then dividing each term by 3 to arrive at the answer: 3 (2A + 3B + 1C).
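For readers who want to check the arithmetic, every problem type described above can be verified mechanically. The sketch below is purely illustrative (the variable names are mine, and it says nothing about how the children were actually taught):

```python
from fractions import Fraction
from math import gcd

# 4M = 20: multiply both sides by 1/4, so M = 20/4
M = Fraction(20, 4)                    # equals 5

# R - 11 = 3: the problem subtracts and the difference is 3,
# so R is 3 more than 11
R = 11 + 3                             # equals 14

# 18/9: how many wholes does the fraction equal?
wholes = Fraction(18, 9)               # equals 2 wholes

# Carrying, and a simple negative-number problem
carry_sum = 38 + 14                    # equals 52
below_zero = 7 - 9                     # equals -2

# Factor 6A + 9B + 3C: identify the common factor,
# then divide each term by it -> 3(2A + 3B + 1C)
coefficients = [6, 9, 3]
common = gcd(gcd(coefficients[0], coefficients[1]), coefficients[2])
inside = [c // common for c in coefficients]   # [2, 3, 1]
```
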

The point is that nobody has provided a more impressive demonstration of (lc) direct instruction inducing cognitive strategies than this one. All aspects of the demonstration—the specific content, the interactions with the students, and the response conventions—were strictly (lc) di, not (UC) DI. As the math film shows, Engelmann used no script and did not follow many of the other conventions required by (UC) DI instructional programs.

A second large (lc) di implementation was the University of Illinois’ Upward Bound program, which began in 1966. Engelmann was the curriculum director and designed the (lc) di math and language programs (with Greta Hogan) for students from East St. Louis in grades 10–12, who were scheduled to go to college. The content for math and critical reading was replete with examples of teaching higher-order cognitive strategies.

Rosenshine’s assertion that the developers of DISTAR first used the term (UC) Direct Instruction around 1977 is contradicted by the program’s own name: the first three letters in DISTAR stand for Direct Instruction System, which means the name was in use when the programs came out. The first DISTAR programs came out in 1968 (and had been in use in Follow Through at least a year earlier); therefore, a conservative estimate is that the developers used the term by 1968.

Further evidence that the developers linked the name DISTAR with (UC) Direct Instruction before 1977 comes from a 1972 two-page institutional ad for DISTAR that appeared in several leading periodicals, including LIFE. The ad (produced by IBM) shows a teacher and a small group of children. Behind them, on the board, is the acronym DISTAR written vertically, with the translation for each letter. The top three lines are

D direct

I instruction

S system

It is quite a stretch to say that the developers didn’t use the term Direct Instruction until “around 1977.”

The proper timeline shows that Engelmann and colleagues used the lowercased term direct instruction first (before 1968) and used the uppercased (UC) term Direct Instruction first (before 1977). Furthermore, there is evidence that the work done as (lc) di produced exceptional results.

*Problems with (lc) di*

The fact that Engelmann and colleagues originated both (lc) di and (UC) DI raises the question of why they abandoned di. It unquestionably worked well: the teachers in the preschool could do it with great proficiency, the lessons were lively, and the results were very impressive. What possible reason could there have been for abandoning the approach in favor of one that is more confining, more mechanical, and possibly less enjoyable for the gifted teacher?

The answer is that the staff had to train new teachers. With (lc) di, the process is best described as painful. *Teaching Needy Kids in Our Backward System* (Engelmann, 2007, pp. 15–21) describes specific problems that trainees had learning to teach (lc) di. The trainees had serious problems with all aspects of designing routines and teaching them—creating the example set, using proper wording for the examples, sequencing the examples, correcting specific mistakes, and revisiting specific skills. Solving most of these problems demanded a skill set far beyond that of the new trainees.

One of the first trainees was Valerie Anderson, a very bright, verbal young woman who later became the principal author of Open Court (a program that Rosenshine uses as an example of the prowess of di). For half of a school year, Valerie sat next to Engelmann and presented examples that he first modeled.

She had many presentation problems. On at least six occasions, she became so frustrated that she cried. When she began to generate and present math problems, her struggles with using consistent wording were compounded by her inability to generate proper examples. In other words, it was a perfect disaster, and it would often require considerable time to determine whether the problem she had was with the design of the instruction, the way she taught it, or, frequently, both.

It’s very difficult to present good examples and a good sequence if you don’t understand the instructional consequences of some problem types. Sometimes, students respond correctly to a problem that has a serious flaw. Let’s say you present the problem 2 + 2 = during initial teaching on addition and ask about the number you start with and the number you plus.

If children answer correctly, do they really know that the first 2 is the one you start with?

If the problem is 2 + 4, their responses are not ambiguous, but the problem generates unnecessary mistakes. In the preschool program, students solved the problem by starting with the first number and counting four places. Often students stop after they say “3, 4.” They stop because they confuse counting four places with counting until they say “four.” If the problem is 4 + 2, this mistake is obviated because children won’t say “two” as they count from 4: “5, 6.”
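The counting-on procedure and the trap it sets can be stated precisely. The sketch below is a hypothetical model of the behavior just described (the function names are mine, not program material): the flaw arises whenever the addend itself is spoken mid-count, tempting the child to stop on that word.

```python
def count_on(start, places):
    """Counting-on addition: start with the first number,
    then count the given number of places."""
    spoken = [start + i for i in range(1, places + 1)]
    return spoken, spoken[-1]

def invites_premature_stop(start, places):
    """True when the addend is spoken before the count ends,
    so a child may confuse counting N places with
    counting until they say 'N'."""
    spoken, _ = count_on(start, places)
    return places in spoken[:-1]

# 2 + 4: the child must say "3, 4, 5, 6" and may stop at "4"
# 4 + 2: the count "5, 6" never includes "two", so no trap
```
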

The point is that the teaching and the program design interact. If the design is weak (as it is when poor problems are presented), more-involved teaching is required to bring students to mastery, which means that children learn more slowly and receive feedback that they are not highly successful learners.

After six months of working with Engelmann every day—with three groups—Valerie became pretty good. She could correct efficiently; she could generate reasonable problems; she could present with reasonably efficient and consistent wording; she used good pacing; she reinforced well; and she had a pretty good sense of what needed to be reviewed. Her most important asset was her knowledge of how students perform when all the details are in place. With this knowledge, she could critique poor efforts and see how and why they had to change. With this knowledge, she could work on her own and continue to improve.

All the people the staff tried to train after Valerie had the same problems Valerie had. A classic example was one new teacher who was facing the children, holding up her right hand, and indicating, “This is my right hand,” and then directing the children to “hold up your right hand.” All the children held up their left hand, and she couldn’t figure out why. After about five minutes of painful instruction, Engelmann stopped her and showed her an efficient way to teach *right*. He squeezed each child’s right hand and told them, “The hand I squeezed is your right hand. Hold up your right hand.” Everybody did it. He told them to turn around and hold up their right hand. Everybody did it.

The preschool tried to train more than a dozen teachers using the tutorial routine of the trainer modeling and progressively letting the trainee do part of the teaching. The teachers who did not receive training as extensive as Valerie’s did not do well.

The tutoring efforts suggested a far more efficient alternative—design instructional programs that obviated the various program-construction and review problems. The first step was to provide trainees with lists of examples they would present in order. The lists relieved trainees of designing proper problems and provided reasonable sequences of examples. Trainees didn’t have to worry about the fidelity of the examples or their sequencing, so they could concentrate on presenting the examples and on correcting mistakes children made. All, however, had serious wording problems and required extensive practice in first designing reasonable wording and then using it faithfully.

This was not an idle or arbitrary requirement. When teachers use inconsistent wording, children make far more mistakes than they do in response to example sets that use variations of the same wording. When the trainees didn’t use consistent wording, children tended to become confused, and the trainees typically tried to explain what children should be doing. These explanations introduced still more variation in wording and simply confused the children more.

When the training introduced scripts that not only presented examples but specified the wording and, in some cases, the correction procedure, there was a great reduction in the amount of time it took to train teachers. A conservative estimate is that it was possible to train them in less than one third the time.

Even though there were scripts, trainers still had to work extensively on presenting properly (pacing, pausing, signaling so children could respond together, reinforcing, and a host of related presentation techniques). The scripts were effective in the sense that they made it possible for trainees to focus on teaching, not on both teaching and instructional design, but the scripts supported a new poor-teaching behavior. Some trainees would initially present the scripted wording without attending to the responses the children made. Trainers reminded them that a scripted presentation is not “a reading”; it is teaching, and the trainee had to make sure that all the students were responding correctly and consistently.

Training new teachers still took several months, but after teaching for only a couple of weeks, a new trainee would be as effective as Valerie had been after five months.

In summary, the preschool abandoned di because to do it properly, trainees had to learn far more than they had to learn if they worked with a well-crafted script that provided examples in a reasonable sequence, with efficient consistent wording and a sufficient number of examples over three or more consecutive lessons to assure that properly placed students would learn the new concept or skill.

A footnote on designing example sets is that Engelmann taught design of instruction for more than 10 years at the University of Oregon. Much of the work required students to create sets of examples that met the conditions outlined in the text *Theory of Instruction* (Engelmann & Carnine, 1982). Most students had a lot of trouble applying the various criteria needed to create an effective sequence of examples. Given the difficulty advanced graduate students had, it would seem inappropriate to require teachers who have received no training in designing example sets to create them.

In 1968, Project Follow Through invited various sponsors to implement their models of instruction to teach at-risk children in grades K–3. The (UC) Direct Instruction model was included and outperformed all other models in everything measured. A little-known fact about Follow Through is that there were two (lc) di models that were not counted in the final tally because they dropped out before the Follow Through evaluation occurred in 1977. Engelmann greatly admired one of the (lc) di sponsors and had a close association with the other.

*Amherst Socratic Model*

Amherst University had a Follow Through model that was based on the Socratic method. The instructional routines involved using the children’s responses as the basis for what came next. The young professor who sponsored the model would start with a question like, “Why don’t we allow students to come to school when they feel like it?” She then asked questions that related to whatever the respondent said. She might ask something that would contradict the student’s response or that required more details. The sponsor was very good at applying the method and good at explaining why she did what she did.

She worked with only one disadvantaged school in Follow Through. After the first year, however, she withdrew her model. She explained to Engelmann, “I just can’t train the teachers. I’ve tried and tried, but I just can’t do it.” He told her that he had had similar experiences.

*The Gotkin Model *

The other (lc) di Follow Through sponsor who wasn’t around by 1977 was Larry Gotkin. He was enthusiastic, very knowledgeable about instruction, extremely hard working, and totally convinced that his two schools would be among the best. He spent most of his time at his schools, teaching in the classrooms, observing, and prompting teachers. After the second year, he became depressed because the data showed that his schools were far from exemplary. The reason: he couldn’t train teachers to implement his model effectively. Engelmann tried to talk him into working with the DI model, and he indicated that he might do that. Several weeks later, however, he committed suicide, and his model was discontinued.

Note that both (lc) di sponsors experienced the same problem—the inability to provide consistent, effective, and replicable teacher training.

The data clearly suggest that a major issue with (lc) di is whether teachers can be uniformly trained in a timely manner. We can gain perspective on this problem’s magnitude by looking at research on training some of the minor components teachers would have to learn. If these components are hard to teach, the problem is large.

*Praising Students *

A component of managing students effectively is presenting reinforcement effectively, and a component of reinforcing students effectively is praising students for performing well. Engelmann’s colleague Wes Becker did studies on praise that showed how difficult it was for teachers to follow the simple model of catching students in the act of being good and telling them what they did right (Madsen, Becker, & Thomas, 1968).

As noted earlier, Engelmann and colleagues worked with high school students from East St. Louis and their teachers as part of the project Upward Bound. The model was strictly (lc) di. The goal was to make the teachers effective in placing students appropriately and accelerating their academic skills.

Initially *none* of the Upward Bound teachers praised students. Some caught on to the techniques quickly and could see the difference in student performance. Engelmann worked with several teachers who did not catch on easily. He rehearsed and practiced extensively with the worst one for six or seven days before the teacher could praise in mock-teaching sessions. And the teacher’s first real attempt was tainted. The rule he was to follow during training was that when Engelmann gave him a visual signal, he was to praise the student. In all earlier cases, he had failed to respond, and Engelmann had to step in and praise the student. On his first “successful” attempt, a student the teacher didn’t like had solved a new type of math problem we were teaching. Engelmann signaled, and the teacher said to the student, “Good work for a change.”

He was still a long way from saying something like, “Good work. You solved a very hard problem.”

Conservatively, before teachers become highly effective, they need to address a hundred details comparable in difficulty to learning to praise students properly. Teaching these details would obviously require extensive time and considerable one-on-one instruction.

*Good Form vs. Good Learning*

Both teachers and students in Upward Bound shared a serious misconception that being a good student meant adopting the superficial behaviors of good students. When the project began, students took copious notes and wrote in a “round hand,” but the content of their notes was gibberish, and they were unable to use their notes to reconstruct what the teacher had said. Students were told that they were not to take notes but were simply to learn what was being taught.

The staff also discovered that students did not attend to directions but rather inferred or “psyched out” what they thought the teacher wanted them to do. The result was that the students had a serious misunderstanding of what the learning game was all about. They assumed it was some abstract, general operation rather than a matter of specifically addressing concrete issues. The staff contradicted students’ misconception about directions by presenting arbitrary directions that students wouldn’t be able to figure out through speculation. For example, a worksheet would present items numbered 1–20. The directions might say:

Work the items in this order. First work item 7. Next work item 3. Then work the rest of the items starting with item 1.
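The ordering those directions impose is trivial to state mechanically, which underscores that the students’ difficulty was with attending to directions at all, not with their complexity. A minimal sketch (the function name is mine, purely illustrative):

```python
def item_order(first_items, total=20):
    """Work the listed items first, in the order given,
    then work the rest of the items starting with item 1."""
    rest = [i for i in range(1, total + 1) if i not in first_items]
    return list(first_items) + rest

# "First work item 7. Next work item 3. Then work the rest
#  of the items starting with item 1."
order = item_order([7, 3])
```
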

The difficulties that students had in following these directions implied the extent of their mislearning. The teacher directed students to read the directions to themselves and raise their hands when they finished reading. Then the teacher directed them to read the directions again.

Then the teacher told them, “Follow those directions carefully when you work the items.” On the first example, virtually all the students worked item 1 first. The teacher stopped them and directed individual students to read the directions aloud, then asked, “Did you do that?”

The typical response was, “Do what?”

The answer: “Do what the directions told you to do.”

The typical response: “What did they tell me to do?”

The answer: “Read them again.”

After repeating this cycle one or more times, the student would say, “You mean you want me to work item 7 first?”

“Yes. If that’s what the directions tell you to do, that’s what you do.”

It took the students a long time to learn that their goal was to learn how to apply directions, rules, and formats to concrete, specific examples. What’s depressing is that inner-city students today have the same problem our students had.

Are individual teachers expected to figure out how to identify and correct these problems? Or do the problems persist because those who train teachers are not aware of them? It’s very possible that some who promote (lc) di have serious deficiencies in knowledge of what (lc) di teachers need to be taught before they can be effective. This possibility would be easily documented in a study that trains (lc) di teachers until they perform on a par with (UC) DI teachers who receive no more than five days of training.

*Rosenshine’s Rules*

Rosenshine proffered rules that serve as guides for (lc) di teachers. Most raise serious questions about how teachers could be trained to execute them successfully. Here are the first five rules.

__*1. Begin a lesson with a short review of previous learning.*__

The assumption is that the lesson is homogeneous and covers a single topic. Far more effective in most cases are lessons that have four or more independent exercises. The relevant prior learning is reviewed at the beginning of the lesson or, more appropriately, immediately before each exercise in the current lesson.

__*2. Present new material in small steps with student practice after each step.*__

Where and how do teachers learn how to do this? This is a program-construction issue that Larry Gotkin agonized over—trying to get teachers to analyze how many new skills their students would have to learn in response to different patterns of what is taught first and next. I would estimate that no more than 5% of teachers have an intuitive sense of how to do this well. Most would do it poorly even after months of daily instruction.

__*3. Limit the amount of material students receive at one time.*__

This is a program-construction issue that requires some empirical information to determine how much is too much. It is unlikely that teachers would be in a position to create the material, test it, and modify it on the basis of student performance; therefore, it’s an ambitious and probably impractical goal. Note also that rule 3 may not be consistent with rule 1. If teachers limit the material students receive at “one time,” a 45-minute period would almost certainly have to cover several smaller, unrelated topics, and the “review” at the beginning of the period would be of questionable value.

__*4. Give clear and detailed instructions and explanations.*__

What are the tests of clarity, and where do typical teachers learn this skill? As noted above, inconsistent wording is one of the most serious deficits teachers have. How do they develop standardized routines without writing down the statements and creating something of a script to follow? What criteria do they use to determine that a particular routine is worthy of standardization?

__*5. Ask a large number of questions and check for understanding.*__

If this rule is to have a positive effect, teachers would have to receive considerable training in constructing questions that are good tests of student understanding and training in drawing proper inferences from the students’ responses.

In summary, these items involve program construction, and teachers are conspicuously weak in program construction.

These items and the facts about training teachers raise two ultimate questions:

1. Why haven’t educational researchers experimentally confirmed that (lc) di teachers are uniformly able to acquire these skills through training? (I know of no studies that have even attempted validation.)

2. Would the performance of these teachers be as effective as that of teachers who received far less training in using comparable (UC) DI programs?

I think the answers to these questions would resoundingly confirm that (lc) di is an elitist practice, not something that can be readily taught to average teachers, whereas (UC) DI can be uniformly and effectively taught to average teachers.

*References*

Bereiter, C., & Engelmann, S. (1966). *Teaching Disadvantaged Children in the Preschool*. Englewood Cliffs, NJ: Prentice-Hall.

Engelmann, S. (1966). *Kindergartners Showing Off Their Math Skills* [Film]. Retrieved from zigsite.com.

Engelmann, S. (2007). *Teaching Needy Kids in Our Backward System: 42 Years of Trying*. Eugene, OR: ADI Press.

Engelmann, S., & Bereiter, C. (1967). An academically oriented preschool for disadvantaged children: Results from the initial experimental group. In D. W. Brison & W. Sullivan (Eds.), *Psychology and Early Childhood Education* (pp. 17–36). Toronto, Ontario, Canada: Ontario Institute for Studies in Education.

Engelmann, S., & Carnine, D. (1982). *Theory of Instruction: Principles and Applications*. New York: Irvington Publishers.

Madsen, C., Becker, W., & Thomas, D. (1968). Rules, praise, and ignoring: Elements of elementary classroom control. *Journal of Applied Behavior Analysis, 1*(2), 139–150.

Rosenshine, B. (2008). *Five Meanings of Direct Instruction*. Lincoln, IL: Center on Innovation & Improvement. Retrieved from http://www.centerii.org/search/Resources%5CFiveDirectInstruct.pdf

Rosenshine, B., & Stevens, R. (1986). Teaching functions. In M. Wittrock (Ed.), *Handbook of Research on Teaching* (3rd ed.). New York: Macmillan.