Logic of English Preliminary School Data

Since we published Essentials in 2012 and Foundations in 2013, schools have begun implementing Logic of English curriculum in every elementary grade as well as in some middle school and high school classes. These include private, public, charter, classical, religious, and university-model schools from across the US.

Initial Findings

The following preliminary findings are from public schools using Logic of English curriculum in one or more grades. Results are from standardized testing reports that the schools have shared with us.

Please interpret the data with caution. None of these pilot programs conducted a formal research study, and a variety of factors - including the small size of the data pools and the short time spans involved - limit the reliability and significance of the data. These standardized testing results should be considered anecdotal evidence rather than the basis for statistically valid inferences. As more schools begin larger-scale implementation of Logic of English Foundations and Essentials, we are gradually amassing student reading data to give other schools insight into the impact Logic of English curriculum is having on students' reading skills. (Interested in sharing data from your classroom? Contact us!)

We are excited to see the gains these children have made thus far, and we share this information here in order to give you a preliminary understanding of the effectiveness of Logic of English curriculum in helping all students learn to read successfully.


Pilot 1: Public Charter School, Minnesota - 2013-2014

Logic of English Foundations

Pool size: 67 kindergarten students (LOE pilot, 2013-2014)
Comparison pool size: 72 kindergarten students (previous class, 2012-2013)

This charter elementary school (KCS) piloted Logic of English Foundations in kindergarten and first grade in 2013-2014. To assess student progress, they used DIBELS -- a standardized test designed to assess students' ability levels in a variety of early literacy skills in order to determine appropriate levels of support -- as a progress monitor. The data presented here are from their kindergarten classes.

The KCS kindergarten teachers had strong previous experience in systematic, evidence-based reading instruction, and several of them were using elements of the Logic of English approach prior to the pilot year. Student test scores indicated successful, effective reading instruction. However, DIBELS reports suggest even further improvement with Logic of English. Based on the success of the pilot, the school began implementing Logic of English curriculum across the grades in 2014-2015.


Dramatic Growth in Phonemic Awareness

The kindergarteners' phonemic awareness scores showed particularly striking growth.

Students who can identify, isolate, and manipulate individual sounds within words are much better equipped to master the relationship between those sounds and how we represent them in writing. The ability to segment words into individual sounds - for example, breaking the spoken word "map" into /m/ /ă/ /p/ - is a key phonemic awareness skill, which DIBELS assesses through the Phoneme Segmentation subtest.

In 2012-2013, KCS kindergarten students scored fairly evenly across the national percentile bands. In other words, the number of below average, average, and above average scores was typical for kindergarteners taking the mid-year DIBELS test nation-wide.


Logic of English Foundations lessons place a strong emphasis on the development of phonemic awareness, since it is vital for success in reading and spelling.

The impact was substantial: on the 2013-2014 mid-year Phoneme Segmentation subtest, 37 of the kindergarteners - over half of the class - scored at or above the 95th percentile nationally! The number of students below the 50th percentile dropped dramatically from the previous year, and not a single kindergarten student scored below the 20th percentile. (Graphs include all scores for kindergarten students who completed DIBELS testing, including special education students receiving reading instruction in the mainstream classroom.) Preliminarily, these scores suggest that students at a variety of ability levels made gains in phonemic awareness as they progressed through Foundations.


Skillful Decoding: DIBELS Whole Words Read

Starting mid-year, the kindergarten students' DIBELS test included the Nonsense Word Fluency (NWF) subtest, which measures students' ability to apply knowledge of letters' sounds to read unfamiliar words. This subtest is an indicator of students' understanding of phonics (letter-sound relationships), ability to blend sounds into words, and decoding skills. One of the key NWF measures is Whole Words Read.

Nationally, 75% of kindergarteners score zero on this measure mid-year. KCS scores from the previous year were fairly typical, though with a higher percentage than usual scoring in the top 10%. (Since zero is the score at the 75th percentile for mid-year kindergarteners, the 75th percentile is the lowest point of comparison available for the mid-year test.)
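To make this floor effect concrete, here is a minimal sketch in Python of how a percentile rank can be estimated from a reference distribution. The pool of 100 scores is hypothetical (chosen so that 75% are zero); this is not DIBELS' actual norming procedure.

```python
# Sketch: estimating a percentile rank when many students share the same score.
# All values are hypothetical; this is not DIBELS' actual norming procedure.

def percentile_rank(score, distribution):
    """Percent of the reference pool scoring strictly below `score`."""
    below = sum(1 for s in distribution if s < score)
    return 100.0 * below / len(distribution)

# A hypothetical national pool of 100 mid-year scores, 75 of them zero:
national = [0] * 75 + [1, 1, 1, 2, 2, 3, 3, 4, 5, 6] + [7] * 15

print(percentile_rank(0, national))  # 0.0  -- a zero ranks above no one
print(percentile_rank(1, national))  # 75.0 -- any nonzero score clears the 75th percentile
```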

During the Logic of English pilot, the majority of the KCS kindergarteners scored in the top 25% nationally, earning a score of 1 or above, and 36% of them scored in the top 10% nationally. Only 27% scored zero, the score achieved by 75% of their peers nation-wide on the mid-year test.


Continued Reading Progress: Winter to Spring

For the spring test date, the median (50th percentile) kindergarten Whole Words Read score is 1. Forty-nine percent of kindergarteners nationally still score zero at this point in the year (see National Percentile Ranges chart).

The KCS Kindergarten students continued to improve and earned well above-average Whole Words Read scores in the spring of the pilot. While their peers in the national pool partially closed the gap, the students in the pilot continued to make exceptional gains in decoding: 73% scored in the top 25% nationally, 97% of them scored in the top half nationally, and only 3% earned a score of zero. Their mean Whole Words Read score rose from 7 points in the winter to 14 in the spring.


Pilot 2: South Carolina Summer Reading Camps 2014

Logic of English Essentials

Pool size: 12 students (Logic of English pilot site); 353 students (all 2014 SC summer camp sites)

In 2014, thirteen South Carolina school districts ran summer reading camps for 3rd grade students not reading at grade level. These camps were offered in accordance with the state’s new Read to Succeed legislation. One district ran a Logic of English Essentials pilot at two sites, using a “balanced literacy” approach at a third site in the district as a control.

For their summer reading camp, the two Logic of English pilot sites used Logic of English Essentials as well as Rhythm of Handwriting Cursive. In addition to teaching phonemic awareness, systematic phonics, spelling, and vocabulary development with Essentials, they used our optional Essentials Reader as a reading comprehension supplement. The camp ran for six weeks, with three to four instructional days per week. The students at the two Logic of English sites grew by nearly six-tenths of a grade level over the six weeks of camp -- 19% more than the benchmark set by the state's Education Oversight Committee (EOC) and more than 20% more than the state-wide mean.

The EOC completed a detailed study of the various summer reading camp pilot programs and their students' growth in reading. The data presented here are compiled from assessments given by the Literacy Coordinators in the district piloting Logic of English and from the EOC's report on all the reading camp sites. The Logic of English district data include results for all LOE pilot students who attended at least half of the camp dates and completed both initial and ending reading assessments. An additional 145 students in kindergarten through second grade who also attended SC summer reading camps are not included in this comparison.

Post-Camp: School-Year MAP Scores

Curious to see how the camp impacted students' long-term success in reading, we followed up with the pilot students' schools to ask about their progress. We received scores for seven students who had attended at least half of the camp and had spring (before), fall (after), and winter (ongoing progress) MAP RIT scores available for comparison.

These scores demonstrated noticeable gains in reading when the students returned to school. They jumped, on average, from the 7th to the 14th percentile* in reading for their grade on the respective test dates, and they gained an average of 6.4 RIT points (a gain of 0 to 1 point from spring of 3rd grade to fall of 4th grade is typical). Interestingly, the same students continued to make slightly above-average progress during the school year after camp, gaining an average of 5.9 additional points between the fall and winter MAP tests; a 3-5 point gain is typical for this time span. With these gains they rose an additional 3 percentile points from fall to winter, their mean score climbing from the 14th percentile to the 17th. From the spring before camp to the subsequent winter, the mean RIT growth for these seven students was 10.75 points, the median growth 9.5 points, and the minimum and maximum growth 4 and 30 points.

*Percentiles are based on NWEA's 2011 national norms for each grade level and test.


Pilot 3: Denver-Area Charter School 2014-2015

Logic of English Foundations

Pool size: 88 students

Lotus School for Excellence, a STEAM charter school outside Denver with a high percentage of ELL and low-income students, piloted Logic of English Foundations in Kindergarten and First Grade in 2014-2015, as well as in a pullout Reading Intervention (RI) group for nine second graders.

This school used NWEA’s MAP test to assess reading and administered the test in the fall, winter, and spring. On MAP tests, each student earns a reading RIT, or Rasch Unit, a grade-independent raw reading score. This score is designed to indicate his or her reading ability as measured on a particular test date. MAP reporting focuses on growth: each student is assigned a projected RIT score based on average (nationally normed) growth from his or her previous score, and NWEA reports indicate whether each student met his or her projected growth target, how many of the students in the class met their growth targets, and what percent of their collective projected growth the class attained. This reporting system is designed to allow schools to focus on whether students are making appropriate gains in reading rather than simply comparing them to an average reading level. For more information, see NWEA.

NWEA Projected Growth calculations, calculated for each student, are based on median growth nationally from that student's starting score. By definition, an average of 50% of the students in a class meet their RIT growth targets and on average a class as a whole achieves 100% of its projected RIT growth. The data pool includes 88 Lotus students who tested on all three dates: 33 kindergarteners, 46 first graders, and 9 second graders.
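As an illustration of the reporting described above, here is a minimal sketch of the percent-meeting-target figure. The student records and projections are hypothetical, and NWEA's actual norming calculations are more involved.

```python
# Sketch of the "percent of students meeting projected growth" figure.
# Records are hypothetical: fall RIT, spring RIT, and a projected spring RIT
# (fall score plus nationally normed median growth for that starting point).

students = [
    {"fall": 140, "spring": 161, "projected": 156},
    {"fall": 151, "spring": 165, "projected": 167},
    {"fall": 135, "spring": 158, "projected": 152},
]

met = sum(1 for s in students if s["spring"] >= s["projected"])
print(f"{met} of {len(students)} met their growth target "
      f"({100 * met / len(students):.0f}%)")
# -> 2 of 3 met their growth target (67%)
```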

Class-Wide Improvement

One key NWEA indicator is the percentage of students in each class who meet or exceed their projected growth -- who, in other words, make average or above-average gains from their starting point compared with their peers nationally. This figure measures how many of the students are making strong gains. An astonishingly high percentage of the Lotus students met or exceeded their projected fall-to-spring growth.

It is interesting to note that while the mid-year gains were impressive, the number of students meeting their projections increased with time. This was especially true with the struggling readers in the 2nd grade using Logic of English as a Reading Intervention. Mid-year, 73% of kindergarteners, 70% of first graders, and 56% of the second grade RI students (5 of 9) met their fall-to-winter growth projection. By spring, however, all these numbers were up (see chart). In the second grade RI group, all nine of the students exceeded their yearlong growth goals!

This gradually increasing growth fits what we would expect from Logic of English curriculum. The lessons take time to strengthen and reinforce basic phonemic awareness skills and phonics knowledge, foster critical thinking about how language works, and develop a powerful, linguistically accurate tool kit that equips students to read any word. It is therefore typical for growth to begin slowly, while students lay that strong foundation, and then accelerate. The result is a much more promising long-term trajectory than one in which students make initial jumps and then stall once they get past words they have memorized and begin to encounter words they do not have the tools to decode.


Above-Average Gains

Another significant class-wide measure is the percent of projected (target) growth met or exceeded. To find this figure, NWEA totals the RIT points gained by all the students in a class or grade and divides that total by the sum of their projected (statistically average) gains for that time period. This figure measures how much students are improving in reading. Since projections are based on nationally normed averages, 100% is the average class-wide figure for this measure and indicates average growth. Figures above 100% indicate above-average gains in reading.
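Continuing the hypothetical records from the sketch above, the class-wide figure reduces to a ratio of totals:

```python
# Sketch of the class-wide "percent of projected growth met" measure:
# total RIT points gained divided by total projected RIT gain.
# Gains continue the hypothetical records above (spring minus fall).

actual_gains = [21, 14, 23]      # hypothetical per-student RIT gains
projected_gains = [16, 16, 17]   # hypothetical per-student projections

percent_of_projection = 100 * sum(actual_gains) / sum(projected_gains)
print(f"{percent_of_projection:.0f}% of projected growth met")  # -> 118%
```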

It is important to keep in mind that these NWEA percentages compare student progress to average growth. So a student with below-average gains might still have made substantial progress. On average, 50% of a class will make gains smaller than their projected growth, and a class as a whole will meet about 100% of its projected (nationally average) growth.

In the Lotus pilot, all 88 students made gains in reading as measured by their MAP RIT score; the minimum fall-to-spring gain was 5 points. And as a group they made gains substantially above the national average, from 38% more than average in the 2nd grade group to 67% more in kindergarten.


Real Progress for Struggling and Strong Readers

While the Percent Meeting Target and Percent of Projected Growth Met figures provided by NWEA indicated a high percentage of students succeeding and the classes as a whole making above-average gains, we were curious whether Foundations was benefiting students across a range of language and ability levels. To explore this question, we analyzed the RIT growth at the 25th and 75th percentile marks for each Lotus class (K, 1, 2nd RI group).

The result: We found that while some made greater gains than others, in every grade the students at the 25th and 75th percentiles studying Foundations made progress beyond the average growth of their peers nationally. In kindergarten, students at the 25th percentile even exceeded the projected end-of-year score for their classmates at the 75th percentile. While a number of the students remained below the NWEA mean (provided for comparison) at the end of the year, they made consistent and impressive progress towards it, and many surpassed it! In each group the pilot students were further ahead of their projected growth in the spring than they had been in the fall. Rather than falling further behind, the struggling readers were catching up.


Overall Findings: Lotus LOE Pilot

As a whole, the Lotus Logic of English pilot students made over 50% more progress in reading than they were statistically projected to: compared to an average projected growth of 16.1 RIT points from fall to spring, they gained an average of 24.8 points (see chart at the top of this page). The standard error NWEA reported for the students' spring scores ranged from 2.8 to 3.3.

NWEA calculates a Growth Index for each student: actual growth minus projected growth. A Growth Index of 0 is average, and higher numbers are better. The Growth Index was the figure we analyzed to assess whether above-average growth was distributed widely or concentrated in a smaller number of high-achieving students.

To look at how this point gain was spread across the students, we examined what percentage of the Lotus pilot students had below-average, average, and above-average Growth Indexes. This gave us a more detailed view into the information provided by NWEA's Percent of Projection Met or Exceeded calculations. Again, the growth projections are based on each student's grade and starting score; for the Lotus kindergarten through second graders in the pilot, the yearlong growth projections ranged from 14 to 18 points.
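Here is a minimal sketch of that analysis, reusing the hypothetical records from the Percent Meeting Target sketch above; the bands mirror the ones discussed in the results below.

```python
# Sketch: per-student Growth Index (actual minus projected growth), bucketed
# into the bands used in the analysis below. All inputs are hypothetical.

def growth_index(fall, spring, projected):
    # (spring - fall) - (projected - fall) simplifies to spring - projected
    return spring - projected

def bucket(gi):
    if gi < -1:
        return "below projection"
    if gi <= 1:
        return "within 1 point of projection"
    if gi <= 7:
        return "2 to 7 points above projection"
    return "8 or more points above projection"

for fall, spring, projected in [(140, 161, 156), (151, 165, 167), (135, 158, 152)]:
    gi = growth_index(fall, spring, projected)
    print(f"GI {gi:+d}: {bucket(gi)}")
# -> GI +5: 2 to 7 points above projection
#    GI -2: below projection
#    GI +6: 2 to 7 points above projection
```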

The result:

We found that a small percentage of students made significantly below-average growth (all of these students, however, made RIT gains of at least 5 points; there was no negative growth). A significant minority, 14%, achieved scores within one point above or below their projection. A similar number made gains 2 to 7 points beyond their projected growth.

However, the majority of the Lotus pilot students had a Growth Index of 8 or above. In other words, their spring scores were eight or more points higher than the median score for their peers with the same fall starting point.


Pilot 4: Public Elementary School, South Carolina - 2014-2015

Logic of English Essentials, Rhythm of Handwriting, and Foundations

Pool size: 271 students (grades 1-5)
Previous year (control): 229 students (grades 1-5)

Estill Elementary School (EES) decided to pilot Logic of English as part of their mission to bring success in reading to every student. They taught Logic of English in each grade K-5 in 2014-2015 and plan to continue using the curriculum in 2015-2016. The school used NWEA's MAP test as a reading assessment for grades 1-5.

Context, data pool, and assessment

The school's location in a district with a history of high poverty and low literacy presents particular challenges as the teachers seek to equip their students for success in reading. In order to get a better sense of the impact of the Logic of English curriculum within this context, we compared students' reading gains not only to national norms calculated by NWEA but also to the school's own spring-to-spring gains the year before it implemented Logic of English, comparing each grade level with the previous class's progress in that grade. We examined student progress in reading over a full calendar year, comparing spring-to-spring growth for 2013-2014, the year before the pilot, with spring-to-spring growth for 2014-2015, the pilot year. The spring 2014 test was an effective starting point for the LOE pilot study because it took place before the students had begun to use any Logic of English curriculum (as opposed to the fall 2014 test, which followed a month of Logic of English instruction).
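The arithmetic behind both comparisons is straightforward. This sketch uses hypothetical grade-level mean RIT scores, chosen only to echo the averages reported below; they are not Estill's actual figures.

```python
# Sketch of the two comparisons described above, using hypothetical
# grade-level mean RIT scores (not Estill's actual figures).

def gap_to_norm(grade_mean, national_norm):
    """Points below the national norm (positive = behind)."""
    return national_norm - grade_mean

prior_gain = 187.4 - 178.1   # previous class, spring 2013 -> spring 2014: 9.3
pilot_gain = 190.2 - 178.3   # pilot class, spring 2014 -> spring 2015: 11.9

gap_before = gap_to_norm(178.3, 184.6)   # start of pilot year: 6.3
gap_after = gap_to_norm(190.2, 196.4)    # end of pilot year: 6.2

print(f"gain: {prior_gain:.1f} pre-pilot vs {pilot_gain:.1f} pilot")
print(f"gap to norm: {gap_before:.1f} -> {gap_after:.1f} points below")
```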

The pilot data pool includes the 271 students who tested in spring 2014 (at the end of the previous grade) and spring 2015 (at the end of the LOE pilot): 76 first graders, 52 second graders, 51 third graders, 54 fourth graders, and 38 fifth graders.
The previous year data pool includes 229 students who took the MAP test in spring 2013 (at the end of the previous grade) and spring 2014: 54 first graders, 50 second graders, 50 third graders, 41 fourth graders, and 34 fifth graders.

A note about the data pools: Because each year-long data pool includes only students with beginning and ending scores for that period, and there were changes in enrollment between spring 2013 and spring 2015, there are discrepancies between the spring 2014 mean score figures for each grade in the 2013-14 and 2014-15 pools. The graphs reflect these changes in the student pool.
RIT norms note: National RIT norms for each grade are from NWEA's preliminary norms for 2015, with the exception of the spring 2013 (end of kindergarten) norm. NWEA had not yet published 2015 kindergarten norms at the time of this writing, so NWEA's norm from 2011 (the previous norming study) is used instead. First through fifth grade preliminary 2015 norms differ from the 2011 norms by no more than one point.

Placement

Since students were new to the program this year and teachers felt they would benefit from a strong review of the basics, first grade completed Logic of English Foundations A and B. Second grade students completed Foundations A, B, and part of C.
Third, fourth, and fifth grade students used the Rhythm of Handwriting Cursive program and Logic of English Essentials. The number of lessons covered varied by class, but most students completed approximately the first half of Essentials.

Findings

Students made measurably more progress in reading during the Logic of English pilot than during the prior school year. Both the percent of students meeting their projected spring to spring growth and the percent of projected RIT points gained increased. The fourth grade students not only made significantly more progress than the previous fourth grade class but significantly exceeded the national NWEA norm.

Reading Level gains - RIT score growth

In 2013-2014, before the Logic of English pilot, EES students in most grades fell further below the NWEA norm over the course of the year. On average, the students scored 6.7 points below their grade-level norms in spring 2013 and 8.9 points below in spring 2014. This is too often the case in high-poverty schools: students not only start out behind in reading and language skills but fall further behind over time, making below-average gains each year.

During the Logic of English pilot, however, every EES grade made greater RIT gains than the previous class, with an average of 11.9 points of growth spring to spring per student compared with 9.3 the previous year. As a result, they did not lose ground compared with their national peers! On average, the students in the LOE pilot scored 6.3 points below the NWEA mean in spring 2014 (before the pilot) and 6.2 points below in spring 2015 (after). They thus ended the pilot year 0.1 points closer to the mean, instead of falling a few points further behind as before. Grades two through four gained ground on the national average. Fifth grade again fell slightly further behind, but by half a point less than the previous year's class. First graders also ended the year further below the norm than they started it (3.3 points); in other words, their gains in reading were smaller than those of their peers nationally. However, they slipped significantly less than the first graders the previous year (7.2 points), closing the gap by more than half (see Grade 1 graph, above). Since the classes are relatively small and these differences are smaller than the standard error on the students' score reports, the gains cannot necessarily be considered statistically significant; at the same time, the fact that the students in aggregate did not lose ground as they had before, and instead closed the gap slightly, suggests the possibility of positive ongoing growth.

In terms of raw score, every grade except fifth also ended 2015 with a higher mean reading RIT than the previous class’s mean ending score, leaving them better equipped to start the next grade (see class One Year Reading Growth graphs, above). The fifth graders had a significantly lower starting (end of 4th) RIT than the previous class, 197.2 vs. 201.1, and they did not overtake the previous class's ending score. However, they made slightly more progress during the year (4.1 points as opposed to 3.5) than their predecessors.

RIT Growth Projections - school-wide improvement

MAP score reports include both the mean RIT reading score for each grade and how students' growth from test to test compares to projected, nationally-normed growth. These calculations compare student progress to average growth nationally. On average, 50% of a class will make gains smaller than their projected growth, and a class as a whole will meet about 100% of its projected growth. (See Pilot 3, above, for more information about MAP reporting.)

In the previous year, only about 37% of EES students met their growth projections, and in aggregate the students made 67% of their projected gains in RIT. Both of these figures improved during the Logic of English pilot year.

The percent of students meeting their projected growth - the measure of how many students made average or above-average gains in reading - rose 5.7 percentage points in the LOE pilot, from 36.7% to 42.4% (national mean: 50%). The percentage dropped in 2nd and 5th grade, but it increased by a few percentage points in 3rd grade and significantly in 1st and 4th grade, resulting in an overall improvement across grades 1-5 (see chart).


The students in the LOE pilot attained 18.5 percentage points more of their projected RIT growth than the previous year's students, rising from 67.0% (with an average RIT gain of 9.3 points) to 85.5% (with an average gain of 11.9 RIT points). The percentage of projected growth attained increased in every grade, though the greatest improvement by far was in the 4th grade class (120% of projected growth attained, as opposed to 78% the previous year).

This figure indicates that, in addition to a greater number of students making good progress in reading, the students on average made greater individual gains. As the prior year's numbers illustrate, EES had been making yearly progress significantly below the national average. In the LOE pilot year this measure improved in every class, bringing annual reading progress closer to the national average and, in the fourth grade, above it.

