Background...

All students enter school with a combination of "headwinds" and "tailwinds." Tailwinds are factors that make school easier for students, such as coming from a home with highly educated parents and economic stability, being a native English speaker, not having a disability, or being a member of the cultural majority. Each of these characteristics plays a role in helping a student experience success in school.

Headwinds, on the other hand, make school more difficult. They can include economic instability at home, parents with lower levels of education, having a disability, or still learning English. The more headwinds a student faces, the more difficulty they will have in maximizing their academic potential, and the more tailwinds they will need. In school, those tailwinds come in the form of high-quality instruction, support, and intervention.

The Academic Support Index, or ASI, quantifies these headwinds. A student's ASI is the sum of their headwinds, and it can also be read as a measure of the amount of support they will need to mitigate the impact of those educational headwinds. Students with a low ASI will likely need very little additional support beyond Tier 1 instruction; students with a higher ASI will likely need proportionally more Tier 2 and, in some cases, Tier 3 support.
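As a rough illustration of how an index like this can be computed, here is a minimal sketch in Python. The factor names and the equal weight of 1 per headwind are hypothetical; the actual ASI factors and scoring come from Stevens (2015) and are not reproduced here.

```python
# Hypothetical sketch of an Academic Support Index calculation.
# The factor names and equal weights below are illustrative only;
# the actual ASI uses a specific set of factors and weights (Stevens, 2015).

# Each headwind is recorded as a 0/1 indicator for a student.
HYPOTHETICAL_FACTORS = [
    "economic_instability",   # e.g., economic instability at home
    "low_parent_education",   # parents with lower levels of education
    "english_learner",        # still learning English
    "has_disability",         # identified disability
]

def academic_support_index(student: dict) -> int:
    """Sum the student's headwind indicators (1 point each in this sketch)."""
    return sum(int(student.get(factor, 0)) for factor in HYPOTHETICAL_FACTORS)

# Example: a student who is an English learner from an economically
# unstable home would score 2 on this illustrative index.
student = {"economic_instability": 1, "english_learner": 1}
print(academic_support_index(student))  # -> 2
```

In this sketch a higher total simply means more headwinds, which maps onto the idea above: a score of 0 or 1 suggests Tier 1 instruction is likely sufficient, while higher scores flag students who will likely need more Tier 2 or Tier 3 support.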

There is a strong relationship between the ASI and academic outcomes, including performance on assessments such as the SAT, Smarter Balanced Assessments, AP and IB exams, and kindergarten screeners, as well as grade point averages, rates of college eligibility, matriculation, and degree attainment. We have studied these effects across seven years of data and across urban, suburban, and rural schools. To date, over 400,000 students have been scored on the ASI. (See the featured post below for a list of papers and presentations on the ASI.)

Because the ASI reliably predicts student outcomes, you have the opportunity to interrupt that predictability by using the ASI to make sure you are identifying the right students for early intervention and support. With effective intervention, predictive analytics can become preventive analytics.

Friday, April 26, 2019

AERA Division H Outstanding Paper Award for Advances in Methodology

At the 2019 American Educational Research Association meeting in Toronto, our paper "Maximizing Assessment Performance of At-Risk Students Using the Academic Support Index to Engineer a Low Stress Testing Environment" was recognized by Division H (Research, Evaluation, and Assessment in Schools) in its Outstanding Paper competition, in the Advances in Methodology category.


Maximizing Assessment Performance of At-Risk Students Using the Academic Support Index to Engineer a Low Stress Testing Environment

Abstract
The chronic underperformance on standardized assessments of students identified as at-risk is foundational to racial and socioeconomic achievement gaps (Reardon, 2011). Testing students in academically heterogeneous groups has the potential to raise testing anxiety for mid- to low-performing students and negatively impact student performance (Cassady, 2002). Our study attempted to mitigate the impact of negative stereotypes students may have about themselves based on their academic status relative to their higher-achieving peers. We used the Academic Support Index (Stevens, 2015) to create academically homogeneous groups to engineer testing environments where concerns about comparisons should be lessened. We used a randomized controlled design to assign students to either the treatment or control groups. We confirmed homogeneity across groups for both historical academic performance (prior Smarter Balanced Assessment English Language Arts scores, 10th grade local assessment writing scores) and two psychosocial constructs (Academic Self-Perception and Motivation). The rate of students performing at grade level was higher for students randomly assigned to the treatment group (64%, n = 28) vs. the control group (28%, n = 32). Results were statistically significant (p = 0.004) and the effect size was substantial (Cohen’s d = 0.74). Post-assessment surveys provided further insight into how students experienced the testing environments. This study validated results from two prior experiments conducted in 2014 and 2015 (Stevens, 2015).
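For readers who want to see how statistics of this size follow from the reported group rates and sizes, the short sketch below recomputes a comparison of the two proportions. It assumes a two-proportion z-test and Cohen's h for proportions, which is not necessarily the exact analysis used in the paper; it is offered only as a hedged sanity check.

```python
# Hedged sanity check of the reported group comparison, assuming a
# two-proportion z-test and Cohen's h; the paper's actual analysis may differ.
from math import asin, sqrt
from statistics import NormalDist

# Reported in the abstract: 64% of n = 28 (treatment) vs. 28% of n = 32
# (control) performed at grade level.
n1, n2 = 28, 32
x1, x2 = round(0.64 * n1), round(0.28 * n2)   # 18 and 9 students
p1, p2 = x1 / n1, x2 / n2

# Two-proportion z-test with a pooled standard error.
pooled = (x1 + x2) / (n1 + n2)
se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se
p_value = 2 * (1 - NormalDist().cdf(z))       # two-sided

# Cohen's h, an effect size for the difference between two proportions.
h = 2 * asin(sqrt(p1)) - 2 * asin(sqrt(p2))

print(f"z = {z:.2f}, p = {p_value:.3f}, h = {h:.2f}")
```

Run as written, this gives a p-value of roughly 0.005 and an effect size of about 0.74, in the same range as the p = 0.004 and 0.74 effect size reported in the abstract.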

Read the full paper here.



Learn more about using the ASI framework in your school or district here.