FastBridge Research Foundations

By: Rachel Brown, Ph.D., NCSP

FastBridge Learning staff are often asked about the research base for our assessments. This is an easy question to answer because many peer-reviewed, published articles document the strong research foundations of all FastBridge tools. Before FastBridge Learning makes any assessment available, substantial research is conducted to design, evaluate, and validate the tool.

Initial Development

All FastBridge Learning content starts in a research lab at the University of Minnesota or another university. These labs are directed by leading educational researchers with strong track records of contributing important resources for use in schools. Ideas for new assessments and other tools come from reviews of the available research indicating what resources and tools schools need. An idea is translated into a research study by developing a detailed research plan that defines the materials, participants, procedures, and how outcomes will be measured. In some cases, small “pilot” studies are conducted before a major study begins in order to test the plan and identify any needed changes. As needed, data from the pilot studies are used to improve the research plan.

Controlled Studies

Once the research plan is verified, one or more controlled studies are conducted according to the details it outlines. These studies are called “controlled” because all of the materials and steps are applied in precise ways to test whether the specific method defined in the study leads to the anticipated outcome. Controlled studies are important because they show whether an idea works under ideal conditions. Not all controlled studies turn out as the researchers planned. When the results are not what the researchers expected, a new study that adjusts specific details might be developed and conducted. When the results show that the new idea helped students and teachers, the findings are written up in a draft article and submitted to a peer-reviewed journal.

Peer-reviewed articles. Articles submitted for peer review are sent to two or more researchers who are experts in the study’s topic. These experts are peers in the research community who read the draft article and provide detailed feedback about how well it explains the research methods and outcomes. The reviews are then sent to the authors with instructions for revising and improving the article. Not all submitted articles are published; only those that demonstrate very high quality research and important outcomes appear in journals. All FastBridge Learning assessments have followed the research process outlined here, and the findings have been published in one or more peer-reviewed articles. All of the published articles to date are listed at the end of this article.

Lab Status

After a new assessment has been studied and confirmed to be effective in one or more peer-reviewed research articles, it is submitted to FastBridge Learning for possible inclusion in the online system. Not all innovations are included right away, and sometimes a new assessment is added in “Lab” status. Lab status means that the assessment has been researched and verified in controlled studies but not with large numbers of students in everyday classroom settings. Lab tools are available for trial use by FastBridge Learning customers but are not yet endorsed as the only or primary indicator for instructional decision making. Usually, Lab status lasts one school year and allows FastBridge Learning to collect data about how the tool works in everyday school settings. We use the data collected during the Lab phase to decide when a tool is ready to be used as an endorsed assessment in the FastBridge Learning system. For more information about Lab status, see the blog on this topic.

Endorsed Assessment

When the data collected during the Lab phase of a new assessment indicate that it works as intended for students and teachers, it is released from Lab and becomes an endorsed assessment. Endorsed assessments are those that have multiple sources of high quality research indicating that they help teachers identify and solve problems that students have in school. In some cases, research about endorsed assessments has been conducted by others in addition to the primary researchers. For example, some FastBridge Learning assessments have been included in research conducted by doctoral students, faculty, and state departments of education. FastBridge Learning welcomes the use of its measures in ongoing research. Investigators are encouraged to contact FastBridge Learning at help@fastbridge.org to learn if their study might be eligible for low- or no-cost access to our assessments.

A cornerstone of the FastBridge Learning commitment to high quality assessments and resources for schools is that each must have quality research documenting how it works. FastBridge Learning uses a multi-step research process to review all candidate system measures. Through controlled studies, a Lab phase, and finally endorsed use, FastBridge Learning offers educators highly effective assessments for assisting students. Our measures are designed to be used as part of a problem-solving process embedded within a multi-tier system of support (MTSS). When used in this manner, FastBridge Learning assessments offer research to results.

References

Behavior

Chafouleas, S. M., Briesch, A. M., Riley-Tillman, T. C., Christ, T. J., Black, A. C., & Kilgus, S. P. (2010). An investigation of the generalizability and dependability of direct behavior rating single item scales (DBR-SIS) to measure academic engagement and disruptive behavior of middle school students. Journal of School Psychology, 48, 219-246. doi:10.1016/j.jsp.2010.02.001

Chafouleas, S. M., Christ, T. J., Riley-Tillman, T. C., Briesch, A. M., & Chanese, J. A. M. (2007). Generalizability and dependability of direct behavior ratings to assess social behavior of preschoolers. School Psychology Review, 36, 63-.

Chafouleas, S. M., Kilgus, S. P., Jaffery, R., Riley-Tillman, T. C., Welsh, M., & Christ, T. J. (2013). Direct behavior rating as a school-based behavior screener for elementary and middle grades. Journal of School Psychology, 51, 367-. doi:10.1016/j.jsp.2013.04.002

Christ, T. J., Nelson, P. M., Van Norman, E. R., Chafouleas, S. M., & Riley-Tillman, T. C. (2014). Direct behavior rating: An evaluation of time-series interpretations as consequential validity. School Psychology Quarterly, 29, 157-170. doi:10.1037/spq0000029

Christ, T. J., Riley-Tillman, T. C., Chafouleas, S. M., & Boice, C. H. (2010). Direct behavior rating (DBR): Generalizability and dependability across raters and observations. Educational and Psychological Measurement, 70, 825-843. doi:10.1177/0013164410366695

Christ, T. J., Riley-Tillman, T. C., Chafouleas, S., & Jaffery, R. (2011). Direct behavior rating: An evaluation of alternate definitions to assess classroom behaviors. School Psychology Review, 40, 181-.

Eklund, K., Kilgus, S., von der Embse, N.P., Beardmore, M., & Tanner, N. (in press). Use of universal screening scores to predict distal academic and behavioral outcomes: A multi-level approach. Psychological Assessment.

Fabiano, G.A., Vujnovic, R., Pelham, W.E., Waschbusch, D.A., Massetti, G.M., Yu, J., Pariseau, M.E., Naylor, J., Robins, M.L., Carnefix, T., Greiner, A.R., & Volker, M. (2010). Enhancing the effectiveness of special education programming for children with ADHD using a daily report card. School Psychology Review, 39, 219-239.

Fabiano, G.A., Vujnovic, R., Naylor, J., Pariseau, M., & Robins, M.L. (2009). An investigation of the technical adequacy of a daily behavior report card (DBRC) for monitoring progress of students with attention-deficit/hyperactivity disorder in special education placements. Assessment for Effective Intervention, 34, 231-241.

Kilgus, S. P., Chafouleas, S. M., & Riley-Tillman, T. C. (2013). Development and initial validation of the Social and Academic Behavior Risk Screener for elementary grades. School Psychology Quarterly, 28, 210-226. doi:10.1037/spq0000024

Kilgus, S. P., & Eklund, K. (2016). Consideration of base rates within universal screening for behavioral and emotional risk: A novel procedural framework. School Psychology Forum, 10, 120-130.

Kilgus, S.P., Eklund, K.R., von der Embse, N.P., Taylor, C.N., & Sims, W.A. (2016). Psychometric defensibility of the Social, Academic, and Emotional Behavior Risk Screener (SAEBRS) Teacher Rating Scale and multiple gating procedure within elementary and middle school samples. Journal of School Psychology, 58, 21-39. doi:10.1016/j.jsp.2016.07.001

Kilgus, S.P., Kazmerski, J.S., Taylor, C.N., & von der Embse, N.P. (2016). Intervention Selection Profile—Function (ISP-Fx): A brief and direct method for functional behavioral assessment. School Psychology Quarterly. doi:10.1037/spq0000156

Kilgus, S. P., Riley-Tillman, T. C., Chafouleas, S. M., Christ, T. J., & Welsh, M. E. (2014). Direct behavior rating as a school-based behavior universal screener: Replication across sites. Journal of School Psychology, 52, 63-82. doi:10.1016/j.jsp.2013.11.002

Kilgus, S.P., Sims, W., von der Embse, N.P., & Riley-Tillman, T.C. (2015). Confirmation of models for interpretation and use of the Social and Academic Behavior Risk Screener. School Psychology Quarterly, 30, 335-352. doi:10.1037/spq0000087

Kilgus, S. P., Sims, W., von der Embse, N.P., & Taylor, C. (2016). Psychometric defensibility of the Social, Academic, and Emotional Behavior Risk Screener (SAEBRS) teacher rating scale. Assessment for Effective Intervention. doi:10.1177/1534508415623269

Kilgus, S.P., Bowman, N.A., Christ, T.J., & Taylor, C.N. (2017). Predicting academics via behavior within an elementary sample: An evaluation of the Social, Academic, and Emotional Behavior Risk Screener (SAEBRS). Psychology in the Schools, 54, 246-260.

Nelson, P.M., & Christ, T.J. (2016). Reliability and agreement in student ratings of the class environment. School Psychology Quarterly. Advance online publication.

Nelson, P.M., Demers, J., & Christ, T.J. (2014). The Responsive Environmental Assessment for Classroom Teaching (REACT): Exploring the dimensionality of student perceptions of the instructional environment. School Psychology Quarterly, 29, 182-197.

Nelson, P.M., Hall, G.E., & Christ, T.J. (2016). The consistency of student perceptions of the class environment. Journal of Applied School Psychology, 32, 254-267.

Nelson, P.M., Reddy, L., Dudek, C., & Lekwa, A. (in press). Student and observer ratings of the class environment: A preliminary investigation of convergence. School Psychology Quarterly.

Nelson, P.M., Ysseldyke, J.E., & Christ, T.J. (2015). Student perceptions of the classroom environment: Actionable feedback as a guide for improving core instruction. Assessment for Effective Intervention, 1-12.

Riley-Tillman, T. C., Christ, T. J., Chafouleas, S. M., Boice-Mallach, C. H., & Briesch, A. (2011). The impact of observation duration on the accuracy of data obtained from direct behavior rating (DBR). Journal of Positive Behavior Interventions, 13, 119-128. doi:10.1177/1098300710361954

Skaar, N. R., Christ, T. J., & Jacobucci, R. (2014). Measuring adolescent prosocial and health risk behavior in schools: Initial development of a screening measure. School Mental Health, 6, 137-149. doi:10.1007/s12310-014-9123-y

von der Embse, N.P., Iaccarino, S., Mankin, A., Kilgus, S., & Magen, E. (accepted with revisions). Development and factor structure of the Social, Academic, and Emotional Behavior Risk Screener Student Rating Scale (SAEBRS-SRS). Assessment for Effective Intervention.

von der Embse, N.P., Pendergast, L., Kilgus, S. P., & Eklund, K. (2016). Evaluating the applied use of a mental health screener: Structural validity of the Social, Academic, and Emotional Behavior Risk Screener (SAEBRS). Psychological Assessment. doi:10.1037/pas0000253

Vujnovic, R.K., Fabiano, G.A., Pariseau, M.E., & Naylor, J. (2013). Parameters of adherence to a yearlong daily report card (DRC) intervention for students with Attention-Deficit Hyperactivity Disorder (ADHD). Journal of Educational and Psychological Consultation, 23, 140-163.

Math

Christ, T. J., Johnson‐Gros, K. N., & Hintze, J. M. (2005). An examination of alternate assessment durations when assessing multiple‐skill computational fluency: The generalizability and dependability of curriculum‐based outcomes within the context of educational decisions. Psychology in the Schools, 42, 615-622. doi:10.1002/pits.20107

Christ, T. J., Scullin, S., Tolbize, A., & Jiban, C. L. (2008). Implications of recent research: Curriculum-based measurement of math computation. Assessment for Effective Intervention, 33, 198-205. doi:10.1177/1534508407313480

Christ, T. J., & Schanding, G. T., Jr. (2007). Curriculum-based measures of computational skills: A comparison of group performance in novel, reward, and neutral conditions. School Psychology Review, 36, 147-.

Christ, T. J., & Vining, O. (2006). Curriculum-based measurement procedures to develop multiple-skill mathematics computation probes: Evaluation of random and stratified stimulus-set arrangements. School Psychology Review, 35, 387-.

Hintze, J. M., Christ, T. J., & Keller, L. A. (2002). The generalizability of CBM survey-level mathematics assessments: Just how many samples do we need? School Psychology Review, 31, 514-.

Reading

Ardoin, S. P., & Christ, T. J. (2009). Curriculum-based measurement of oral reading: Standard errors associated with progress monitoring outcomes from DIBELS, AIMSweb, and an experimental passage set. School Psychology Review, 38, 266-.

Ardoin, S. P., Christ, T. J., Morena, L. S., Cormier, D. C., & Klingbeil, D. A. (2013). A systematic review and summarization of the recommendations and research surrounding curriculum-based measurement of oral reading fluency (CBM-R) decision rules. Journal of School Psychology, 51, 1-18. doi:10.1016/j.jsp.2012.09.004

Ardoin, S. P., Eckert, T. L., Christ, T. J., White, M. J., Morena, L. S., January, S. A., & Hine, J. F. (2013). Examining variance in reading comprehension among developing readers: Words in context (curriculum-based measurement in reading) versus words out of context. School Psychology Review, 42, 243-.

Ardoin, S. P., Williams, J. C., Christ, T. J., Klubnik, C., & Wellborn, C. (2010). Examining readability estimates’ predictions of students’ oral reading rate: Spache, Lexile, and FORCAST. School Psychology Review, 39, 277-.

Christ, T. J. (2006). Short-term estimates of growth using curriculum-based measurement of oral reading fluency: Estimating standard error of the slope to construct confidence intervals. School Psychology Review, 35, 128-.

Christ, T. J., & Ardoin, S. P. (2009). Curriculum-based measurement of oral reading: Passage equivalence and probe-set development. Journal of School Psychology, 47, 55-75. doi:10.1016/j.jsp.2008.09.004

Christ, T. J., & Ardoin, S. P. (2015). Commentary on new metrics, measures, and uses for fluency data. Reading and Writing, 28, 151-157. doi:10.1007/s11145-014-9513-4

Christ, T. J., & Silberglitt, B. (2007). Estimates of the standard error of measurement for curriculum-based measures of oral reading fluency. School Psychology Review, 36, 130-.

Christ, T. J., Silberglitt, B., Yeo, S., & Cormier, D. (2010). Curriculum-based measurement of oral reading: An evaluation of growth rates and seasonal effects among students served in general and special education. School Psychology Review, 39, 447-.

Christ, T. J., White, M. J., Ardoin, S. P., & Eckert, T. L. (2013). Curriculum-based measurement in reading: Consistency and validity across best, fastest, and question reading conditions. School Psychology Review, 42, 415-436.

Christ, T. J., Zopluoglu, C., Long, J. D., & Monaghen, B. D. (2012). Curriculum-based measurement of oral reading: Quality of progress monitoring outcomes. Exceptional Children, 78, 356-373.

Christ, T. J., Zopluoglu, C., Monaghen, B. D., & Van Norman, E. R. (2013). Curriculum-based measurement of oral reading: Multi-study evaluation of schedule, duration, and dataset quality on progress monitoring outcomes. Journal of School Psychology, 51, 19-57. doi:10.1016/j.jsp.2012.11.001

Hintze, J. M., & Christ, T. J. (2004). An examination of variability as a function of passage variance in CBM progress monitoring. School Psychology Review, 33, 204-.

January, S.-A. A., Ardoin, S. P., Christ, T. J., Eckert, T. L., & White, M. J. (in press). Evaluating the interpretations and use of curriculum-based measurement in reading and word lists for universal screening in first and second grade. School Psychology Review.

Kendeou, P., McMaster, K. L., & Christ, T. J. (2016). Reading comprehension: Core components and processes. Policy Insights from the Behavioral and Brain Sciences. doi:10.1177/2372732215624707

Thornblad, S. C., & Christ, T. J. (2014). Curriculum-based measurement of reading: Is 6 weeks of daily progress monitoring enough? School Psychology Review, 43, 19-.

Van Norman, E. R., Christ, T. J., & Zopluoglu, C. (2013). The effects of baseline estimation on the reliability, validity, and precision of CBM-R growth estimates. School Psychology Quarterly, 28, 239-255. doi:10.1037/spq0000023

Yeo, S., Fearrington, J. Y., & Christ, T. J. (2012). Relation between CBM-R and CBM-mR slopes: An application of latent growth modeling. Assessment for Effective Intervention, 37, 147-158. doi:10.1177/1534508411420129

Dr. Rachel Brown is FastBridge Learning’s Senior Academic Officer. She previously served as Associate Professor of Educational Psychology at the University of Southern Maine. Her research focuses on effective academic assessment and intervention, including multi-tier systems of support, and she has authored several books on Response to Intervention and MTSS.


