Study Design Tools
The Generalizer Developers: Elizabeth Tipton, Katie Miller & Larry V. Hedges Purpose: This tool is helpful for selecting sites for experiments and for assessing the generalizability of findings from completed experiments (a simple covariate-balance diagnostic of the kind involved is sketched after this entry).
Suggested citations:
Funding: The Spencer Foundation Platform/Software: Web-based (Google Chrome preferred)
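The Generalizer's actual index is more sophisticated, but the basic idea of checking whether a sample of sites resembles an inference population can be illustrated with standardized mean differences on observed covariates. Below is a minimal, self-contained sketch: the covariate names, population values, and sample are all simulated assumptions, not data or code from the tool.

```python
import numpy as np

rng = np.random.default_rng(2)
covariates = ["pct_FRPL", "pct_ELL", "enrollment"]

# Hypothetical population of 10,000 schools described by three covariates.
population = rng.normal([50.0, 12.0, 500.0], [15.0, 8.0, 200.0],
                        size=(10_000, 3))
# A study "sample" of 40 schools drawn from that population.
sample = population[rng.choice(10_000, size=40, replace=False)]

# Standardized mean difference on each covariate: values near zero
# suggest the sample resembles the inference population.
for j, name in enumerate(covariates):
    smd = (sample[:, j].mean() - population[:, j].mean()) / population[:, j].std()
    print(f"{name:>10}: SMD = {smd:+.2f}")
```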
PowerUp! Developers: Nianbo Dong, Benjamin Kelcey, Rebecca Maynard & Jessaca Spybrook Purpose: This tool allows users to determine the optimal sample size required to achieve specified minimum detectable effect sizes. It also supports computation of the minimum detectable effect size for a specified level of statistical power and precision, given user inputs about the sample design and size. In both applications, the user is prompted to select the sample design (e.g., randomized controlled trial, interrupted time series, or regression discontinuity), the nature of clustering and blocking, and assumptions about the outcomes to be analyzed, the magnitude of intraclass correlations, and the number and explanatory power of covariates. The tool produces tables that summarize the sample design assumptions supplied by the user, as well as the tool-generated estimate of the minimum required sample size or the minimum detectable impact (a minimal version of this calculation is sketched after this entry). Suggested citations:
Funding: National Science Foundation [DGE-1437679, DGE-1437692, DGE-1437745] and Institute of Education Sciences [R305B090015] Platform/Software: Windows & Mac, requires Microsoft Excel Note: Winner of the 2013 AERA Division H (Research, Evaluation, and Assessment in Schools) Outstanding Publication Award, Advances in Methodology, for the article introducing the tool.
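As a concrete illustration of the minimum-detectable-effect-size calculation PowerUp! automates, here is a sketch of the standard formula for a two-level cluster randomized trial (following Bloom, 1995, and Dong & Maynard, 2013). It covers only one of the many designs the tool supports; the helper name and every parameter value in the example are illustrative assumptions, not the tool's own code.

```python
from scipy.stats import t


def mdes_2level_crt(J, n, rho, R2_1=0.0, R2_2=0.0, P=0.5,
                    alpha=0.05, power=0.80, g=0):
    """MDES for a two-level cluster randomized trial with J clusters
    of n individuals each.

    rho  : intraclass correlation (share of variance between clusters)
    R2_1 : variance explained by individual-level covariates
    R2_2 : variance explained by cluster-level covariates
    P    : proportion of clusters assigned to treatment
    g    : number of cluster-level covariates
    """
    df = J - g - 2
    # Multiplier for a two-tailed test at the given alpha and power.
    M = t.ppf(1 - alpha / 2, df) + t.ppf(power, df)
    variance = (rho * (1 - R2_2) / (P * (1 - P) * J)
                + (1 - rho) * (1 - R2_1) / (P * (1 - P) * J * n))
    return M * variance ** 0.5


# Example: 60 schools of 50 students each, ICC = 0.20, and one strong
# cluster-level covariate (e.g., an aggregated pretest).
print(round(mdes_2level_crt(J=60, n=50, rho=0.20, R2_2=0.60, g=1), 3))  # ~0.23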
Optimal Design (Software for Multi-Level and Longitudinal Research) Developers: Stephen Raudenbush, Jessaca Spybrook, Howard Bloom, Richard Congdon, Carolyn Hill & Andres Martínez Purpose: Optimal Design allows users to conduct a power analysis and compute minimum detectable effect sizes for studies of individual- and group-level interventions. The accompanying manual describes how to conduct a power analysis for individual and group randomized trials. It includes an overview of each design, the appropriate statistical model for each design, and the calculation of statistical power and minimum detectable effect sizes (the core power calculation is sketched after this entry). It also includes empirical estimates of design parameters for planning group randomized trials, as well as power for meta-analysis and optimal sample allocation for two-level cluster randomized trials. Suggested citations:
Funding: William T. Grant Foundation Platform/Software: Windows
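The forward calculation Optimal Design performs (given an assumed effect size, how likely is this design to detect it?) can be sketched for the simplest case: a balanced two-level cluster randomized trial with no covariates. The noncentral-t approximation below is standard, but the helper name and all numbers are illustrative, and the software itself covers many more designs.

```python
from scipy.stats import nct, t


def power_2level_crt(delta, J, n, rho, alpha=0.05):
    """Approximate power to detect a standardized effect `delta` in a
    balanced two-level CRT with J clusters of n individuals each."""
    df = J - 2
    # Standard error of the standardized impact estimate (no covariates,
    # half of the clusters treated).
    se = (4 * (rho / J + (1 - rho) / (J * n))) ** 0.5
    ncp = delta / se                       # noncentrality parameter
    t_crit = t.ppf(1 - alpha / 2, df)      # two-tailed critical value
    return 1 - nct.cdf(t_crit, df, ncp)


# How power grows with the number of clusters for delta = 0.25, ICC = 0.15.
for J in (20, 40, 60, 80):
    print(J, round(power_2level_crt(0.25, J=J, n=40, rho=0.15), 2))
```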
Online Intraclass Correlation Database Developers: Eric C. Hedberg and Larry V. Hedges Purpose: Educational experiments often involve assignment of aggregate units such as schools or school districts (statistical clusters) to treatments. Experiments that do so are called cluster randomized experiments. The sensitivity (statistical power, precision of treatment effect estimates, and minimum detectable effect size) of cluster randomized experiments depends not only on statistical significance level, sample size, and effect size, but also on the variance decomposition among levels of aggregation (as indicated by intraclass correlation, or ICC, values at each level of aggregation) and the effectiveness of any covariates used to explain variation at different levels of aggregation (as indicated by R² values at each level of aggregation). We call the ICC and R² values design parameters because values of these parameters are necessary to design a cluster randomized experiment with adequate sensitivity (the ICC is illustrated in the sketch after this entry). This database provides empirical estimates of design parameters for two- and three-level cluster randomized trials that use academic achievement as an outcome variable. These estimates are available for the nation as a whole (based on surveys with national probability samples) and for selected states (based on those states' longitudinal data systems, which are essentially exhaustive samples). Suggested citations:
Funding: National Science Foundation [0129365, 0815295] and Institute of Education Sciences [R305D110032] Platform/Software: Web-based
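For readers new to these design parameters, the ICC is simply the share of outcome variance that sits between clusters. The sketch below simulates balanced school data with a known ICC and recovers it with the classic one-way ANOVA estimator; this illustrates the concept only and is not the estimation procedure used to build the database.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 100 schools of 30 students each with a true ICC of 0.20:
# between-school variance 0.20, within-school variance 0.80.
J, n, icc_true = 100, 30, 0.20
school_effects = rng.normal(0.0, icc_true ** 0.5, size=(J, 1))
scores = school_effects + rng.normal(0.0, (1 - icc_true) ** 0.5, size=(J, n))

# One-way ANOVA estimator for balanced clusters.
msb = n * scores.mean(axis=1).var(ddof=1)   # between-school mean square
msw = scores.var(axis=1, ddof=1).mean()     # within-school mean square
icc_hat = (msb - msw) / (msb + (n - 1) * msw)
print(round(icc_hat, 3))   # should land near 0.20
```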
MOSAIC Evidence Synthesis Tools Developers: Martyna Citkowicz, Charlie Ebersole, Karthik Jallawaram, Megha Joshi, Laura Michaelson, David Miller, Joshua Polanin, Joe Taylor, & Ryan Williams from the Methods of Synthesis and Integration Center (MOSAIC) at the American Institutes for Research (contributors listed alphabetically by last name). Purpose: The Methods of Synthesis and Integration Center (MOSAIC) hosts a number of tools, ranging from a collaborative software program for collecting and coding study information to interactive displays of data from completed meta-analyses, including evidence gap maps, box plots, and traditional forest plots, that help users interpret and translate meta-analytic findings (the core computation behind a forest plot is sketched after this entry). Explore these interactive data tools on MOSAIC’s Tools page, and stay tuned: the team is working on adding more datasets and tools. Suggested Citations: See individual tools. Funding: Institute of Education Sciences [R305A170146], National Science Foundation [EHR-2000672], and American Institutes for Research Platform/Software: Web-based
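As a taste of the arithmetic behind a traditional forest plot, the sketch below pools study effects with an inverse-variance (fixed-effect) weighted average. MOSAIC's interactive tools operate on real meta-analytic datasets and support richer models and displays; the effect sizes and standard errors here are invented for illustration.

```python
# Study effect sizes (e.g., Hedges' g) and their standard errors; these
# numbers are made up for illustration.
effects = [0.12, 0.30, 0.05, 0.22]
ses = [0.08, 0.15, 0.06, 0.10]

# Inverse-variance weights: more precise studies count for more.
weights = [1 / se ** 2 for se in ses]
pooled = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
pooled_se = (1 / sum(weights)) ** 0.5
print(f"pooled effect = {pooled:.3f} (SE = {pooled_se:.3f})")
```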
Assessing ELA Curriculum Shifts: A Practical Guide for Measurement and Progress Monitoring Developers: The Annenberg Institute’s EdInstruments team and the Research Partnership for Professional Learning (RPPL) Purpose: The Assessing ELA Curriculum Shifts toolkit is a free, user-friendly set of data collection instruments and recommendations for monitoring progress during English Language Arts (ELA) curriculum shifts in grades 3–12. The toolkit includes six survey scales and observation rubrics that can be used to assess the implementation of high-quality instructional materials (HQIM) by collecting data about school and system conditions, professional learning, instructional practice, teacher beliefs and mindsets, and student outcomes. Suggested Citation:
Funding: Charles and Lynn Schusterman Family Philanthropies and the Bill & Melinda Gates Foundation Platform/Software: Web-based, compatible with most browsers; individual tools downloadable as PDFs
Impact Analysis Tools
RCT-YES Developers: Peter Schochet, Carol Razafindrakoto, Carlo Caci, Mason DeCamillis & Matthew Jacobus Purpose: The Institute of Education Sciences (IES) has launched a new tool that can make it easier and more cost-effective for states and school districts to evaluate the impact of their programs. RCT-YES is free, user-friendly software that assists those with a basic understanding of statistics and research design in analyzing data and reporting results from randomized controlled trials (RCTs) and other types of evaluation designs. RCT-YES was developed by Mathematica Policy Research, Inc. under a contract from IES' National Center for Education Evaluation and Regional Assistance. While the software has a simple interface and requires no knowledge of programming, it does not sacrifice rigor: RCT-YES uses state-of-the-art, design-based statistical methods to analyze data (a bare-bones version of the underlying estimator is sketched after this entry). For more information on RCT-YES, visit www.rct-yes.com. Suggested citations:
Background: Schochet, P. Z. (2015). Statistical theory for the RCT-YES software: Design-based causal inference for RCTs (NCEE 2015–4011). Washington, DC: U.S. Department of Education, Institute of Education Sciences, National Center for Education Evaluation and Regional Assistance, Analytic Technical Assistance and Development. Accessed January 22, 2016, at https://ies.ed.gov/ncee/pubs/20154011/pdf/20154011.pdf. Funding: Institute of Education Sciences Platform/Software: Windows; requires R or Stata to be installed on the user's computer.
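For intuition about what "design-based" analysis means here, the sketch below computes the simplest Neyman-style impact estimate: a difference in treatment and control means with a conservative variance that relies only on randomization, applied to simulated data. This is not RCT-YES code and omits the blocking, clustering, and weighting the software handles; see Schochet (2015), cited above, for the actual theory.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(1)
N = 400
# Randomly assign half of the sample to treatment.
treat = rng.permutation(np.repeat([1, 0], N // 2))
y = 0.5 + 0.2 * treat + rng.normal(size=N)   # simulated outcome, true impact 0.2

y1, y0 = y[treat == 1], y[treat == 0]
impact = y1.mean() - y0.mean()
# Conservative (Neyman) variance: sum of the two group-mean variances.
se = (y1.var(ddof=1) / len(y1) + y0.var(ddof=1) / len(y0)) ** 0.5
p_value = 2 * norm.sf(abs(impact / se))
print(f"impact = {impact:.3f}, se = {se:.3f}, p = {p_value:.3f}")
```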
Cost, Cost-Effectiveness & Benefit-Cost Analysis Tools
CostOut Developers: Fiona Hollands, Barbara Hanisch-Cerda, Henry Levin, Clive Belfield, Amritha Menon, Robert Shand, Yilin Pan, Ipek Bakir, & Henan Cheng Purpose: CostOut facilitates the estimation of costs and cost-effectiveness of educational and other social programs. It is primarily designed for researchers, analysts, educational administrators, and policymakers. CostOut is based on the “ingredients method” (illustrated in the sketch after this entry) and includes a database of around 700 national average prices of educational resources. Users may also build their own databases of local or foreign-currency prices and can customize the inflation and geographical indices. CostOut allows multiple programs to be compared in one analysis. Suggested citations:
Funding: Institute of Education Sciences Platform/Software: Web-based
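The "ingredients method" that CostOut implements amounts to enumerating every resource a program uses, pricing each one, totaling the costs, and relating cost to effectiveness. The sketch below shows that bookkeeping in miniature; every ingredient, price, enrollment figure, and effect size is hypothetical, and CostOut itself adds price databases, inflation and geographic adjustments, and multi-program comparisons.

```python
# Hypothetical ingredients list: (name, quantity, unit price in dollars).
ingredients = [
    ("teacher time (hours)",     1200, 45.00),
    ("coach time (hours)",        300, 60.00),
    ("student materials (sets)",  500, 12.50),
    ("facilities (room-days)",     90, 80.00),
]

total_cost = sum(qty * price for _, qty, price in ingredients)
n_students = 500
effect_size = 0.15   # assumed program impact in standard deviation units

cost_per_student = total_cost / n_students
ce_ratio = cost_per_student / effect_size   # dollars per student per SD gained
print(f"total cost:       ${total_cost:,.0f}")
print(f"cost per student: ${cost_per_student:,.2f}")
print(f"CE ratio:         ${ce_ratio:,.2f} per SD of impact")
```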