SREE-IES Webinar Series

Applying Standards for Excellence in Education Research

The Institute of Education Sciences (IES) has launched the Standards for Excellence in Education Research (SEER) to make education research more transparent, actionable, and focused on consequential outcomes. To support this effort, IES is producing a series of practical guides for researchers on how to implement SEER to improve the quality and relevance of their studies.

SREE is excited to partner with IES to sponsor webinars, each covering the recommendations from a specific SEER-aligned guide. These webinars are free, open to the public, and relevant to all researchers who seek to ensure their studies are useful to policymakers and educators. This webinar series is also co-sponsored by APPAM.

 


PREVIOUS WEBINARS

Webinar 1: Enhancing the Generalizability of Impact Studies in Education

May 16, 2022

Webinar Recording

Generalizability in education research refers to how well the results of a study apply to broader populations of interest to educators and policymakers. However, in studies that evaluate the impacts of educational interventions, the generalizability of study findings is often unclear because each study is conducted with only a specific sample. Educators and policymakers may question whether such findings can help them make decisions about how best to support the populations they serve.

This webinar lays out recommendations that researchers can follow to enhance generalizability when planning impact studies in education. It walks researchers through the key steps involved in identifying a target population of schools, developing a list of schools in this population, and selecting a representative sample. It also provides an overview of steps to recruit schools into the study, assess and adjust for differences between the sample and population, and report generalizability appropriately. The webinar is based on IES’s recently released guide on this topic.
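As a rough illustration of the "assess and adjust" step, the sketch below (not taken from the guide; the school covariates, data, and weighting approach are hypothetical) estimates each school's probability of appearing in the study sample from observed characteristics and then reweights the sample toward the target population using inverse-probability-of-selection weights.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

# Hypothetical data: 'schools' stands in for the full target population, with a
# flag marking the schools that ended up in the impact study's sample.
rng = np.random.default_rng(0)
schools = pd.DataFrame({
    "pct_frpl": rng.uniform(0, 100, 500),        # percent free/reduced-price lunch
    "enrollment": rng.integers(100, 2000, 500),  # total student enrollment
    "urban": rng.integers(0, 2, 500),            # 1 = urban locale
})
# Simulated selection: higher-FRPL schools are (artificially) more likely to join.
p_select = 0.05 + 0.003 * schools["pct_frpl"].to_numpy()
schools["in_sample"] = rng.binomial(1, p_select)

covariates = ["pct_frpl", "enrollment", "urban"]

# Step 1: estimate each school's probability of being in the sample from covariates.
model = LogisticRegression(max_iter=1000).fit(schools[covariates], schools["in_sample"])
p_hat = model.predict_proba(schools[covariates])[:, 1]

# Step 2: weight sampled schools by the inverse of that probability, so schools
# underrepresented relative to the population count for more.
mask = schools["in_sample"].to_numpy() == 1
sample = schools[mask].copy()
sample["weight"] = 1.0 / p_hat[mask]

# Step 3: compare population, unweighted-sample, and weighted-sample means to see
# how far reweighting moves the sample profile toward the population.
for cov in covariates:
    print(f"{cov:>10}: pop={schools[cov].mean():8.1f}  "
          f"sample={sample[cov].mean():8.1f}  "
          f"weighted={np.average(sample[cov], weights=sample['weight']):8.1f}")
```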

Dr. Elizabeth Tipton is an Associate Professor of Statistics, a Faculty Fellow in the Institute for Policy Research, and the co-Director of the Statistics for Evidence Based Policy and Practice (STEPP) Center at Northwestern University. Her research focuses on the development of methods for improving generalizations from experiments, both through the design and analysis of field experiments and through the use of meta-analysis.
Dr. Rob Olsen is a Senior Fellow at the George Washington Institute of Public Policy at George Washington University. His research focuses on the generalizability of randomized trials in education. Olsen is the Principal Investigator of a study testing different sampling methods for selecting sites for impact studies, and he is the co-Principal Investigator for a study testing different regression methods for predicting the impacts of educational interventions in individual schools using data from multi-site randomized trials. Finally, he consults for national research organizations on how to design and implement impact studies for greater generalizability.

 

Webinar 2: The BASIE (BAyeSian Interpretation of Estimates) Framework for Interpreting Findings from Impact Evaluations

June 13, 2022

Webinar Recording

In studies that evaluate the impacts of educational interventions, educators and policymakers typically want to know whether an intervention improved outcomes. However, researchers cannot provide a simple yes/no answer because all impact estimates have statistical uncertainty. Researchers have often used statistical significance and p-values to assess this uncertainty, but these statistics are often misinterpreted and cannot tell educators and policymakers how likely it is that the intervention had an effect.

This webinar lays out a Bayesian framework for interpreting impact estimates without the pitfalls of relying only on p-values. This framework allows researchers to calculate the probability an intervention had a meaningful effect, given the impact estimate and prior evidence on similar interventions. The webinar explains key concepts and introduces a convenient, easy-to-use Excel tool for applying this framework. With these concepts and tools, researchers can extract more accurate and interpretable lessons from impact findings to support informed decisions about educational interventions. The webinar is based on IES’s recently released guide on this topic.
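As a rough illustration of the kind of calculation this framework supports, the sketch below performs a generic normal-normal Bayesian update: a prior summarizing evidence on similar interventions is combined with a study's impact estimate and standard error to report the probability of a meaningful effect. This is not the IES Excel tool, and all numbers are hypothetical placeholders.

```python
from scipy.stats import norm

# Hypothetical inputs (effect size units).
prior_mean, prior_sd = 0.03, 0.10   # prior evidence on similar interventions
estimate, std_error = 0.15, 0.08    # this study's impact estimate and standard error
meaningful_effect = 0.05            # smallest effect considered meaningful

# Conjugate normal-normal posterior: a precision-weighted average of the prior
# mean and the study's impact estimate.
prior_prec = 1 / prior_sd**2
data_prec = 1 / std_error**2
post_var = 1 / (prior_prec + data_prec)
post_mean = post_var * (prior_prec * prior_mean + data_prec * estimate)

# Probability the true effect exceeds the meaningful threshold, given the
# impact estimate and the prior evidence.
prob_meaningful = 1 - norm.cdf(meaningful_effect, loc=post_mean, scale=post_var**0.5)
print(f"Posterior mean effect: {post_mean:.3f}")
print(f"P(effect > {meaningful_effect}): {prob_meaningful:.2f}")
```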

Dr. John Deke is an economist and senior fellow at Mathematica. His work has focused primarily on the statistical methodologies used in impact evaluations and systematic reviews, especially in K-12 education.

Dr. Mariel Finucane is a principal statistician at Mathematica. Her work uses Bayesian hierarchical modeling and tree-based methods to study social policies.

 

Webinar 3: Sharing Study Data: Implications for Education Researchers

March 28, 2:00pm - 3:30pm ET

Webinar Recording

In the era of open science practices, researchers are more frequently being asked to share data from their studies. Multiple federal agencies and private funders have added policies and requirements for their grantees and contractors to share study data. Consistent with this priority and the Standards for Excellence in Education Research (SEER), the Institute of Education Sciences (IES) requires researchers to make data from IES-funded studies available to others, while also protecting the rights of study participants and meeting the requirements of data providers. Such policies, along with growing interest in open science practices, mean that many education researchers are looking for practical, achievable strategies for making their data available to others.

In this 90-minute webinar, Ruth Neild from Mathematica will present an overview of IES’s guide, Sharing Study Data: A Guide for Education Researchers, and a panel of researchers will share their expertise and experiences with three key aspects of sharing study data:

  • Managing disclosure risks
  • Documenting and organizing data for other researchers
  • Depositing data for access by other researchers

The session will include a panel discussion and audience Q&A.

Panelists:

  • John Czajka is a Mathematica Senior Fellow Emeritus. Dr. Czajka has expertise in disclosure risk management and extensive experience with statistical applications of program administrative data, the analysis of survey data from large national samples, panel studies, and specialized surveys.
  • Jessica Logan is an Associate Professor of Special Education at Vanderbilt University’s Peabody College. Dr. Logan is a quantitative methodologist who focuses on improving the data management and data quality practices of researchers in the education sciences, encouraging transparency in science, and improving education researchers’ statistical conclusion validity.
  • John Diamond is MDRC’s lead for Data Management and Integrity and a senior research associate for the Postsecondary Education policy area. He chairs MDRC’s Data Integrity Board, Data Management Initiative, and Data Security Team, and coordinates MDRC’s data sharing with ICPSR at the University of Michigan.

 

Webinar 4: Conducting Implementation Research in Impact Studies of Education Interventions 

August 16, 2:00pm - 3:30pm ET

Webinar Recording

High-quality impact studies of education interventions provide the strongest evidence about whether interventions improve academic outcomes. Yet information about whether and by how much a tested intervention improves outcomes is only part of the story. To learn why and how impact findings vary and to support the broader use of effective interventions, educators need to understand how, and under what conditions, an intervention was implemented. High-quality implementation research can contribute to this understanding. To encourage this important work, the Institute of Education Sciences (IES) articulates, through the Standards for Excellence in Education Research (SEER), key recommendations for documenting treatment implementation and contrast.

In this 90-minute webinar, Carolyn Hill from MDRC and Lauren Scher from Mathematica will present an overview of IES’s recently released guide, Conducting Implementation Research in Impact Studies of Education Interventions. They will be joined by a panel of researchers who will share their experiences conducting implementation research as part of high-quality impact studies of education interventions, including:

  • Addressing some common issues when measuring intervention fidelity
  • Mitigating challenges when measuring implementation contrast
  • Integrating implementation and impacts

The session will include a panel discussion and audience Q&A.

Panelists: 

  • Barbara Goodson, Principal Associate at Abt Associates, has led numerous implementation and impact evaluations of educational programs for children from birth through grade 12. She leads the implementation evaluation technical assistance for grantees in the U.S. Department of Education’s Education Innovation and Research (EIR) program and has worked with more than 300 grantees to measure and report on fidelity of implementation.
  • Catherine Darrow, Associate Director of Research at J-PAL North America, leads J-PAL’s research activities producing evidence to better understand whether and how programs work to reduce poverty in North America. She has designed and directed rigorous impact and implementation evaluations of school-based programs serving children through young adults.
  • Howard Bloom, Former MDRC Chief Social Scientist, led MDRC’s development and application of experimental and quasi-experimental methods from 1999 to 2017. For the previous two decades, he taught research methods, program evaluation, and applied statistics to public policy graduate students at Harvard and NYU, where he received the university-wide Great Teacher Award in 1993. For his many methodological contributions, Dr. Bloom received the Peter Rossi Award from APPAM in 2010 and was inducted into the National Academy of Education in 2019.