Dr. Elizabeth Tipton is an Associate Professor of Statistics, a Faculty Fellow in the Institute for Policy Research, and the co-Director of the Statistics for Evidence Based Policy and Practice (STEPP) Center at Northwestern University. Her research focuses on the development of methods for improving generalizations from experiments, both through the design and analysis of field experiments and through the use of meta-analysis.
Dr. Rob Olsen is a Senior Fellow at the George Washington Institute of Public Policy at George Washington University. His research focuses on the generalizability of randomized trials in education. Olsen is the Principal Investigator of a study testing different sampling methods for selecting sites for impact studies, and he is the co-Principal Investigator for a study testing different regression methods for predicting the impacts of educational interventions in individual schools using data from multi-site randomized trials. Finally, he consults for national research organizations on how to design and implement impact studies for greater generalizability.
In studies that evaluate the impacts of educational interventions, educators and policymakers typically want to know whether an intervention improved outcomes. However, researchers cannot provide a simple yes/no answer because all impact estimates carry statistical uncertainty. Researchers have often used statistical significance and p-values to characterize this uncertainty, but these statistics are widely misinterpreted and cannot tell educators and policymakers how likely it is that the intervention had an effect.
This webinar lays out a Bayesian framework for interpreting impact estimates without the pitfalls of relying only on p-values. This framework allows researchers to calculate the probability that an intervention had a meaningful effect, given the impact estimate and prior evidence on similar interventions. The webinar will explain key concepts and introduce an easy-to-use Excel tool for applying this framework. With these concepts and tools, researchers can extract more accurate and interpretable lessons from impact findings to support informed decisions about educational interventions. The webinar is based on IES's recently released guide on this topic.
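To make the idea concrete, the calculation described above can be sketched with a normal prior (summarizing evidence from similar interventions, e.g., a meta-analysis) combined with a normal likelihood for the study's impact estimate. This is a minimal illustration of the general approach, not the IES guide's Excel tool; the function name and default threshold are illustrative assumptions.

```python
import math

def prob_meaningful_effect(impact_est, impact_se,
                           prior_mean, prior_sd, threshold=0.0):
    """Posterior probability that the true effect exceeds `threshold`,
    assuming a normal prior (from evidence on similar interventions)
    and a normal likelihood for the impact estimate."""
    prior_var = prior_sd ** 2
    est_var = impact_se ** 2
    # Normal-normal conjugate update: precisions (inverse variances) add,
    # and the posterior mean is a precision-weighted average.
    post_var = 1.0 / (1.0 / prior_var + 1.0 / est_var)
    post_mean = post_var * (prior_mean / prior_var + impact_est / est_var)
    # P(true effect > threshold) under the posterior normal distribution
    z = (threshold - post_mean) / math.sqrt(post_var)
    return 1.0 - 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Example: an estimated impact of 0.15 SD (SE = 0.08) combined with a
# prior centered at 0.05 SD (SD = 0.10) from hypothetical prior studies.
p = prob_meaningful_effect(0.15, 0.08, 0.05, 0.10)
```

Note how the posterior mean is pulled ("shrunk") from the raw estimate toward the prior mean, with the amount of shrinkage governed by the relative precision of the prior evidence and the study estimate.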
Dr. John Deke is an economist and senior fellow at Mathematica. His work has focused primarily on the statistical methodologies used in impact evaluations and systematic reviews, especially in K-12 education.
Dr. Mariel Finucane is a principal statistician at Mathematica. Her work uses Bayesian hierarchical modeling and tree-based methods to study social policies.
In the era of open science practices, researchers are more frequently being asked to share data from their studies. Multiple federal agencies and private funders have adopted policies requiring their grantees and contractors to share study data. Consistent with this priority and the Standards for Excellence in Education Research (SEER), the Institute of Education Sciences (IES) requires researchers to make data from IES-funded studies available to others, while also protecting the rights of study participants and meeting the requirements of data providers. Such policies, along with growing interest in open science practices, mean that many education researchers are looking for practical, achievable strategies for making their data available to others.
In this 90-minute webinar, Ruth Neild from Mathematica will present an overview of IES’s guide, Sharing Study Data: A Guide for Education Researchers, and a panel of researchers will share their expertise and experiences with three key aspects of sharing study data:
The session will include a panel discussion and audience Q&A.
Panelists:
High-quality impact studies of education interventions provide the strongest evidence about whether interventions improve academic outcomes. Yet information about whether and by how much a tested intervention improves outcomes is only part of the story. To learn why and how impact findings vary and to support the broader use of effective interventions, educators need to understand how, and under what conditions, an intervention was implemented. High-quality implementation research can build this understanding. To encourage this important work, the Institute of Education Sciences (IES) articulates, through the Standards for Excellence in Education Research (SEER), key recommendations for documenting treatment implementation and contrast.
In this 90-minute webinar, Carolyn Hill from MDRC and Lauren Scher from Mathematica will present an overview of IES's recently released guide, Conducting Implementation Research in Impact Studies of Education Interventions. They will be joined by a panel of researchers who will share their experiences conducting implementation research as part of high-quality impact studies of education interventions, including:
The session will include a panel discussion and audience Q&A.
Panelists: