SREE Webinars & Workshops

Researchers of Color Virtual Happy Hour
August 11, 2021, 1 PM EDT 

Register Here

Join SREE Researchers of Color at their first Virtual Happy Hour to learn a bit about who they are and what they hope to accomplish. Researchers of Color and allies across all career stages, disciplines, and industries are welcome to gather with us, explore relevant issues, and network.

SREE Researchers of Color (SROC) exists to create spaces for community, professional development, and networking among researchers of color who are part of the larger SREE community. 

 

Past Webinars

 

REES 101: A Guided Tour of Study Registration
Presenter: Jessaca Spybrook, Western Michigan University

Webinar Recording

 

Data Collection Methods for Cost-Effectiveness Analysis of Educational Interventions
Tuesday, May 18, 2021

Presenters: Rebecca Davis, University of Pennsylvania & Viviana Rodriguez, Columbia University

The Center for Benefit-Cost Studies of Education at the University of Pennsylvania is proud to partner with SREE to offer a webinar on data collection in cost analysis. Cost studies offer important context for effectiveness work and are increasingly required by funders, yet the “how to” of cost estimation remains unclear to many researchers. This webinar will offer clarity on the data collection phase of the cost estimation process. Using the ingredients method (Levin, McEwan, Belfield, Bowden, & Shand, 2018), we will explore data collection methods useful to researchers hoping to include cost estimation in their existing studies or in funding proposals. The webinar will cover how to develop a cost data collection plan, potential sources of data, and potential pitfalls to avoid. We will discuss how integrating cost data collection with other study elements can help researchers add cost estimation to a larger study efficiently. A preliminary understanding of the ingredients method is recommended but not required, and a brief introduction will be provided. Additional materials will be shared to help participants successfully plan each phase of their cost analysis.
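
To give a flavor of the arithmetic at the core of the ingredients method, the sketch below tallies costs as quantity times price for a handful of ingredients. Every ingredient name, quantity, and price here is a hypothetical illustration, not data from any actual study.

```r
# Minimal sketch of the ingredients-method arithmetic (Levin et al., 2018).
# All ingredients, quantities, and prices below are hypothetical examples.
ingredients <- data.frame(
  ingredient = c("Teacher time (hrs)", "Coach time (hrs)",
                 "Curriculum materials", "Facilities (hrs)"),
  quantity   = c(40, 10, 1, 40),     # amount used per site per year
  unit_price = c(35, 50, 1200, 15)   # price per unit, in dollars
)
ingredients$cost <- ingredients$quantity * ingredients$unit_price

total_cost_per_site <- sum(ingredients$cost)
n_students_per_site <- 100           # hypothetical enrollment per site
cost_per_student    <- total_cost_per_site / n_students_per_site

print(ingredients)
cat("Total cost per site:", total_cost_per_site,
    "\nCost per student:", cost_per_student, "\n")
```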

 

Panel Data Methods for Policy Evaluation in Education Research
April 20, 2021
Presenter: Avi Feller, University of California - Berkeley

Many important interventions and policy changes in education occur at an aggregate level, such as the level of the school, district, or state; prominent examples include school finance policies, curriculum development, and accountability changes. These policies are often difficult to evaluate experimentally, and education researchers instead rely on research designs based on repeated observations ("panel data") at the aggregate level. For example, we might estimate the impact of a new reading program using school-level average test scores at multiple time points surrounding its introduction. In this workshop, we will review the growing set of statistical tools for estimating causal effects with panel data of this form. We will first review common methods, such as difference-in-differences, fixed effects models, and comparative interrupted time series, as well as key conceptual issues, such as changes in measurement. We will then discuss complications that arise when treatment timing varies. Finally, we will briefly introduce some more recent methods that also incorporate matching and weighting. Throughout, we will use plenty of cartoons and bad jokes.
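
For a taste of the most familiar of these designs, the sketch below simulates school-level panel data and recovers a difference-in-differences estimate from a two-way fixed effects regression. The variables and data-generating values are illustrative assumptions, not drawn from any real evaluation.

```r
# Simulated school-by-year panel with a program adopted by half the schools.
set.seed(42)
n_schools <- 60; n_years <- 6
panel <- expand.grid(school = factor(1:n_schools), year = 1:n_years)
panel$treated <- panel$school %in% factor(1:(n_schools / 2))  # schools 1-30 adopt
panel$post    <- panel$year >= 4                              # program begins in year 4
panel$D       <- as.numeric(panel$treated & panel$post)

# Outcome: school effects + a secular year trend + a true impact of 0.20 SD
school_fx <- rnorm(n_schools, 0, 0.5)
year_fx   <- seq(0, 0.25, length.out = n_years)
panel$score <- school_fx[panel$school] + year_fx[panel$year] +
  0.20 * panel$D + rnorm(nrow(panel), 0, 0.3)

# Two-way fixed effects regression; the coefficient on D is the DiD estimate
fit <- lm(score ~ D + school + factor(year), data = panel)
coef(summary(fit))["D", ]
```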

 

Designing and Reporting Your Study to Facilitate Evidence Clearinghouse Reviews and Meta-Analysis
March 15, 2021
Presenter: Sandra Wilson, Abt Associates

Have you ever received a request for more information about your intervention research from someone doing a systematic review or meta-analysis? Would you like to learn more about how to report your study and its findings to facilitate inclusion in an evidence clearinghouse or meta-analysis? This 90-minute webinar will involve a comprehensive discussion of the types of study information needed by systematic reviews, meta-analyses, and evidence clearinghouses when reviewing intervention research.

The webinar will begin with a brief overview of systematic review and meta-analysis methods and their purposes. The presenter will then highlight the common types of information needed by evidence reviewers when identifying and locating studies, screening them for eligibility, assessing their quality, and extracting information about the study characteristics and findings, including technical information about study design and methods, study findings, and characteristics of interventions, comparison groups, study participants, and implementation strategies. The webinar will also review the variety of reporting guides and resources that are available for researchers to facilitate study reporting.
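
As a small illustration of why complete reporting matters, the sketch below computes a standardized mean difference (Hedges’ g), one quantity meta-analysts routinely must extract, from the group means, standard deviations, and sample sizes a study reports. All numbers are invented for illustration.

```r
# Group summary statistics a reviewer would need to extract from a report;
# all values below are invented for illustration.
m_t <- 52.3; sd_t <- 10.1; n_t <- 120   # treatment group
m_c <- 49.8; sd_c <- 10.6; n_c <- 115   # comparison group

# Pooled SD, Cohen's d, and the small-sample correction giving Hedges' g
sd_pool <- sqrt(((n_t - 1) * sd_t^2 + (n_c - 1) * sd_c^2) / (n_t + n_c - 2))
d  <- (m_t - m_c) / sd_pool
J  <- 1 - 3 / (4 * (n_t + n_c) - 9)
g  <- J * d
se_g <- sqrt((n_t + n_c) / (n_t * n_c) + g^2 / (2 * (n_t + n_c)))

cat("Hedges' g =", round(g, 3), " SE =", round(se_g, 3), "\n")
```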

 

Testing the Effectiveness of Strategies to Promote Research Use: Learning from Studies in Education and Mental Health Settings
March 5, 2021

Organizer: Ruth Neild, Mathematica
Speakers: Kimberly Becker, University of South Carolina; Bruce Chorpita, University of California - Los Angeles; Aubyn Stahmer, University of California - Davis
Discussant: Adam Gamoran, William T. Grant Foundation

Webinar Recording

This is the second webinar in a series focused on the use of research evidence in education. During this moderated discussion, two research teams will describe their studies that rigorously test the effectiveness of strategies for promoting research use in mental health and education settings.

This session, funded by the William T. Grant Foundation, is part of SREE’s virtual convening, Examining Education Research through the 2020 Lens.

 

Making Research Matter: Insights from Research on Research Use
February 10, 2021
Presenter: Vivian Tseng, William T. Grant Foundation

Webinar Recording     Presentation Slides

Many of us in the research community conduct research because we hope it will make a difference in policy or practice, and yet research often fails to have the kind of impact that we aspire to achieve. In this presentation, Vivian Tseng will discuss what we know from research on the use of research evidence in policy and practice. She will discuss when, how, and under what conditions research is used, in addition to what it takes to improve the use of research evidence. Vivian will draw upon the William T. Grant Foundation’s support for more than 60 studies on this topic over the past dozen years, as well as insights from studies in other countries and from sectors as diverse as environmental policy, health, and human services. Our hope is that what you learn from research on research use may help you: 1) develop your own studies of research use, and 2) inform your efforts to get your research used more frequently and productively in policy and practice.

 

Bayesian Interpretation of Impact Estimates from Education Evaluations
July 28, 2020
Presenters: John Deke and Mariel Finucane, Mathematica

Webinar Recording

This webinar will illustrate the pitfalls of misinterpreting statistical significance in evaluations of education interventions, describe BASIE (BAyeSian Interpretation of Estimates), an evidence-based alternative to p-values that assesses the probability that an intervention had a meaningful effect, and provide examples of BASIE in action, including a simple spreadsheet tool. The webinar will be appropriate for people without any familiarity with Bayesian methods, as well as those with some knowledge who are interested in learning about the use of Bayesian methods in educational evaluations. There will be an opportunity for Q&A at the end of the session.
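
For a feel for the underlying idea, the sketch below performs a simple normal-normal Bayesian update of the kind BASIE builds on: a study's impact estimate is combined with a prior reflecting impacts from past evaluations to yield the probability of a meaningful effect. The prior, estimate, and threshold are illustrative assumptions; the actual BASIE tool and its recommended priors may differ.

```r
# Illustrative normal-normal update in the spirit of BASIE; all numbers
# below are assumptions for this sketch, not BASIE's recommended values.
prior_mean <- 0.00; prior_sd <- 0.15    # prior from impacts in past studies
est <- 0.12; se <- 0.08                 # this study's estimate and std. error

# Conjugate update: the posterior is a precision-weighted average
post_var  <- 1 / (1 / prior_sd^2 + 1 / se^2)
post_mean <- post_var * (prior_mean / prior_sd^2 + est / se^2)
post_sd   <- sqrt(post_var)

# Probability the intervention had a meaningful effect (here, > 0.05 SD)
p_meaningful <- 1 - pnorm(0.05, mean = post_mean, sd = post_sd)
cat("Posterior mean:", round(post_mean, 3),
    " P(effect > 0.05):", round(p_meaningful, 2), "\n")
```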

John Deke is a senior fellow at Mathematica with 20 years of experience designing evaluations of education interventions. Mariel Finucane is a senior statistician at Mathematica who has led Bayesian analyses on evaluations spanning multiple fields, including health and education.

 

Proposing Cost and Cost-Effectiveness Analyses
July 10, 2020
Presenter: Brooks Bowden, University of Pennsylvania

Webinar Recording     Presentation Slides

This short session provides guidance and tips for those applying for IES research grants. The session builds upon basic knowledge of costs to provide examples of how to design a study to integrate a cost component and meet the quality standards set forth by the ingredients method (Levin et al., 2018) and the IES SEER standards. The session is tailored to the current IES RFA goal/research structure.

 

 

Designing Simulations for Power Analysis (and Other Things): 
A Hands-on Workshop Series Using R
Part 1: May 20 & 27, 2021
Part 2: June 3 & 10, 2021

Instructors: James E. Pustejovsky, University of Wisconsin - Madison & Luke Miratrix, Harvard University

Course Description: This course will cover how to design and program Monte Carlo simulations using R. Monte Carlo simulations are an essential tool of inquiry for quantitative methodologists and students of statistics, useful both for small-scale, informal investigations and for formal methodological research. As a practical example, simulations can be used to conduct power analyses for complex research designs such as multisite and cluster randomized trials (potentially with varying cluster sizes or attrition). Simulations are also critical for understanding the strengths and limitations of quantitative analytic methods. In many situations, more than one modeling approach is possible for addressing the same research question (or estimating the same target parameter). Simulations can be used to compare the performance of one approach against another, which is useful for informing the design of analytic plans (such as plans included in pre-registered study protocols). As an example of the type of question a researcher might encounter in designing an analytic plan: in the analysis of a multi-site experiment, what are the benefits and costs of using a model that allows for cross-site impact variation?

This course will cover best practices of simulation design and how to use simulation to be a more informed and effective quantitative analyst. We will show how simulation frameworks allow for rapid exploration of the impact of different design choices and data concerns, and how simulation can answer questions that are hard to answer using direct computation (e.g., with power calculators or mathematical formulas). Simulation can even give more accurate answers than “the math” in some cases: algebraic formulas based on asymptotic approximations may not “kick in” when sample sizes are moderate. This is a particular concern for hierarchical data structures with 20-40 clusters, as is typical of many large-scale randomized trials in education research.
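
As a taste of the approach, the sketch below estimates power for a cluster randomized trial by brute-force replication rather than an asymptotic formula. The design values (number of clusters, cluster size, intraclass correlation, effect size) are illustrative assumptions.

```r
# Simulation-based power for a two-arm cluster randomized trial, analyzed
# with a t-test on cluster means; all design values are illustrative.
set.seed(1)
simulate_trial <- function(n_clusters = 30, n_per = 25, icc = 0.15, effect = 0.25) {
  treat      <- rep(c(0, 1), each = n_clusters / 2)  # half the clusters treated
  cluster_fx <- rnorm(n_clusters, 0, sqrt(icc))      # between-cluster variation
  cluster_means <- sapply(1:n_clusters, function(j) {
    mean(effect * treat[j] + cluster_fx[j] + rnorm(n_per, 0, sqrt(1 - icc)))
  })
  t.test(cluster_means[treat == 1], cluster_means[treat == 0])$p.value
}

# Estimated power = share of simulated trials with p < .05
pvals <- replicate(2000, simulate_trial())
mean(pvals < 0.05)
```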

Course structure: The course will consist of four webinars, each 1.5 hours in length, delivered over four weeks. We will begin by describing a set of general principles for designing simulations and demonstrating how to implement those principles in code. Next, we will dive into how to think about data-generating processes as a core element of simulation. We will then give a standard recipe for designing and implementing multi-factor simulations (simulations that explore the roles of several factors at once). We will illustrate this design-and-build process by walking through (and modifying) a simulation for conducting a power analysis for multisite experiments. In this case study we will discuss how to build simulations component-wise to keep things orderly, how to standardize one’s models to keep different scenarios comparable, and how to visualize results to interpret and present findings. We will also introduce parts of the “tidyverse,” a suite of packages that can greatly ease the coding burden of this type of work. The course will be hands-on, with students running and modifying code to solve exercises throughout, so as to maximize the utility of the content. Small, optional “homework” assignments provided between sessions will involve studying, modifying, and adapting provided R code.
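
To give a flavor of the multi-factor pattern, the sketch below reuses the simulate_trial() function from the earlier power sketch and runs it over a grid of design factors in tidyverse style; the factor values are illustrative.

```r
# Multi-factor simulation: one row of results per combination of design
# factors. Assumes simulate_trial() from the sketch above is defined.
library(tidyverse)

design <- expand_grid(
  n_clusters = c(20, 30, 40),
  icc        = c(0.05, 0.15),
  effect     = c(0.20, 0.25)
)

results <- design %>%
  rowwise() %>%
  mutate(power = mean(replicate(
    500, simulate_trial(n_clusters = n_clusters, icc = icc, effect = effect)
  ) < 0.05)) %>%
  ungroup()

results   # estimated power for each design scenario
```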

Prior experience needed: Students should have some familiarity with R. At the minimum, you should know how to load data, plot your data, and run linear regressions. Ideally, you should also be comfortable working in RStudio or another integrated development environment.

 

Researchers of Color Brown Bag - QuantCrit
June 15, 2021
Presenter: Wendy Castillo, Princeton University

‘QuantCrit’ (Quantitative Critical Race Theory) is a rapidly developing approach that seeks to challenge and improve the use of statistical data in social research by applying the insights of Critical Race Theory (CRT). Scholars have adopted multiple different approaches to this task. This presentation of QuantCrit is intended to provide concrete strategies for more critical quantitative research and explore a range of questions that prompt users to be engaged critics, weighing the plausibility of the study, and questioning how the material was produced, analyzed, and presented.
This session is the first in a series that will be held quarterly. The purpose of the brown bag series is to give researchers of color and allies a forum in which to gather, explore relevant issues, and network.